| aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1612.02742 | 2951803737 | Hand detection is essential for many hand-related tasks, e.g., parsing hand pose and understanding gestures, which are extremely useful for robotics and human-computer interaction. However, hand detection in uncontrolled environments is challenging due to the flexibility of the wrist joint and cluttered backgrounds. We propose a deep learning based approach which detects hands and calibrates in-plane rotation under supervision at the same time. To guarantee recall, we propose a context-aware proposal generation algorithm which significantly outperforms selective search. We then design a convolutional neural network (CNN) which handles object rotation explicitly to jointly solve the object detection and rotation estimation tasks. Experiments show that our method achieves better results than state-of-the-art detection models on widely used benchmarks such as the Oxford and Egohands databases. We further show that rotation estimation and classification can mutually benefit each other. | These methods can be classified into two types: 1) first estimate the object pose, and then predict the object label of the image derotated with the estimated pose; Rowley, Baluja, and Kanade @cite_27 proposed a seminal rotation-invariant neural network-based face detection system. The system employs multiple networks: the first is a rotation network which processes each input window to determine its orientation, and then uses this information to prepare the window for one or more detector networks. 2) simultaneous pose estimation and detection. He, Sigal, and Sclaroff @cite_14 proposed a structured formulation to jointly perform object detection and pose estimation. Fidler et al. @cite_8 proposed 3D object detection and viewpoint estimation with a deformable 3D cuboid model. As far as we know, less attention has been paid to using convolutional neural networks to jointly model object detection and rotation estimation for 2D images. | {
"cite_N": [
"@cite_27",
"@cite_14",
"@cite_8"
],
"mid": [
"",
"344254576",
"2111087635"
],
"abstract": [
"",
"Object detection and pose estimation are interdependent problems in computer vision. Many past works decouple these problems, either by discretizing the continuous pose and training pose-specific object detectors, or by building pose estimators on top of detector outputs. In this paper, we propose a structured kernel machine approach to treat object detection and pose estimation jointly in a mutually beneficial way. In our formulation, a unified, continuously parameterized, discriminative appearance model is learned over the entire pose space. We propose a cascaded discrete-continuous algorithm for efficient inference, and give effective online constraint generation strategies for learning our model using structural SVMs. On three standard benchmarks, our method performs better than, or on par with, state-of-the-art methods in the combined task of object detection and pose estimation.",
"This paper addresses the problem of category-level 3D object detection. Given a monocular image, our aim is to localize the objects in 3D by enclosing them with tight oriented 3D bounding boxes. We propose a novel approach that extends the well-acclaimed deformable part-based model [1] to reason in 3D. Our model represents an object class as a deformable 3D cuboid composed of faces and parts, which are both allowed to deform with respect to their anchors on the 3D box. We model the appearance of each face in fronto-parallel coordinates, thus effectively factoring out the appearance variation induced by viewpoint. Our model reasons about face visibility patterns called aspects. We train the cuboid model jointly and discriminatively and share weights across all aspects to attain efficiency. Inference then entails sliding and rotating the box in 3D and scoring object hypotheses. While for inference we discretize the search space, the variables are continuous in our model. We demonstrate the effectiveness of our approach in indoor and outdoor scenarios, and show that our approach significantly outperforms the state-of-the-art in both 2D [1] and 3D object detection [2]."
]
} |
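The "first derotate, then detect" pipeline of @cite_27 summarized above can be illustrated with a toy sketch: an orientation estimator restricted to 90-degree in-plane rotations, followed by canonicalization of the window before it would be passed to a detector. The template-matching estimator below is a hypothetical stand-in for the rotation network, not the method from the paper.

```python
import numpy as np

def estimate_rotation(window, template):
    """Return k in {0, 1, 2, 3} such that rotating `window` by k*90 degrees
    counter-clockwise best matches the upright `template` (toy estimator)."""
    scores = [np.sum(np.rot90(window, k) * template) for k in range(4)]
    return int(np.argmax(scores))

def derotate(window, template):
    """Stage 1: estimate in-plane orientation; stage 2: canonicalize the
    window before handing it to the (not shown) detector network."""
    k = estimate_rotation(window, template)
    return np.rot90(window, k)

# Toy example: an asymmetric "upright" pattern.
template = np.array([[0., 1., 0.],
                     [0., 1., 0.],
                     [1., 1., 1.]])
rotated = np.rot90(template, -1)      # simulate a 90-degree in-plane rotation
recovered = derotate(rotated, template)
```

Here `derotate` recovers the upright window, so a downstream classifier only ever sees canonical orientations.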
1612.02462 | 2562368761 | We develop normalisation by evaluation (NBE) for dependent types based on presheaf categories. Our construction is formulated in the metalanguage of type theory using quotient inductive types. We use a typed presentation hence there are no preterms or realizers in our construction, and every construction respects the conversion relation. NBE for simple types uses a logical relation between the syntax and the presheaf interpretation. In our construction, we merge the presheaf interpretation and the logical relation into a proof-relevant logical predicate. We prove normalisation, completeness, stability and decidability of definitional equality. Most of the constructions were formalized in Agda. | Our proof of injectivity of type formers is reminiscent of that in @cite_6 , and our proof of decidability of normal forms is similar to that of @cite_13 . | {
"cite_N": [
"@cite_13",
"@cite_6"
],
"mid": [
"2962805038",
"2043740265"
],
"abstract": [
"Dependently typed programs contain an excessive amount of static terms which are necessary to please the type checker but irrelevant for computation. To separate static and dynamic code, several static analyses and type systems have been put forward. We consider Pfenning's type theory with irrelevant quantification which is compatible with a type-based notion of equality that respects eta-laws. We extend Pfenning's theory to universes and large eliminations and develop its meta-theory. Subject reduction, normalization and consistency are obtained by a Kripke model over the typed equality judgement. Finally, a type-directed equality algorithm is described whose completeness is proven by a second Kripke model.",
"Decidability of definitional equality and conversion of terms into canonical form play a central role in the meta-theory of a type-theoretic logical framework. Most studies of definitional equality are based on a confluent, strongly normalizing notion of reduction. Coquand has considered a different approach, directly proving the correctness of a practical equivalence algorithm based on the shape of terms. Neither approach appears to scale well to richer languages with, for example, unit types or subtyping, and neither provides a notion of canonical form suitable for proving adequacy of encodings. In this article, we present a new, type-directed equivalence algorithm for the LF type theory that overcomes the weaknesses of previous approaches. The algorithm is practical, scales to richer languages, and yields a new notion of canonical form sufficient for adequate encodings of logical systems. The algorithm is proved complete by a Kripke-style logical relations argument similar to that suggested by Coquand. Crucially, both the algorithm itself and the logical relations rely only on the shapes of types, ignoring dependencies on terms."
]
} |
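The core evaluate/reify loop of normalisation by evaluation can be sketched for the untyped lambda calculus in a few lines. This is a toy illustration only: it uses untyped preterms with de Bruijn indices and does not capture the typed, presheaf-based construction of the paper.

```python
# Terms use de Bruijn indices: ("var", i), ("lam", body), ("app", f, a).

def evaluate(term, env):
    """Evaluate a term into the semantic domain: Python closures for
    lambdas, plus neutral values ("nvar", "napp") for stuck terms."""
    tag = term[0]
    if tag == "var":
        return env[term[1]]
    if tag == "lam":
        return ("clo", lambda v: evaluate(term[1], [v] + env))
    f = evaluate(term[1], env)            # tag == "app"
    a = evaluate(term[2], env)
    if f[0] == "clo":
        return f[1](a)
    return ("napp", f, a)

def reify(value, depth):
    """Read a semantic value back into a beta-normal term, generating
    fresh neutral variables at binders (de Bruijn levels -> indices)."""
    tag = value[0]
    if tag == "clo":
        return ("lam", reify(value[1](("nvar", depth)), depth + 1))
    if tag == "nvar":
        return ("var", depth - value[1] - 1)
    return ("app", reify(value[1], depth), reify(value[2], depth))

def normalise(term):
    return reify(evaluate(term, []), 0)

# (lam x. x) (lam x. lam y. x) normalises to lam x. lam y. x
I = ("lam", ("var", 0))
K = ("lam", ("lam", ("var", 1)))
nf = normalise(("app", I, K))
```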
1612.02706 | 2584522665 | Standard approaches in entity identification hard-code boundary detection and type prediction into labels (e.g., John B-PER Smith I-PER) and then perform Viterbi. This has two disadvantages: 1. the runtime complexity grows quadratically in the number of types, and 2. there is no natural segment-level representation. In this paper, we propose a novel neural architecture that addresses these disadvantages. We frame the problem as multitasking, separating boundary detection and type prediction but optimizing them jointly. Despite its simplicity, this architecture performs competitively with fully structured models such as BiLSTM-CRFs while scaling linearly in the number of types. Furthermore, by construction, the model induces type-disambiguating embeddings of predicted mentions. | Our work is directly inspired by who demonstrate that a simple neural architecture based on BiLSTMs achieves state-of-the-art performance on NER with no external features. They propose two models. The first makes structured prediction of NER labels with a CRF loss (LSTM-CRF) using the conventional BIO-label scheme. The second, which performs slightly worse, uses a shift-reduce framework mirroring transition-based dependency parsing @cite_6 . While the latter also scales linearly in the number of types and produces embeddings of predicted mentions, our approach is quite different. We frame the problem as multitasking and do not need the stack buffer data structure. Semi-Markov models @cite_1 @cite_9 explicitly incorporate the segment structure but are computationally intensive (quadratic in the sentence length). Multitasking has been shown to be effective in numerous previous works @cite_5 @cite_2 @cite_8 . This is especially true with neural networks, which greatly simplify joint optimization across multiple objectives. Most of these works consider multitasking across different problems. 
In contrast, we decompose a single problem (NER) into two natural subtasks and perform them jointly. Particularly relevant in this regard is the parsing model of which multitasks edge prediction and classification. | {
"cite_N": [
"@cite_8",
"@cite_9",
"@cite_1",
"@cite_6",
"@cite_2",
"@cite_5"
],
"mid": [
"2301095666",
"2141099517",
"2287914047",
"181643614",
"2308486447",
"2158899491"
],
"abstract": [
"We present a simple and effective scheme for dependency parsing which is based on bidirectional-LSTMs (BiLSTMs). Each sentence token is associated with a BiLSTM vector representing the token in its sentential context, and feature vectors are constructed by concatenating a few BiLSTM vectors. The BiLSTM is trained jointly with the parser objective, resulting in very effective feature extractors for parsing. We demonstrate the effectiveness of the approach by applying it to a greedy transition-based parser as well as to a globally optimized graph-based parser. The resulting parsers have very simple architectures, and match or surpass the state-of-the-art accuracies on English and Chinese.",
"Models for many natural language tasks benefit from the flexibility to use overlapping, non-independent features. For example, the need for labeled data can be drastically reduced by taking advantage of domain knowledge in the form of word lists, part-of-speech tags, character n-grams, and capitalization patterns. While it is difficult to capture such inter-dependent features with a generative probabilistic model, conditionally-trained models, such as conditional maximum entropy models, handle them well. There has been significant work with such models for greedy sequence modeling in NLP (Ratnaparkhi, 1996; , 1998).",
"We introduce segmental recurrent neural networks (SRNNs) which define, given an input sequence, a joint probability distribution over segmentations of the input and labelings of the segments. Representations of the input segments (i.e., contiguous subsequences of the input) are computed by encoding their constituent tokens using bidirectional recurrent neural nets, and these \"segment embeddings\" are used to define compatibility scores with output labels. These local compatibility scores are integrated using a global semi-Markov conditional random field. Both fully supervised training -- in which segment boundaries and labels are observed -- as well as partially supervised training -- in which segment boundaries are latent -- are straightforward. Experiments on handwriting recognition and joint Chinese word segmentation POS tagging show that, compared to models that do not explicitly represent segments such as BIO tagging schemes and connectionist temporal classification (CTC), SRNNs obtain substantially higher accuracies.",
"In this paper, we propose a method for analyzing word-word dependencies in a deterministic bottom-up manner using Support Vector Machines. We experimented with dependency trees converted from Penn treebank data, and achieved over 90% accuracy of word-word dependency. Though the result is a little worse than the most up-to-date phrase structure based parsers, it looks satisfactorily accurate considering that our parser uses no information from phrase structures.",
"We present a deep hierarchical recurrent neural network for sequence tagging. Given a sequence of words, our model employs deep gated recurrent units on both character and word levels to encode morphology and context information, and applies a conditional random field layer to predict the tags. Our model is task independent, language independent, and feature engineering free. We further extend our model to multi-task and cross-lingual joint training by sharing the architecture and parameters. Our model achieves state-of-the-art results in multiple languages on several benchmark tasks including POS tagging, chunking, and NER. We also demonstrate that multi-task and cross-lingual joint training can improve the performance in various cases.",
"We propose a unified neural network architecture and learning algorithm that can be applied to various natural language processing tasks including part-of-speech tagging, chunking, named entity recognition, and semantic role labeling. This versatility is achieved by trying to avoid task-specific engineering and therefore disregarding a lot of prior knowledge. Instead of exploiting man-made input features carefully optimized for each task, our system learns internal representations on the basis of vast amounts of mostly unlabeled training data. This work is then used as a basis for building a freely available tagging system with good performance and minimal computational requirements."
]
} |
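The decomposition described in this row, boundary detection separated from type prediction, can be illustrated by a small helper that splits conventional BIO labels into the two subtask label sequences (a hypothetical sketch, not the authors' code):

```python
def split_bio(labels):
    """Decompose BIO labels such as 'B-PER' into two parallel tag
    sequences: boundary tags (B/I/O) and type tags (PER/... or None)."""
    boundaries, types = [], []
    for lab in labels:
        if lab == "O":
            boundaries.append("O")
            types.append(None)
        else:
            boundary, ent_type = lab.split("-", 1)
            boundaries.append(boundary)
            types.append(ent_type)
    return boundaries, types

b, t = split_bio(["B-PER", "I-PER", "O", "B-LOC"])
```

The boundary vocabulary stays fixed at {B, I, O} regardless of how many entity types exist, which is the source of the linear scaling in the number of types.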
1612.02706 | 2584522665 | Standard approaches in entity identification hard-code boundary detection and type prediction into labels (e.g., John B-PER Smith I-PER) and then perform Viterbi. This has two disadvantages: 1. the runtime complexity grows quadratically in the number of types, and 2. there is no natural segment-level representation. In this paper, we propose a novel neural architecture that addresses these disadvantages. We frame the problem as multitasking, separating boundary detection and type prediction but optimizing them jointly. Despite its simplicity, this architecture performs competitively with fully structured models such as BiLSTM-CRFs while scaling linearly in the number of types. Furthermore, by construction, the model induces type-disambiguating embeddings of predicted mentions. | LSTMs @cite_10 , and other variants of recurrent neural networks such as GRUs @cite_3 , have recently been wildly successful in various NLP tasks @cite_12 @cite_8 @cite_3 . Since there are many detailed descriptions of LSTMs available, we omit a precise definition. For our purposes, it is sufficient to treat an LSTM as a mapping @math that takes an input vector @math and a state vector @math to output a new state vector @math . | {
"cite_N": [
"@cite_10",
"@cite_12",
"@cite_3",
"@cite_8"
],
"mid": [
"",
"2949240516",
"1924770834",
"2301095666"
],
"abstract": [
"",
"State-of-the-art named entity recognition systems rely heavily on hand-crafted features and domain-specific knowledge in order to learn effectively from the small, supervised training corpora that are available. In this paper, we introduce two new neural architectures---one based on bidirectional LSTMs and conditional random fields, and the other that constructs and labels segments using a transition-based approach inspired by shift-reduce parsers. Our models rely on two sources of information about words: character-based word representations learned from the supervised corpus and unsupervised word representations learned from unannotated corpora. Our models obtain state-of-the-art performance in NER in four languages without resorting to any language-specific knowledge or resources such as gazetteers.",
"In this paper we compare different types of recurrent units in recurrent neural networks (RNNs). Especially, we focus on more sophisticated units that implement a gating mechanism, such as a long short-term memory (LSTM) unit and a recently proposed gated recurrent unit (GRU). We evaluate these recurrent units on the tasks of polyphonic music modeling and speech signal modeling. Our experiments revealed that these advanced recurrent units are indeed better than more traditional recurrent units such as tanh units. Also, we found GRU to be comparable to LSTM.",
"We present a simple and effective scheme for dependency parsing which is based on bidirectional-LSTMs (BiLSTMs). Each sentence token is associated with a BiLSTM vector representing the token in its sentential context, and feature vectors are constructed by concatenating a few BiLSTM vectors. The BiLSTM is trained jointly with the parser objective, resulting in very effective feature extractors for parsing. We demonstrate the effectiveness of the approach by applying it to a greedy transition-based parser as well as to a globally optimized graph-based parser. The resulting parsers have very simple architectures, and match or surpass the state-of-the-art accuracies on English and Chinese."
]
} |
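Treating an LSTM as a mapping that takes an input vector and a state vector to a new state vector, a single step with the standard gate equations can be written out in plain NumPy. The weights below are random placeholders, not trained parameters, and the (input, forget, output, candidate) stacking order is just one common convention.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One step of a standard LSTM cell.  W, U, b stack the input, forget,
    output, and candidate parameters along the first axis (one ordering)."""
    z = W @ x + U @ h_prev + b            # (4*n,) pre-activations
    n = h_prev.shape[0]
    i = sigmoid(z[0 * n:1 * n])           # input gate
    f = sigmoid(z[1 * n:2 * n])           # forget gate
    o = sigmoid(z[2 * n:3 * n])           # output gate
    g = np.tanh(z[3 * n:4 * n])           # candidate cell state
    c = f * c_prev + i * g                # new cell state
    h = o * np.tanh(c)                    # new hidden state
    return h, c

rng = np.random.default_rng(0)
d_in, d = 5, 4
W = rng.normal(size=(4 * d, d_in))
U = rng.normal(size=(4 * d, d))
b = np.zeros(4 * d)
h, c = lstm_step(rng.normal(size=d_in), np.zeros(d), np.zeros(d), W, U, b)
```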
1612.02559 | 2585027717 | We consider the problem of data augmentation, i.e., generating artificial samples to extend a given corpus of training data. Specifically, we propose attributed-guided augmentation (AGA) which learns a mapping that allows to synthesize data such that an attribute of a synthesized sample is at a desired value or strength. This is particularly interesting in situations where little data with no attribute annotation is available for learning, but we have access to a large external corpus of heavily annotated samples. While prior works primarily augment in the space of images, we propose to perform augmentation in feature space instead. We implement our approach as a deep encoder-decoder architecture that learns the synthesis function in an end-to-end manner. We demonstrate the utility of our approach on the problems of (1) one-shot object recognition in a transfer-learning setting where we have no prior knowledge of the new classes, as well as (2) object-based one-shot scene recognition. As external data, we leverage 3D depth and pose information from the SUN RGB-D dataset. Our experiments show that attribute-guided augmentation of high-level CNN features considerably improves one-shot recognition performance on both problems. | As a representative of the first group, Krizhevsky al @cite_15 leverage a set of label-preserving transformations, such as patch extraction + reflections, and PCA-based intensity transformations, to increase training sample size. Similar techniques are used by Zeiler and Fergus @cite_4 . In @cite_12 , Chatfield and Zisserman demonstrate that the augmentation techniques of @cite_15 are not only beneficial for training deep architectures, but shallow learning approaches equally benefit from such and schemes. | {
"cite_N": [
"@cite_15",
"@cite_4",
"@cite_12"
],
"mid": [
"",
"2952186574",
"1994002998"
],
"abstract": [
"",
"Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we address both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. We also perform an ablation study to discover the performance contribution from different model layers. This enables us to find model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets.",
"The latest generation of Convolutional Neural Networks (CNN) have achieved impressive results in challenging benchmarks on image recognition and object detection, significantly raising the interest of the community in these methods. Nevertheless, it is still unclear how different CNN methods compare with each other and with previous state-of-the-art shallow representations such as the Bag-of-Visual-Words and the Improved Fisher Vector. This paper conducts a rigorous evaluation of these new techniques, exploring different deep architectures and comparing them on a common ground, identifying and disclosing important implementation details. We identify several useful properties of CNN-based representations, including the fact that the dimensionality of the CNN output layer can be reduced significantly without having an adverse effect on performance. We also identify aspects of deep and shallow methods that can be successfully shared. In particular, we show that the data augmentation techniques commonly applied to CNN-based methods can also be applied to shallow methods, and result in an analogous performance boost. Source code and models to reproduce the experiments in the paper is made publicly available."
]
} |
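The label-preserving transformations mentioned above (patch extraction and reflections) amount to a few lines of array manipulation; a minimal sketch, with the crop size and flip probability chosen arbitrarily:

```python
import numpy as np

def augment(image, crop, rng):
    """Label-preserving augmentation: random crop + random horizontal flip.
    `image` is H x W x C; the class label is unchanged by either transform."""
    h, w = image.shape[:2]
    top = rng.integers(0, h - crop + 1)
    left = rng.integers(0, w - crop + 1)
    patch = image[top:top + crop, left:left + crop]
    if rng.random() < 0.5:
        patch = patch[:, ::-1]            # horizontal reflection
    return patch

rng = np.random.default_rng(42)
img = rng.random((32, 32, 3))
samples = [augment(img, 24, rng) for _ in range(8)]
```

Each call yields a slightly different view of the same labeled example, multiplying the effective training-set size.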
1612.02559 | 2585027717 | We consider the problem of data augmentation, i.e., generating artificial samples to extend a given corpus of training data. Specifically, we propose attributed-guided augmentation (AGA) which learns a mapping that allows to synthesize data such that an attribute of a synthesized sample is at a desired value or strength. This is particularly interesting in situations where little data with no attribute annotation is available for learning, but we have access to a large external corpus of heavily annotated samples. While prior works primarily augment in the space of images, we propose to perform augmentation in feature space instead. We implement our approach as a deep encoder-decoder architecture that learns the synthesis function in an end-to-end manner. We demonstrate the utility of our approach on the problems of (1) one-shot object recognition in a transfer-learning setting where we have no prior knowledge of the new classes, as well as (2) object-based one-shot scene recognition. As external data, we leverage 3D depth and pose information from the SUN RGB-D dataset. Our experiments show that attribute-guided augmentation of high-level CNN features considerably improves one-shot recognition performance on both problems. | In the second category of guided-augmentation techniques, many approaches have recently been proposed. In @cite_23 , , Charalambous and Bharath employ guided-augmentation in the context of gait recognition. The authors suggest to simulate synthetic gait video data (obtained from avatars) with respect to various confounding factors (such as clothing, hair, etc.) to extend the training corpus. Similar in spirit, Rogez and Schmid @cite_29 propose an image-based synthesis engine for augmenting existing 2D human pose data by photorealistic images with greater pose variability. This is done by leveraging 3D motion capture (MoCap) data. 
In @cite_11 , Peng et al. also use 3D data, in the form of CAD models, to render synthetic images of objects (with varying pose, texture, background) that are then used to train CNNs for object detection. It is shown that synthetic data is beneficial, especially in situations where few (or no) training instances are available, but 3D CAD models are. Su et al. @cite_3 follow a similar pipeline of rendering images from 3D models for viewpoint estimation, however with substantially more synthetic data obtained, e.g., by deforming existing 3D models before rendering. | {
"cite_N": [
"@cite_29",
"@cite_3",
"@cite_23",
"@cite_11"
],
"mid": [
"2467838519",
"1591870335",
"2542381826",
"2211115409"
],
"abstract": [
"This paper addresses the problem of 3D human pose estimation in the wild. A significant challenge is the lack of training data, i.e., 2D images of humans annotated with 3D poses. Such data is necessary to train state-of-the-art CNN architectures. Here, we propose a solution to generate a large set of photorealistic synthetic images of humans with 3D pose annotations. We introduce an image-based synthesis engine that artificially augments a dataset of real images with 2D human pose annotations using 3D Motion Capture (MoCap) data. Given a candidate 3D pose our algorithm selects for each joint an image whose 2D pose locally matches the projected 3D pose. The selected images are then combined to generate a new synthetic image by stitching local image patches in a kinematically constrained manner. The resulting images are used to train an end-to-end CNN for full-body 3D pose estimation. We cluster the training data into a large number of pose classes and tackle pose estimation as a K-way classification problem. Such an approach is viable only with large training sets such as ours. Our method outperforms the state of the art in terms of 3D pose estimation in controlled environments (Human3.6M) and shows promising results for in-the-wild images (LSP). This demonstrates that CNNs trained on artificial images generalize well to real images.",
"Object viewpoint estimation from 2D images is an essential task in computer vision. However, two issues hinder its progress: scarcity of training data with viewpoint annotations, and a lack of powerful features. Inspired by the growing availability of 3D models, we propose a framework to address both issues by combining render-based image synthesis and CNNs (Convolutional Neural Networks). We believe that 3D models have the potential in generating a large number of images of high variation, which can be well exploited by deep CNN with a high learning capacity. Towards this goal, we propose a scalable and overfit-resistant image synthesis pipeline, together with a novel CNN specifically tailored for the viewpoint estimation task. Experimentally, we show that the viewpoint estimation from our pipeline can significantly outperform state-of-the-art methods on PASCAL 3D+ benchmark.",
"",
"Crowdsourced 3D CAD models are becoming easily accessible online, and can potentially generate an infinite number of training images for almost any object category.We show that augmenting the training data of contemporary Deep Convolutional Neural Net (DCNN) models with such synthetic data can be effective, especially when real training data is limited or not well matched to the target domain. Most freely available CAD models capture 3D shape but are often missing other low level cues, such as realistic object texture, pose, or background. In a detailed analysis, we use synthetic CAD-rendered images to probe the ability of DCNN to learn without these cues, with surprising findings. In particular, we show that when the DCNN is fine-tuned on the target detection task, it exhibits a large degree of invariance to missing low-level cues, but, when pretrained on generic ImageNet classification, it learns better when the low-level cues are simulated. We show that our synthetic DCNN training approach significantly outperforms previous methods on the PASCAL VOC2007 dataset when learning in the few-shot scenario and improves performance in a domain shift scenario on the Office benchmark."
]
} |
1612.02559 | 2585027717 | We consider the problem of data augmentation, i.e., generating artificial samples to extend a given corpus of training data. Specifically, we propose attributed-guided augmentation (AGA) which learns a mapping that allows to synthesize data such that an attribute of a synthesized sample is at a desired value or strength. This is particularly interesting in situations where little data with no attribute annotation is available for learning, but we have access to a large external corpus of heavily annotated samples. While prior works primarily augment in the space of images, we propose to perform augmentation in feature space instead. We implement our approach as a deep encoder-decoder architecture that learns the synthesis function in an end-to-end manner. We demonstrate the utility of our approach on the problems of (1) one-shot object recognition in a transfer-learning setting where we have no prior knowledge of the new classes, as well as (2) object-based one-shot scene recognition. As external data, we leverage 3D depth and pose information from the SUN RGB-D dataset. Our experiments show that attribute-guided augmentation of high-level CNN features considerably improves one-shot recognition performance on both problems. | Marginally related to our work, we remark that alternative approaches to implicitly learn spatial transformations have been proposed. For instance, Jaderberg al @cite_39 introduce modules that can be injected into existing deep architectures to implicitly capture spatial transformations inherent in the data, thereby improving invariance to this class of transformations. | {
"cite_N": [
"@cite_39"
],
"mid": [
"2951005624"
],
"abstract": [
"Convolutional Neural Networks define an exceptionally powerful class of models, but are still limited by the lack of ability to be spatially invariant to the input data in a computationally and parameter efficient manner. In this work we introduce a new learnable module, the Spatial Transformer, which explicitly allows the spatial manipulation of data within the network. This differentiable module can be inserted into existing convolutional architectures, giving neural networks the ability to actively spatially transform feature maps, conditional on the feature map itself, without any extra training supervision or modification to the optimisation process. We show that the use of spatial transformers results in models which learn invariance to translation, scale, rotation and more generic warping, resulting in state-of-the-art performance on several benchmarks, and for a number of classes of transformations."
]
} |
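The grid-generation and sampling core of such a spatial transformer module can be sketched in NumPy. In the full module the 2x3 affine matrix is predicted by a small localisation network; here it is supplied directly, in a simplified single-channel setting.

```python
import numpy as np

def affine_sample(image, theta):
    """Core of a spatial transformer: build a sampling grid from a 2x3
    affine matrix `theta` (in normalised [-1, 1] coordinates) and
    bilinearly sample the input at the transformed grid points."""
    h, w = image.shape
    ys, xs = np.meshgrid(np.linspace(-1, 1, h), np.linspace(-1, 1, w),
                         indexing="ij")
    grid = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])   # 3 x HW
    sx, sy = theta @ grid                                       # source coords
    # Map normalised coordinates back to pixel indices and clamp to bounds.
    px = np.clip((sx + 1) * (w - 1) / 2, 0, w - 1)
    py = np.clip((sy + 1) * (h - 1) / 2, 0, h - 1)
    x0, y0 = np.floor(px).astype(int), np.floor(py).astype(int)
    x1, y1 = np.minimum(x0 + 1, w - 1), np.minimum(y0 + 1, h - 1)
    wx, wy = px - x0, py - y0
    out = (image[y0, x0] * (1 - wx) * (1 - wy)
           + image[y0, x1] * wx * (1 - wy)
           + image[y1, x0] * (1 - wx) * wy
           + image[y1, x1] * wx * wy)
    return out.reshape(h, w)

img = np.arange(16, dtype=float).reshape(4, 4)
identity = np.array([[1., 0., 0.], [0., 1., 0.]])
out = affine_sample(img, identity)        # identity transform is a no-op
```

Because the sampling is differentiable in `theta`, gradients can flow back into the localisation network during training.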
1612.02559 | 2585027717 | We consider the problem of data augmentation, i.e., generating artificial samples to extend a given corpus of training data. Specifically, we propose attributed-guided augmentation (AGA) which learns a mapping that allows to synthesize data such that an attribute of a synthesized sample is at a desired value or strength. This is particularly interesting in situations where little data with no attribute annotation is available for learning, but we have access to a large external corpus of heavily annotated samples. While prior works primarily augment in the space of images, we propose to perform augmentation in feature space instead. We implement our approach as a deep encoder-decoder architecture that learns the synthesis function in an end-to-end manner. We demonstrate the utility of our approach on the problems of (1) one-shot object recognition in a transfer-learning setting where we have no prior knowledge of the new classes, as well as (2) object-based one-shot scene recognition. As external data, we leverage 3D depth and pose information from the SUN RGB-D dataset. Our experiments show that attribute-guided augmentation of high-level CNN features considerably improves one-shot recognition performance on both problems. | While previously discussed methods essentially propose augmentation to train CNNs, our approach is different in that we perform augmentation in . Along these lines, the approach of Kwitt al @cite_21 is conceptually similar to our work. In detail, the authors suggest to learn how features change as a function of the strength of certain transient attributes (such as sunny, cloudy, or foggy) in a scene-recognition context. These models are then transferred to previously unseen data for one-shot recognition. There are, however, two key differences between their approach and ours. First, they require datasets labeled with , , all variations of an attribute for every instance of a class. 
We, on the other hand, make use of conventional datasets that seldom carry such extensive labeling. Second, their augmenters are simple linear regressors trained in a manner. In contrast, we learn deep non-linear models in a manner which enables a straightforward application to recognition in transfer settings. | {
"cite_N": [
"@cite_21"
],
"mid": [
"2469981738"
],
"abstract": [
"The appearance of (outdoor) scenes changes considerably with the strength of certain transient attributes, such as \"rainy\", \"dark\" or \"sunny\". Obviously, this also affects the representation of an image in feature space, e.g., as activations at a certain CNN layer, and consequently impacts scene recognition performance. In this work, we investigate the variability in these transient attributes as a rich source of information for studying how image representations change as a function of attribute strength. In particular, we leverage a recently introduced dataset with fine-grain annotations to estimate feature trajectories for a collection of transient attributes and then show how these trajectories can be transferred to new image representations. This enables us to synthesize new data along the transferred trajectories with respect to the dimensions of the space spanned by the transient attributes. Applicability of this concept is demonstrated on the problem of oneshot recognition of scene locations. We show that data synthesized via feature trajectory transfer considerably boosts recognition performance, (1) with respect to baselines and (2) in combination with state-of-the-art approaches in oneshot learning."
]
} |
1612.02732 | 2578894191 | In this paper, we propose Transmission Control Protocol (TCP)-aware cross layer scheduling algorithms in a multipoint-to-point network such as the uplink of an IEEE 802.16 (WiMAX) network. Inadequate bandwidth allocation to a TCP flow may lead to timeout and since TCP source drops its congestion window ( @math ) immediately after a timeout, it may affect the average throughput adversely. On the other hand, since the TCP source increases its @math only linearly upon the availability of bandwidth, any excess assignment of bandwidth may remain underutilized. The proposed scheduling algorithms address this by allocating the resources based on @math and TCP timeout. Moreover, since we focus on uplink scheduling, we consider that only flow level resource requirement is communicated to the Base Station ( @math ) instead of per packet information. The schedulers also take into account the wireless channel characteristics and are thus cross layer in nature. Through exhaustive simulations, we demonstrate that the proposed schedulers exhibit enhanced throughput and fairness properties when compared to that of Round Robin (RR) scheduler under different shadowing. We demonstrate a gain between 3.5 to 15 in throughput and 15 to 25 in channel utilization over RR scheduler under different shadowing. | In @cite_0 , the authors have proposed a Token Bank Fair Queuing (TBFQ) @cite_2 based scheduler for the downlink flows of an IEEE 802.16 network. It considers location dependent channel errors while scheduling and employs credit behavior of a flow to determine a priority index. Though this scheme provides fairness, it does not guarantee any delay while scheduling. @cite_31 proposes an adaptive selective Automatic Repeat reQuest based scheduling scheme for @math applications and uses an analytical model for parameter manipulation. 
Though it provides a trade-off between utilization and throughput, it is more suitable for downlink scheduling in WiMAX networks. | {
"cite_N": [
"@cite_0",
"@cite_31",
"@cite_2"
],
"mid": [
"2169875336",
"2096009795",
"2021763947"
],
"abstract": [
"Point-to-multipoint broadband wireless access (BWA) is an attractive solution to provide advanced data services to users who do not have easy access to wireline data services. However, BWA will need to support delay sensitive services as users and applications are becoming increasingly sophisticated. Wireless packet scheduling is an important part of quality-of-service (QoS) provisioning in BWA. The biggest difficulty in wireless packet scheduling is the location-dependent channel errors. Channel condition independent fair scheme (CIF) (T.S. Eugene, et al, March 1998) uses explicit leading lagging approach to compensate for the erroneous channel state, but is not able to satisfy both rapid convergent guaranteed throughput and fairness. In this paper, we propose to use the token bank fair queuing (TBFQ) scheduling algorithm as a QoS provisioning mechanism for an asymmetric broadband point-to-multipoint wireless LAN. We consider the throughput and fairness performance of TBFQ under location-dependent channel error condition with variable packet length. This work is an extension of our previous work in applying TBFQ in next generation cellular networks.",
"A fundamental problem in networking is the allocation of limited resources among the users of the network. In a traditional layered network architecture, the resource to be allocated at the medium access control (MAC) and network layers utilizes communication links, viewed as \"bit pipes\" that deliver data at a fixed rate with occasional random errors. Though this separation has many advantages, there is a growing awareness that this simple bit-pipe view is inadequate, particularly in the context of modern wireless data networks. In this article, several basic cross-layer resource allocation problems for wireless fading channels are considered. The article focuses on the characterization of fundamental performance limits while taking into account both network layer QoS and physical layer performance.",
"SUMMARY The token bank fair queuing algorithm (TBFQ) is a novel scheduling algorithm that is suitable for wireless multimedia services. The bandwidth allocation mechanism integrates the leaky bucket structure with priority handling to address the problem of providing quality-of-service (QoS) guarantees to heterogeneous applications in the next generation packet-switched wireless networks. Scheduling algorithms are often tightly integrated with the wireless medium access control (MAC) protocol. However, when heterogeneous wireless systems need to be integrated and interoperate with each other, it is desirable from the QoS provisioning standpoint to decouple scheduling algorithm from the MAC protocol. In this paper we propose a framework of seamless QoS provisioning and the application of TBFQ for uplink and downlink scheduling in wireless networks. We study its performance under a generic medium access framework that enables the algorithm to be generalized to provide QoS guarantees under various medium access schemes. We give a brief analysis of the algorithm and compare its performance with common scheduling algorithms through simulation. Our results demonstrate that TBFQ significantly increases wireless channel utilization while maintaining the same QoS, unlike many fair queuing algorithms, TBFQ does not require timestamping information of each packet arrival an impractical feature in an already resource scarce environment. This makes TBFQ suitable for wireless multimedia communication. Copyright # 2004 John Wiley & Sons, Ltd."
]
} |
1612.02732 | 2578894191 | In this paper, we propose Transmission Control Protocol (TCP)-aware cross layer scheduling algorithms in a multipoint-to-point network such as the uplink of an IEEE 802.16 (WiMAX) network. Inadequate bandwidth allocation to a TCP flow may lead to timeout and since TCP source drops its congestion window ( @math ) immediately after a timeout, it may affect the average throughput adversely. On the other hand, since the TCP source increases its @math only linearly upon the availability of bandwidth, any excess assignment of bandwidth may remain underutilized. The proposed scheduling algorithms address this by allocating the resources based on @math and TCP timeout. Moreover, since we focus on uplink scheduling, we consider that only flow level resource requirement is communicated to the Base Station ( @math ) instead of per packet information. The schedulers also take into account the wireless channel characteristics and are thus cross layer in nature. Through exhaustive simulations, we demonstrate that the proposed schedulers exhibit enhanced throughput and fairness properties when compared to that of Round Robin (RR) scheduler under different shadowing. We demonstrate a gain between 3.5 to 15 in throughput and 15 to 25 in channel utilization over RR scheduler under different shadowing. | In @cite_12 , the authors have proposed a QoS based uplink scheduling scheme in IEEE 802.16d e (WiMAX) networks. It considers end-to-end QoS, both for real-time and non real-time applications and proposes a hybrid uplink scheduling algorithm, which is a combination of Priority (P) and Earliest Due Date (E) scheduling schemes. Even though it improves the utilization of the radio resources, normalized throughput drops substantially and access delay increases exponentially as the the number of system cells increase. | {
"cite_N": [
"@cite_12"
],
"mid": [
"2072776533"
],
"abstract": [
"The primary concern of broadband wireless technologies is to provide the end-to-end Quality of Service (QoS) for integrated real-time and non real-time applications. The main focus of the IEEE 802.16d e MAC layer is to manage the radio recourse in an efficient way. The basic functional blocks of the QoS model are addressed by the standards to support five different types of service classes. However, the detailed admission control, radio resource management and scheduling are left for implementation perspective and many research efforts are on going to assure the QoS for end customers. In this paper we propose a hybrid uplink scheduling algorithm (P+E) for subscriber station (SS), which is the combination of priority and Earliest Due Date (EDD) scheduling methods to maintain QoS and utilize the radio resource allocated by the BS in an efficient manner. Simulation results demonstrate the advantages of the proposed hybrid scheduling algorithm."
]
} |
1612.02732 | 2578894191 | In this paper, we propose Transmission Control Protocol (TCP)-aware cross layer scheduling algorithms in a multipoint-to-point network such as the uplink of an IEEE 802.16 (WiMAX) network. Inadequate bandwidth allocation to a TCP flow may lead to timeout and since TCP source drops its congestion window ( @math ) immediately after a timeout, it may affect the average throughput adversely. On the other hand, since the TCP source increases its @math only linearly upon the availability of bandwidth, any excess assignment of bandwidth may remain underutilized. The proposed scheduling algorithms address this by allocating the resources based on @math and TCP timeout. Moreover, since we focus on uplink scheduling, we consider that only flow level resource requirement is communicated to the Base Station ( @math ) instead of per packet information. The schedulers also take into account the wireless channel characteristics and are thus cross layer in nature. Through exhaustive simulations, we demonstrate that the proposed schedulers exhibit enhanced throughput and fairness properties when compared to that of Round Robin (RR) scheduler under different shadowing. We demonstrate a gain between 3.5 to 15 in throughput and 15 to 25 in channel utilization over RR scheduler under different shadowing. | In @cite_19 , the authors have illustrated the performance of TCP and UDP based applications through rigorous experiments conducted in an IEEE 802.16 deployed network as well as in test-beds. It has been observed that TCP applications suffer significantly as compared to UDP applications if the scheduling scheme does not consider the nature of TCP (TCP parameters). This key observation has encouraged us to work on scheduling schemes which are TCP-aware. | {
"cite_N": [
"@cite_19"
],
"mid": [
"2119994812"
],
"abstract": [
"In this paper, we focus on various experiments conducted to analyze the performance of Transmission Control Protocol (TCP) and Universal Datagram Protocol (UDP) based applications in a IEEE 802.16 deployed network. We analyze the effect of Medium Access Control (MAC) and Physical layer characteristics on the performance of TCP and UDP-based applications. Our key findings are: (i) throughput achieved by TCP-based applications is lower than that of UDP-based applications, (ii) slot utilization of TCP-based applications is lower than that of UDP-based applications and (iii) throughput of TCP-based applications suffers in the presence of UDP-based applications for similar channel states. We also observe that throughput achieved by both TCP and UDP-based applications with Automatic Repeat ReQuest (ARQ) set in system are higher as compared to that of without ARQ. The findings of these experiments can be adopted while designing efficient scheduling schemes for IEEE 802.16-based network, such that higher throughput, utilization and better delay performance can be achieved."
]
} |
1612.02569 | 2565989575 | The network virtualization allows new on-demand management capabilities, in this work we demonstrate such a service, namely, on-demand efficient monitoring or anonymity. The proposed service is based on network virtualization of expanders or sparsifiers over the physical network. The defined virtual (or overlay) communication graphs coupled with a multi-hop extension of Valiant randomization based routing lets us monitor the entire traffic in the network, with a very few monitoring nodes. In particular, we show that using overlay network with expansion properties and Valiant randomized load balancing it is enough to place @math monitor nodes when the length of the overlay path (number of intermediate nodes chosen by Valiant's routing procedure) is @math . We propose two randomized routing methods to implement policies for sending messages, and we show that they facilitate efficient monitoring of the entire traffic, such that the traffic is distributed uniformly in the network, and each monitor has equiprobable view of the network flow. In terms of complex networks, our result can be interpreted as a way to enforce the same betweenness centrality to all nodes in the network. Additionally, we show that our results are useful in employing anonymity services. Thus, we propose monitoring or anonymity services, which can be deployed and shut down on-demand. Our work is the first, as far as we know, to bring such on-demand infrastructure structuring using the cloud network virtualization capability to existing monitoring or anonymity networks. We propose methods to theoretically improve services provided by existing anonymity networks, and optimize the degree of anonymity, in addition providing robustness and reliability to system usage and security. We believe that, our constructions of overlay expanders and sparsifiers weighted network are of independent interest. | We show that our network architecture is valuable for network monitoring. 
showed an efficient flooding scheme for generating a collaboration between a group of random walking agents who are released from different sources and at different times @cite_0 . This participation of agents results in a collaborative monitoring infrastructure, requiring only a small number of active monitors. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2160504298"
],
"abstract": [
"We pose a new and intriguing question motivated by distributed computing regarding random walks on graphs: How long does it take for several independent random walks, starting from the same vertex, to cover an entire graph? We study the cover time - the expected time required to visit every node in a graph at least once - and we show that for a large collection of interesting graphs, running many random walks in parallel yields a speed-up in the cover time that is linear in the number of parallel walks. We demonstrate that an exponential speed-up is sometimes possible, but that some natural graphs allow only a logarithmic speed-up. A problem related to ours (in which the walks start from some probablistic distribution on vertices) was previously studied in the context of space efficient algorithms for undirected s-t-connectivity and our results yield, in certain cases, an improvement upon some of the earlier bounds."
]
} |
1612.02569 | 2565989575 | The network virtualization allows new on-demand management capabilities, in this work we demonstrate such a service, namely, on-demand efficient monitoring or anonymity. The proposed service is based on network virtualization of expanders or sparsifiers over the physical network. The defined virtual (or overlay) communication graphs coupled with a multi-hop extension of Valiant randomization based routing lets us monitor the entire traffic in the network, with a very few monitoring nodes. In particular, we show that using overlay network with expansion properties and Valiant randomized load balancing it is enough to place @math monitor nodes when the length of the overlay path (number of intermediate nodes chosen by Valiant's routing procedure) is @math . We propose two randomized routing methods to implement policies for sending messages, and we show that they facilitate efficient monitoring of the entire traffic, such that the traffic is distributed uniformly in the network, and each monitor has equiprobable view of the network flow. In terms of complex networks, our result can be interpreted as a way to enforce the same betweenness centrality to all nodes in the network. Additionally, we show that our results are useful in employing anonymity services. Thus, we propose monitoring or anonymity services, which can be deployed and shut down on-demand. Our work is the first, as far as we know, to bring such on-demand infrastructure structuring using the cloud network virtualization capability to existing monitoring or anonymity networks. We propose methods to theoretically improve services provided by existing anonymity networks, and optimize the degree of anonymity, in addition providing robustness and reliability to system usage and security. We believe that, our constructions of overlay expanders and sparsifiers weighted network are of independent interest. 
| Our results can be used as the basis to provide a flexible and robust network architecture as a service with on-demand deployment. Boubendir describes an implementation of NaaS architecture with SDN-enabled NFV in @cite_28 , and shows a feasible on-demand dynamic network service based on SDN-enabled NFV @cite_31 . We further exploit the NV, SDN and NFV emerging technologies to enable network architecture as a service for use in private commercial networks, network and service providers, or facilities desiring flexible, policy-enabled networking for securing their traffic and monitoring network flows for mitigation of misuse or malicious uses. | {
"cite_N": [
"@cite_28",
"@cite_31"
],
"mid": [
"2469760695",
"2465009899"
],
"abstract": [
"The sharp rise of OTT application services in recent years has called for greater application-to-network visibility. These dynamic application services represent a foremost drive in the communication ecosystem changes. However, the tight coupling between network services and network infrastructures pose challenges for network providers. They need more dynamicity in their network architectures to meet the requirements of application services. SDN and NFV is the combination of networking technologies that facilitate such an evolution. But is it sufficient to allow dynamic application-to-network interactions? In this paper, we rely on SDN-enabled NFV assets as a strong foundation to propose a dynamic and adaptable “Network-as-a-Service” architecture. We leverage our NaaS proposal with a resilient Network Exposition Layer allowing to express offers and demands of network services. To demonstrate the offered features, we propose three architecture deployment models: broad, customized and event-based dynamic model. We then instantiate our architecture within the WebRTC-based communication services, illustrating the collaboration between network providers and Communication Service Providers (CSPs).",
"We present in this paper the implementation of the networking part of a Network-as-a-Service (NaaS) architecture based on SDN-enabled NFV using as a basis the OPNFV open framework. This NaaS architecture can be instantiated according to three deployment models allowing application service providers (OTTs) and application services themselves to request specific network services from a network provider within a NaaS model. Therefore, we propose to demonstrate the feasibility of NaaS architecture deployment models with a focus on an “on-demand dynamic model” to highlight the offered dynamicity and easy application-to-network interactions. We have chosen to illustrate the deployment models with a WebRTC OTT application and showcase the on-demand dynamic deployment of virtual network functions, mainly TURN server. Because of time-sensitivity of these communication services, we have relied on container-based (Docker type) instead of virtual machine-based network functions virtualization."
]
} |
1612.02297 | 2952922798 | This paper proposes a deep learning architecture based on Residual Network that dynamically adjusts the number of executed layers for the regions of the image. This architecture is end-to-end trainable, deterministic and problem-agnostic. It is therefore applicable without any modifications to a wide range of computer vision problems such as image classification, object detection and image segmentation. We present experimental results showing that this model improves the computational efficiency of Residual Networks on the challenging ImageNet classification and COCO object detection datasets. Additionally, we evaluate the computation time maps on the visual saliency dataset cat2000 and find that they correlate surprisingly well with human eye fixation positions. | The majority of the work on increasing the computational efficiency of deep convolutional networks focuses on techniques. These include decompositions of convolutional kernels @cite_25 and pruning of connections @cite_3 . Many of these techniques made their way into the design of the standard deep architectures. For example, Inception @cite_0 and ResNet @cite_9 @cite_17 use factorized convolutional kernels. | {
"cite_N": [
"@cite_9",
"@cite_3",
"@cite_0",
"@cite_25",
"@cite_17"
],
"mid": [
"2194775991",
"2964299589",
"2950179405",
"1996901117",
"2302255633"
],
"abstract": [
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.",
"Abstract: Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce \"deep compression\", a three stage pipeline: pruning, trained quantization and Huffman coding, that work together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing, finally, we apply Huffman coding. After the first two steps we retrain the network to fine tune the remaining connections and the quantized centroids. Pruning, reduces the number of connections by 9x to 13x; Quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49x from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, compressed network has 3x to 4x layerwise speedup and 3x to 7x better energy efficiency.",
"We propose a deep convolutional neural network architecture codenamed \"Inception\", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.",
"The focus of this paper is speeding up the application of convolutional neural networks. While delivering impressive results across a range of computer vision and machine learning tasks, these networks are computationally demanding, limiting their deployability. Convolutional layers generally consume the bulk of the processing time, and so in this work we present two simple schemes for drastically speeding up these layers. This is achieved by exploiting cross-channel or filter redundancy to construct a low rank basis of filters that are rank-1 in the spatial domain. Our methods are architecture agnostic, and can be easily applied to existing CPU and GPU convolutional frameworks for tuneable speedup performance. We demonstrate this with a real world network designed for scene text character recognition [15], showing a possible 2.5× speedup with no loss in accuracy, and 4.5× speedup with less than 1 drop in accuracy, still achieving state-of-the-art on standard benchmarks.",
"Deep residual networks have emerged as a family of extremely deep architectures showing compelling accuracy and nice convergence behaviors. In this paper, we analyze the propagation formulations behind the residual building blocks, which suggest that the forward and backward signals can be directly propagated from one block to any other block, when using identity mappings as the skip connections and after-addition activation. A series of ablation experiments support the importance of these identity mappings. This motivates us to propose a new residual unit, which makes training easier and improves generalization. We report improved results using a 1001-layer ResNet on CIFAR-10 (4.62 error) and CIFAR-100, and a 200-layer ResNet on ImageNet. Code is available at: https: github.com KaimingHe resnet-1k-layers."
]
} |
1612.02297 | 2952922798 | This paper proposes a deep learning architecture based on Residual Network that dynamically adjusts the number of executed layers for the regions of the image. This architecture is end-to-end trainable, deterministic and problem-agnostic. It is therefore applicable without any modifications to a wide range of computer vision problems such as image classification, object detection and image segmentation. We present experimental results showing that this model improves the computational efficiency of Residual Networks on the challenging ImageNet classification and COCO object detection datasets. Additionally, we evaluate the computation time maps on the visual saliency dataset cat2000 and find that they correlate surprisingly well with human eye fixation positions. | Recently, several works have considered the problem of varying the amount of computation in computer vision. Cascaded classifiers @cite_31 @cite_11 are used in object detection to quickly reject "easy" negative proposals. Dynamic Capacity Networks @cite_33 use the same amount of computation for all images and use an image classification-specific heuristic. PerforatedCNNs @cite_22 vary the amount of computation spatially but not between images. @cite_7 proposes to tune the amount of computation in a fully-connected network using a REINFORCE-trained policy which makes the optimization problem significantly more challenging. | {
"cite_N": [
"@cite_22",
"@cite_33",
"@cite_7",
"@cite_31",
"@cite_11"
],
"mid": [
"2949234772",
"2173038751",
"2179423374",
"1934410531",
"2474389331"
],
"abstract": [
"We propose a novel approach to reduce the computational cost of evaluation of convolutional neural networks, a factor that has hindered their deployment in low-power devices such as mobile phones. Inspired by the loop perforation technique from source code optimization, we speed up the bottleneck convolutional layers by skipping their evaluation in some of the spatial positions. We propose and analyze several strategies of choosing these positions. We demonstrate that perforation can accelerate modern convolutional networks such as AlexNet and VGG-16 by a factor of 2x - 4x. Additionally, we show that perforation is complementary to the recently proposed acceleration method of",
"We introduce the Dynamic Capacity Network (DCN), a neural network that can adaptively assign its capacity across different portions of the input data. This is achieved by combining modules of two types: low-capacity sub-networks and high-capacity sub-networks. The low-capacity sub-networks are applied across most of the input, but also provide a guide to select a few portions of the input on which to apply the high-capacity sub-networks. The selection is made using a novel gradient-based attention mechanism, that efficiently identifies input regions for which the DCN's output is most sensitive and to which we should devote more capacity. We focus our empirical evaluation on the Cluttered MNIST and SVHN image datasets. Our findings indicate that DCNs are able to drastically reduce the number of computations, compared to traditional convolutional neural networks, while maintaining similar or even better performance.",
"Deep learning has become the state-of-art tool in many applications, but the evaluation and training of deep models can be time-consuming and computationally expensive. The conditional computation approach has been proposed to tackle this problem (, 2013; Davis & Arel, 2013). It operates by selectively activating only parts of the network at a time. In this paper, we use reinforcement learning as a tool to optimize conditional computation policies. More specifically, we cast the problem of learning activation-dependent policies for dropping out blocks of units as a reinforcement learning problem. We propose a learning scheme motivated by computation speed, capturing the idea of wanting to have parsimonious activations while maintaining prediction accuracy. We apply a policy gradient algorithm for learning policies that optimize this loss function and propose a regularization mechanism that encourages diversification of the dropout policy. We present encouraging empirical results showing that this approach improves the speed of computation without impacting the quality of the approximation.",
"In real-world face detection, large visual variations, such as those due to pose, expression, and lighting, demand an advanced discriminative model to accurately differentiate faces from the backgrounds. Consequently, effective models for the problem tend to be computationally prohibitive. To address these two conflicting challenges, we propose a cascade architecture built on convolutional neural networks (CNNs) with very powerful discriminative capability, while maintaining high performance. The proposed CNN cascade operates at multiple resolutions, quickly rejects the background regions in the fast low resolution stages, and carefully evaluates a small number of challenging candidates in the last high resolution stage. To improve localization effectiveness, and reduce the number of candidates at later stages, we introduce a CNN-based calibration stage after each of the detection stages in the cascade. The output of each calibration stage is used to adjust the detection window position for input to the subsequent stage. The proposed method runs at 14 FPS on a single CPU core for VGA-resolution images and 100 FPS using a GPU, and achieves state-of-the-art detection performance on two public face detection benchmarks.",
"In this paper, we investigate two new strategies to detect objects accurately and efficiently using deep convolutional neural network: 1) scale-dependent pooling and 2) layerwise cascaded rejection classifiers. The scale-dependent pooling (SDP) improves detection accuracy by exploiting appropriate convolutional features depending on the scale of candidate object proposals. The cascaded rejection classifiers (CRC) effectively utilize convolutional features and eliminate negative object proposals in a cascaded manner, which greatly speeds up the detection while maintaining high accuracy. In combination of the two, our method achieves significantly better accuracy compared to other state-of-the-arts in three challenging datasets, PASCAL object detection challenge, KITTI object detection benchmark and newly collected Inner-city dataset, while being more efficient."
]
} |
1612.02297 | 2952922798 | This paper proposes a deep learning architecture based on Residual Network that dynamically adjusts the number of executed layers for the regions of the image. This architecture is end-to-end trainable, deterministic and problem-agnostic. It is therefore applicable without any modifications to a wide range of computer vision problems such as image classification, object detection and image segmentation. We present experimental results showing that this model improves the computational efficiency of Residual Networks on the challenging ImageNet classification and COCO object detection datasets. Additionally, we evaluate the computation time maps on the visual saliency dataset cat2000 and find that they correlate surprisingly well with human eye fixation positions. | BranchyNet @cite_10 is the most similar approach to ours although only applicable to classification problems. It adds classification branches to the intermediate layers of the network. As soon as the entropy of the intermediate classifications is below some threshold, the network's evaluation halts. Our preliminary experiments with a similar procedure based on ACT (using ACT to choose the number of blocks to evaluate) show that it is inferior to using less units per block. | {
"cite_N": [
"@cite_10"
],
"mid": [
"2610140147"
],
"abstract": [
"Deep neural networks are state of the art methods for many learning tasks due to their ability to extract increasingly better features at each network layer. However, the improved performance of additional layers in a deep network comes at the cost of added latency and energy usage in feedforward inference. As networks continue to get deeper and larger, these costs become more prohibitive for real-time and energy-sensitive applications. To address this issue, we present BranchyNet, a novel deep network architecture that is augmented with additional side branch classifiers. The architecture allows prediction results for a large portion of test samples to exit the network early via these branches when samples can already be inferred with high confidence. BranchyNet exploits the observation that features learned at an early layer of a network may often be sufficient for the classification of many data points. For more difficult samples, which are expected less frequently, BranchyNet will use further or all network layers to provide the best likelihood of correct prediction. We study the BranchyNet architecture using several well-known networks (LeNet, AlexNet, ResNet) and datasets (MNIST, CIFAR10) and show that it can both improve accuracy and significantly reduce the inference time of the network."
]
} |
1612.02251 | 2949369097 | Multitask learning has been applied successfully to a range of tasks, mostly morphosyntactic. However, little is known on when MTL works and whether there are data characteristics that help to determine its success. In this paper we evaluate a range of semantic sequence labeling tasks in a MTL setup. We examine different auxiliary tasks, amongst which a novel setup, and correlate their impact to data-dependent conditions. Our results show that MTL is not always effective, significant improvements are obtained only for 1 out of 5 tasks. When successful, auxiliary tasks with compact and more uniform label distributions are preferable. | Multitask learning has been recently explored by a number of studies, including name error recognition @cite_10 , tagging and chunking @cite_28 @cite_26 , entity and relation extraction @cite_24 , machine translation @cite_11 and machine translation quality estimation including modeling annotator bias @cite_8 @cite_34 . Most earlier work had in common that it assumed jointly labeled data (same corpus annotated with multiple labels). In contrast, in this paper we evaluate multitask training from distinct sources to address data paucity, like done recently @cite_29 @cite_23 @cite_4 . | {
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_8",
"@cite_28",
"@cite_29",
"@cite_24",
"@cite_23",
"@cite_34",
"@cite_10",
"@cite_11"
],
"mid": [
"2338266296",
"2531161093",
"2251311344",
"2952230511",
"2251332263",
"2578454709",
"2573704241",
"2471347784",
"2251300660",
"2172589779"
],
"abstract": [
"Bidirectional long short-term memory (bi-LSTM) networks have recently proven successful for various NLP sequence modeling tasks, but little is known about their reliance to input representations, target languages, data set size, and label noise. We address these issues and evaluate bi-LSTMs with word, character, and unicode byte embeddings for POS tagging. We compare bi-LSTMs to traditional POS taggers across languages and data sizes. We also present a novel bi-LSTM model, which combines the POS tagging loss function with an auxiliary loss function that accounts for rare words. The model obtains state-of-the-art performance across 22 languages, and works especially well for morphologically complex languages. Our analysis suggests that bi-LSTMs are less sensitive to training data size and label corruptions (at small noise levels) than previously assumed.",
"Keystroke dynamics have been extensively used in psycholinguistic and writing research to gain insights into cognitive processing. But do keystroke logs contain actual signal that can be used to learn better natural language processing models? We postulate that keystroke dynamics contain information about syntactic structure that can inform shallow syntactic parsing. To test this hypothesis, we explore labels derived from keystroke logs as auxiliary task in a multi-task bidirectional Long Short-Term Memory (bi-LSTM). Our results show promising results on two shallow syntactic parsing tasks, chunking and CCG supertagging. Our model is simple, has the advantage that data can come from distinct sources, and produces models that are significantly better than models trained on the text annotations alone.",
"Annotating linguistic data is often a complex, time consuming and expensive endeavour. Even with strict annotation guidelines, human subjects often deviate in their analyses, each bringing different biases, interpretations of the task and levels of consistency. We present novel techniques for learning from the outputs of multiple annotators while accounting for annotator specific behaviour. These techniques use multi-task Gaussian Processes to learn jointly a series of annotator and metadata specific models, while explicitly representing correlations between models which can be learned directly from data. Our experiments on two machine translation quality estimation datasets show uniform significant accuracy gains from multi-task learning, and consistently outperform strong baselines.",
"We propose a unified neural network architecture and learning algorithm that can be applied to various natural language processing tasks including: part-of-speech tagging, chunking, named entity recognition, and semantic role labeling. This versatility is achieved by trying to avoid task-specific engineering and therefore disregarding a lot of prior knowledge. Instead of exploiting man-made input features carefully optimized for each task, our system learns internal representations on the basis of vast amounts of mostly unlabeled training data. This work is then used as a basis for building a freely available tagging system with good performance and minimal computational requirements.",
"We consider the task of identifying and labeling the semantic arguments of a predicate that evokes a FrameNet frame. This task is challenging because there are only a few thousand fully annotated sentences for supervised training. Our approach augments an existing model with features derived from FrameNet and PropBank and with partially annotated exemplars from FrameNet. We observe a 4 absolute increase in F1 versus the original model.",
"",
"",
"",
"Out-of-vocabulary name errors in speech recognition create significant problems for downstream language processing, but the fact that they are rare poses challenges for automatic detection, particularly in an open-domain scenario. To address this problem, a multi-task recurrent neural network language model for sentence-level name detection is proposed for use in combination with out-of-vocabulary word detection. The sentence-level model is also effective for leveraging external text data. Experiments show a 26% improvement in name-error detection F-score over a system using n-gram lexical features.",
"Sequence to sequence learning has recently emerged as a new paradigm in supervised learning. To date, most of its applications focused on only one task and not much work explored this framework for multiple tasks. This paper examines three multi-task learning (MTL) settings for sequence to sequence models: (a) the oneto-many setting - where the encoder is shared between several tasks such as machine translation and syntactic parsing, (b) the many-to-one setting - useful when only the decoder can be shared, as in the case of translation and image caption generation, and (c) the many-to-many setting - where multiple encoders and decoders are shared, which is the case with unsupervised objectives and translation. Our results show that training on a small amount of parsing and image caption data can improve the translation quality between English and German by up to 1.5 BLEU points over strong single-task baselines on the WMT benchmarks. Furthermore, we have established a new state-of-the-art result in constituent parsing with 93.0 F1. Lastly, we reveal interesting properties of the two unsupervised learning objectives, autoencoder and skip-thought, in the MTL context: autoencoder helps less in terms of perplexities but more on BLEU scores compared to skip-thought."
]
} |
1612.02374 | 2950616766 | Attention Deficit Hyperactivity Disorder (ADHD) and Autism Spectrum Disorder (ASD) are neurodevelopmental conditions which impact on a significant number of children and adults. Currently, the diagnosis of such disorders is done by experts who employ standard questionnaires and look for certain behavioural markers through manual observation. Such methods for their diagnosis are not only subjective, difficult to repeat, and costly but also extremely time consuming. In this work, we present a novel methodology to aid diagnostic predictions about the presence absence of ADHD and ASD by automatic visual analysis of a person's behaviour. To do so, we conduct the questionnaires in a computer-mediated way while recording participants with modern RGBD (Colour+Depth) sensors. In contrast to previous automatic approaches which have focussed only detecting certain behavioural markers, our approach provides a fully automatic end-to-end system for directly predicting ADHD and ASD in adults. Using state of the art facial expression analysis based on Dynamic Deep Learning and 3D analysis of behaviour, we attain classification rates of 96 for Controls vs Condition (ADHD ASD) group and 94 for Comorbid (ADHD+ASD) vs ASD only group. We show that our system is a potentially useful time saving contribution to the diagnostic field of ADHD and ASD. | In @cite_15 , a system was developed for tracking people across multiple cameras and sensors. They used depth measuring cameras (Microsoft Kinect) to monitor the movement of children in a classroom setting. The authors used agglomerative hierarchical clustering to segment different objects and tracked different individuals using covariance descriptors. One of the applications they proposed for such a system would be to record the motion tracks and velocity profiles of people, to measure their activity level. | {
"cite_N": [
"@cite_15"
],
"mid": [
"2071096319"
],
"abstract": [
"Clinical studies confirm that mental illnesses such as autism, Obsessive Compulsive Disorder (OCD), etc. show behavioral abnormalities even at very young ages; the early diagnosis of which can help steer effective treatments. Most often, the behavior of such at-risk children deviate in very subtle ways from that of a normal child; correct diagnosis of which requires prolonged and continuous monitoring of their activities by a clinician, which is a difficult and time intensive task. As a result, the development of automation tools for assisting in such monitoring activities will be an important step towards effective utilization of the diagnostic resources. In this paper, we approach the problem from a computer vision standpoint, and propose a novel system for the automatic monitoring of the behavior of children in their natural environment through the deployment of multiple non-invasive sensors (cameras and depth sensors). We provide details of our system, together with algorithms for the robust tracking of the activities of the children. Our experiments, conducted in the Shirley G. Moore Laboratory School, demonstrate the effectiveness of our methodology."
]
} |
1612.02374 | 2950616766 | Attention Deficit Hyperactivity Disorder (ADHD) and Autism Spectrum Disorder (ASD) are neurodevelopmental conditions which impact on a significant number of children and adults. Currently, the diagnosis of such disorders is done by experts who employ standard questionnaires and look for certain behavioural markers through manual observation. Such methods for their diagnosis are not only subjective, difficult to repeat, and costly but also extremely time consuming. In this work, we present a novel methodology to aid diagnostic predictions about the presence absence of ADHD and ASD by automatic visual analysis of a person's behaviour. To do so, we conduct the questionnaires in a computer-mediated way while recording participants with modern RGBD (Colour+Depth) sensors. In contrast to previous automatic approaches which have focussed only detecting certain behavioural markers, our approach provides a fully automatic end-to-end system for directly predicting ADHD and ASD in adults. Using state of the art facial expression analysis based on Dynamic Deep Learning and 3D analysis of behaviour, we attain classification rates of 96 for Controls vs Condition (ADHD ASD) group and 94 for Comorbid (ADHD+ASD) vs ASD only group. We show that our system is a potentially useful time saving contribution to the diagnostic field of ADHD and ASD. | One of the pioneering works in the field of ASD diagnosis was done by @cite_8 . In this work, the authors developed computer vision based methods to identify certain behavioural markers based on Autism Observation Scale for Infants (AOSI) related to visual attention and motor patterns. For assessing visual attention, they focused on 3 main behavioural markers, namely sharing interest, visual tracking and disengagement of attention. These behavioural markers were detected by estimating the head pose in the left-right direction (yaw) and in the up-down direction (pitch). 
The head pose was estimated by tracking the position of certain facial features (eyes, nose, ear, etc.). | {
"cite_N": [
"@cite_8"
],
"mid": [
"2106131177"
],
"abstract": [
"Attention deficit hyperactivity disorder (ADHD) comprises a deficit in behavioral inhibition. A theoretical model is constructed that links inhibition to 4 executive neuropsychological functions that appear to depend on it for their effective execution: (a) working memory, (b) self-regulation of affect-motivation-arousal, (c) internalization of speech, and (d) reconstitution (behavioral analysis and synthesis). Extended to ADHD, the model predicts that ADHD should be associated with secondary impairments in these 4 executive abilities and the motor control they afford. The author reviews evidence for each of these domains of functioning and finds it to be strongest for deficits in behavioral inhibition, working memory, regulation of motivation, and motor control in those with ADHD. Although the model is promising as a potential theory of self-control and ADHD, far more research is required to evaluate its merits and the many predictions it makes about ADHD."
]
} |
1612.02374 | 2950616766 | Attention Deficit Hyperactivity Disorder (ADHD) and Autism Spectrum Disorder (ASD) are neurodevelopmental conditions which impact on a significant number of children and adults. Currently, the diagnosis of such disorders is done by experts who employ standard questionnaires and look for certain behavioural markers through manual observation. Such methods for their diagnosis are not only subjective, difficult to repeat, and costly but also extremely time consuming. In this work, we present a novel methodology to aid diagnostic predictions about the presence absence of ADHD and ASD by automatic visual analysis of a person's behaviour. To do so, we conduct the questionnaires in a computer-mediated way while recording participants with modern RGBD (Colour+Depth) sensors. In contrast to previous automatic approaches which have focussed only detecting certain behavioural markers, our approach provides a fully automatic end-to-end system for directly predicting ADHD and ASD in adults. Using state of the art facial expression analysis based on Dynamic Deep Learning and 3D analysis of behaviour, we attain classification rates of 96 for Controls vs Condition (ADHD ASD) group and 94 for Comorbid (ADHD+ASD) vs ASD only group. We show that our system is a potentially useful time saving contribution to the diagnostic field of ADHD and ASD. | In @cite_0 , the authors presented another computer vision based approach for studying autism by retrieving social games and other forms of social interactions between adults and children in videos. They proposed to do this by defining social games as quasi-periodic spatio-temporal patterns. In order to retrieve such patterns from unstructured videos, the authors represent each frame using a histogram of spatio-temporal words derived from space-time interest points. The frames are clustered based on their histograms to represent the video as a sequence of cluster (keyframes) labels. 
The quasi-periodic pattern is found by searching for co-occurrences of these keyframe labels in time. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2407966565"
],
"abstract": [
"Computer vision technology has a unique opportunity to impact that study of children’s behavior, by providing a means to automatically capture behavioral data in an noninvasive manner and analyze behavioral interactions between children and their caregivers and peers. We briefly outline a research agenda in Behavior Imaging, which targets the capture and analysis of social and communicative behaviors. We present illustrative results from an on-going project on the content-based retrieval of social games between children and adults from an unstructured video corpus."
]
} |
1612.02101 | 2949561789 | We consider the task of learning a classifier for semantic segmentation using weak supervision in the form of image labels which specify the object classes present in the image. Our method uses deep convolutional neural networks (CNNs) and adopts an Expectation-Maximization (EM) based approach. We focus on the following three aspects of EM: (i) initialization; (ii) latent posterior estimation (E-step) and (iii) the parameter update (M-step). We show that saliency and attention maps, our bottom-up and top-down cues respectively, of simple images provide very good cues to learn an initialization for the EM-based algorithm. Intuitively, we show that before trying to learn to segment complex images, it is much easier and highly effective to first learn to segment a set of simple images and then move towards the complex ones. Next, in order to update the parameters, we propose minimizing the combination of the standard softmax loss and the KL divergence between the true latent posterior and the likelihood given by the CNN. We argue that this combination is more robust to wrong predictions made by the expectation step of the EM method. We support this argument with empirical and visual results. Extensive experiments and discussions show that: (i) our method is very simple and intuitive; (ii) requires only image-level labels; and (iii) consistently outperforms other weakly-supervised state-of-the-art methods with a very high margin on the PASCAL VOC 2012 dataset. | Work in weakly-supervised semantic segmentation has explored varying levels of supervision including combinations of image labels @cite_15 @cite_7 @cite_24 @cite_22 , annotated points @cite_37 , squiggles @cite_27 @cite_6 , and bounding boxes @cite_7 . Papandreou et al. @cite_7 employ an EM-based approach with supervision from image labels and bounding boxes. 
Their method iterates between inferring a latent segmentation (E-step) and optimizing the parameters of a segmentation network (M-step) by treating the inferred latents as the ground-truth segmentation. Similarly, @cite_22 train an initial network using saliency maps, following which a more powerful network is trained using the output of the initial network. The MIL frameworks of @cite_18 and @cite_25 use fully convolutional networks to learn pixel-level semantic segmentations from only image labels. The image labels, however, provide no information about the position of the objects in an image. To address this, localization cues can be used @cite_18 @cite_31 , obtained through indirect methods like bottom-up proposal generation (for example, MCG @cite_32 ), and saliency- @cite_22 and attention-based @cite_35 mechanisms. Localization cues can also be obtained directly through point- and squiggle-level annotations @cite_37 @cite_27 @cite_6 . | {
"cite_N": [
"@cite_35",
"@cite_37",
"@cite_18",
"@cite_31",
"@cite_22",
"@cite_7",
"@cite_32",
"@cite_6",
"@cite_24",
"@cite_27",
"@cite_15",
"@cite_25"
],
"mid": [
"",
"2949145768",
"1945608308",
"2520746254",
"2133515615",
"1529410181",
"1991367009",
"1927251054",
"2952004933",
"2337429362",
"2306289963",
"1931270512"
],
"abstract": [
"",
"The semantic image segmentation task presents a trade-off between test time accuracy and training-time annotation cost. Detailed per-pixel annotations enable training accurate models but are very time-consuming to obtain, image-level class labels are an order of magnitude cheaper but result in less accurate models. We take a natural step from image-level annotation towards stronger supervision: we ask annotators to point to an object if one exists. We incorporate this point supervision along with a novel objectness potential in the training loss function of a CNN model. Experimental results on the PASCAL VOC 2012 benchmark reveal that the combined effect of point-level supervision and objectness potential yields an improvement of 12.9% mIOU over image-level supervision. Further, we demonstrate that models trained with point-level supervision are more accurate than models trained with image-level, squiggle-level or full supervision given a fixed annotation budget.",
"We are interested in inferring object segmentation by leveraging only object class information, and by considering only minimal priors on the object segmentation task. This problem could be viewed as a kind of weakly supervised segmentation task, and naturally fits the Multiple Instance Learning (MIL) framework: every training image is known to have (or not) at least one pixel corresponding to the image class label, and the segmentation task can be rewritten as inferring the pixels belonging to the class of the object (given one image, and its object class). We propose a Convolutional Neural Network-based model, which is constrained during training to put more weight on pixels which are important for classifying the image. We show that at test time, the model has learned to discriminate the right pixels well enough, such that it performs very well on an existing segmentation benchmark, by adding only few smoothing priors. Our system is trained using a subset of the Imagenet dataset and the segmentation experiments are performed on the challenging Pascal VOC dataset (with no fine-tuning of the model on Pascal VOC). Our model beats the state of the art results in weakly supervised object segmentation task by a large margin. We also compare the performance of our model with state of the art fully-supervised segmentation approaches.",
"Training neural networks for semantic segmentation is data hungry. Meanwhile annotating a large number of pixel-level segmentation masks needs enormous human effort. In this paper, we propose a framework with only image-level supervision. It unifies semantic segmentation and object localization with important proposal aggregation and selection modules. They greatly reduce the notorious error accumulation problem that commonly arises in weakly supervised learning. Our proposed training algorithm progressively improves segmentation performance with augmented feedback in iterations. Our method achieves decent results on the PASCAL VOC 2012 segmentation data, outperforming previous image-level supervised methods by a large margin.",
"Recently, significant improvement has been made on semantic object segmentation due to the development of deep convolutional neural networks (DCNNs). Training such a DCNN usually relies on a large number of images with pixel-level segmentation masks, and annotating these images is very costly in terms of both finance and human effort. In this paper, we propose a simple to complex (STC) framework in which only image-level annotations are utilized to learn DCNNs for semantic segmentation. Specifically, we first train an initial segmentation network called Initial-DCNN with the saliency maps of simple images (i.e., those with a single category of major object(s) and clean background). These saliency maps can be automatically obtained by existing bottom-up salient object detection techniques, where no supervision information is needed. Then, a better network called Enhanced-DCNN is learned with supervision from the predicted segmentation masks of simple images based on the Initial-DCNN as well as the image-level annotations. Finally, more pixel-level segmentation masks of complex images (two or more categories of objects with cluttered background), which are inferred by using Enhanced-DCNN and image-level annotations, are utilized as the supervision information to learn the Powerful-DCNN for semantic segmentation. Our method utilizes 40K simple images from Flickr.com and 10K complex images from PASCAL VOC for step-wisely boosting the segmentation network. Extensive experimental results on PASCAL VOC 2012 segmentation benchmark well demonstrate the superiority of the proposed STC framework compared with other state-of-the-arts.",
"Deep convolutional neural networks (DCNNs) trained on a large number of images with strong pixel-level annotations have recently significantly pushed the state-of-art in semantic image segmentation. We study the more challenging problem of learning DCNNs for semantic image segmentation from either (1) weakly annotated training data such as bounding boxes or image-level labels or (2) a combination of few strongly labeled and many weakly labeled images, sourced from one or multiple datasets. We develop Expectation-Maximization (EM) methods for semantic image segmentation model training under these weakly supervised and semi-supervised settings. Extensive experimental evaluation shows that the proposed techniques can learn models delivering competitive results on the challenging PASCAL VOC 2012 image segmentation benchmark, while requiring significantly less annotation effort. We share source code implementing the proposed system at this https URL",
"We propose a unified approach for bottom-up hierarchical image segmentation and object candidate generation for recognition, called Multiscale Combinatorial Grouping (MCG). For this purpose, we first develop a fast normalized cuts algorithm. We then propose a high-performance hierarchical segmenter that makes effective use of multiscale information. Finally, we propose a grouping strategy that combines our multiscale regions into highly-accurate object candidates by exploring efficiently their combinatorial space. We conduct extensive experiments on both the BSDS500 and on the PASCAL 2012 segmentation datasets, showing that MCG produces state-of-the-art contours, hierarchical regions and object candidates.",
"Despite the promising performance of conventional fully supervised algorithms, semantic segmentation has remained an important, yet challenging task. Due to the limited availability of complete annotations, it is of great interest to design solutions for semantic segmentation that take into account weakly labeled data, which is readily available at a much larger scale. Contrasting the common theme to develop a different algorithm for each type of weak annotation, in this work, we propose a unified approach that incorporates various forms of weak supervision - image level tags, bounding boxes, and partial labels - to produce a pixel-wise labeling. We conduct a rigorous evaluation on the challenging Siftflow dataset for various weakly labeled settings, and show that our approach outperforms the state-of-the-art by 12% on per-class accuracy, while maintaining comparable per-pixel accuracy.",
"We present an approach to learn a dense pixel-wise labeling from image-level tags. Each image-level tag imposes constraints on the output labeling of a Convolutional Neural Network (CNN) classifier. We propose Constrained CNN (CCNN), a method which uses a novel loss function to optimize for any set of linear constraints on the output space (i.e. predicted label distribution) of a CNN. Our loss formulation is easy to optimize and can be incorporated directly into standard stochastic gradient descent optimization. The key idea is to phrase the training objective as a biconvex optimization for linear models, which we then relax to nonlinear deep networks. Extensive experiments demonstrate the generality of our new learning framework. The constrained loss yields state-of-the-art results on weakly supervised semantic image segmentation. We further demonstrate that adding slightly more supervision can greatly improve the performance of the learning algorithm.",
"Large-scale data is of crucial importance for learning semantic segmentation models, but annotating per-pixel masks is a tedious and inefficient procedure. We note that for the topic of interactive image segmentation, scribbles are very widely used in academic research and commercial software, and are recognized as one of the most userfriendly ways of interacting. In this paper, we propose to use scribbles to annotate images, and develop an algorithm to train convolutional networks for semantic segmentation supervised by scribbles. Our algorithm is based on a graphical model that jointly propagates information from scribbles to unmarked pixels and learns network parameters. We present competitive object semantic segmentation results on the PASCAL VOC dataset by using scribbles as annotations. Scribbles are also favored for annotating stuff (e.g., water, sky, grass) that has no well-defined shape, and our method shows excellent results on the PASCALCONTEXT dataset thanks to extra inexpensive scribble annotations. Our scribble annotations on PASCAL VOC are available at http: research.microsoft.com en-us um people jifdai downloads scribble_sup.",
"We introduce a new loss function for the weakly-supervised training of semantic image segmentation models based on three guiding principles: to seed with weak localization cues, to expand objects based on the information about which classes can occur in an image, and to constrain the segmentations to coincide with object boundaries. We show experimentally that training a deep convolutional neural network using the proposed loss function leads to substantially better segmentations than previous state-of-the-art methods on the challenging PASCAL VOC 2012 dataset. We furthermore give insight into the working mechanism of our method by a detailed experimental study that illustrates how the segmentation quality is affected by each term of the proposed loss function as well as their combinations.",
"Multiple instance learning (MIL) can reduce the need for costly annotation in tasks such as semantic segmentation by weakening the required degree of supervision. We propose a novel MIL formulation of multi-class semantic segmentation learning by a fully convolutional network. In this setting, we seek to learn a semantic segmentation model from just weak image-level labels. The model is trained end-to-end to jointly optimize the representation while disambiguating the pixel-image label assignment. Fully convolutional training accepts inputs of any size, does not need object proposal pre-processing, and offers a pixelwise loss map for selecting latent instances. Our multi-class MIL loss exploits the further supervision given by images with multiple labels. We evaluate this approach through preliminary experiments on the PASCAL VOC segmentation challenge."
]
} |
1612.02101 | 2949561789 | We consider the task of learning a classifier for semantic segmentation using weak supervision in the form of image labels which specify the object classes present in the image. Our method uses deep convolutional neural networks (CNNs) and adopts an Expectation-Maximization (EM) based approach. We focus on the following three aspects of EM: (i) initialization; (ii) latent posterior estimation (E-step) and (iii) the parameter update (M-step). We show that saliency and attention maps, our bottom-up and top-down cues respectively, of simple images provide very good cues to learn an initialization for the EM-based algorithm. Intuitively, we show that before trying to learn to segment complex images, it is much easier and highly effective to first learn to segment a set of simple images and then move towards the complex ones. Next, in order to update the parameters, we propose minimizing the combination of the standard softmax loss and the KL divergence between the true latent posterior and the likelihood given by the CNN. We argue that this combination is more robust to wrong predictions made by the expectation step of the EM method. We support this argument with empirical and visual results. Extensive experiments and discussions show that: (i) our method is very simple and intuitive; (ii) requires only image-level labels; and (iii) consistently outperforms other weakly-supervised state-of-the-art methods with a very high margin on the PASCAL VOC 2012 dataset. | Our method is most similar to the EM-based approach of @cite_7 . We use saliency and attention maps to learn a network for a simplified semantic segmentation task which provides better initialization of the EM algorithm. This is in contrast to @cite_7 where a network trained for a classification task is used as initialization. 
Also, different from @cite_7, where the latent posterior is approximated by a Dirac delta function (which we argue is too harsh a constraint in a weakly-supervised setting), we instead propose to use the combination of the true posterior distribution and the Dirac delta function to learn the parameters. | {
"cite_N": [
"@cite_7"
],
"mid": [
"1529410181"
],
"abstract": [
"Deep convolutional neural networks (DCNNs) trained on a large number of images with strong pixel-level annotations have recently significantly pushed the state-of-art in semantic image segmentation. We study the more challenging problem of learning DCNNs for semantic image segmentation from either (1) weakly annotated training data such as bounding boxes or image-level labels or (2) a combination of few strongly labeled and many weakly labeled images, sourced from one or multiple datasets. We develop Expectation-Maximization (EM) methods for semantic image segmentation model training under these weakly supervised and semi-supervised settings. Extensive experimental evaluation shows that the proposed techniques can learn models delivering competitive results on the challenging PASCAL VOC 2012 image segmentation benchmark, while requiring significantly less annotation effort. We share source code implementing the proposed system at this https URL"
]
} |
1612.01734 | 2560835640 | With the growing use of popular social media services like Facebook and Twitter it is hard to collect all content from the networks without access to the core infrastructure or paying for it. Thus, ... | There is a lack of research concerning the quality of data in social media and social network research. There are studies on social media and social networks, mainly using data from Twitter. These data are, however, typically collected using Twitter's free garden hose API, with the risk of being an unbalanced and unrepresentative sample of the complete data. Studies that investigate the quality of social media data include , where the former addresses how social media data from online recommendation systems can be evaluated. Sampling studies of social networks are quite common, including work that uses the original graph sampling study by Leskovec and Faloutsos @cite_9 as a baseline. Wang (2015) presents an interesting study on how to efficiently sample a social network with a limited budget. The study uses metrics of the graph to make informed decisions on how to traverse it. More recently, algorithms to sample weighted networks have been presented. @cite_11 further investigates network sampling methods and how to minimize the number of required queries. | {
"cite_N": [
"@cite_9",
"@cite_11"
],
"mid": [
"2146008005",
"2336754337"
],
"abstract": [
"Given a huge real graph, how can we derive a representative sample? There are many known algorithms to compute interesting measures (shortest paths, centrality, betweenness, etc.), but several of them become impractical for large graphs. Thus graph sampling is essential.The natural questions to ask are (a) which sampling method to use, (b) how small can the sample size be, and (c) how to scale up the measurements of the sample (e.g., the diameter), to get estimates for the large graph. The deeper, underlying question is subtle: how do we measure success?.We answer the above questions, and test our answers by thorough experiments on several, diverse datasets, spanning thousands nodes and edges. We consider several sampling methods, propose novel methods to check the goodness of sampling, and develop a set of scaling laws that describe relations between the properties of the original and the sample.In addition to the theoretical contributions, the practical conclusions from our work are: Sampling strategies based on edge selection do not perform well; simple uniform random node selection performs surprisingly well. Overall, best performing methods are the ones based on random-walks and \"forest fire\"; they match very accurately both static as well as evolutionary graph patterns, with sample sizes down to about 15 of the original graph.",
"Random walk is an important tool in many graph mining applications including estimating graph parameters, sampling portions of the graph, and extracting dense communities. In this paper we consider the problem of sampling nodes from a large graph according to a prescribed distribution by using random walk as the basic primitive. Our goal is to obtain algorithms that make a small number of queries to the graph but output a node that is sampled according to the prescribed distribution. Focusing on the uniform distribution case, we study the query complexity of three algorithms and show a near-tight bound expressed in terms of the parameters of the graph such as average degree and the mixing time. Both theoretically and empirically, we show that some algorithms are preferable in practice than the others. We also extend our study to the problem of sampling nodes according to some polynomial function of their degrees; this has implications for designing efficient algorithms for applications such as triangle counting."
]
} |
1612.01734 | 2560835640 | With the growing use of popular social media services like Facebook and Twitter it is hard to collect all content from the networks without access to the core infrastructure or paying for it. Thus, ... | On the topic of graph and social media crawling, @cite_7 present ways to evaluate and understand the data generated in social media. However, many social media crawling studies are obsolete due to updates by Facebook regarding the default privacy policy of users' content, which makes it impossible by default to access Facebook users' content. Consequently, the amount of private Facebook data that can be collected is severely limited. Furthermore, since Facebook does not sell any of its data, there is a need for crawling methods that collect social interaction data from publicly available sources, which is the main motivation for this work. | {
"cite_N": [
"@cite_7"
],
"mid": [
"1774848501"
],
"abstract": [
"The growth of social media over the last decade has revolutionized the way individuals interact and industries conduct business. Individuals produce data at an unprecedented rate by interacting, sharing, and consuming content through social media. Understanding and processing this new type of data to glean actionable patterns presents challenges and opportunities for interdisciplinary research, novel algorithms, and tool development. Social Media Mining integrates social media, social network analysis, and data mining to provide a convenient and coherent platform for students, practitioners, researchers, and project managers to understand the basics and potentials of social media mining. It introduces the unique problems arising from social media data and presents fundamental concepts, emerging issues, and effective algorithms for network analysis and data mining. Suitable for use in advanced undergraduate and beginning graduate courses as well as professional short courses, the text contains exercises of different degrees of difficulty that improve understanding and help apply concepts, principles, and methods in various scenarios of social media mining."
]
} |
1612.01734 | 2560835640 | With the growing use of popular social media services like Facebook and Twitter it is hard to collect all content from the networks without access to the core infrastructure or paying for it. Thus, ... | @cite_2 discuss different methods to traverse social networks from a crawling perspective by focusing on public groups rather than individual users' profiles. Our approach mainly differs from this study in two ways. First, we do not create a social network to traverse and only treat the social media as data, i.e., our proposed method does not require any knowledge of the underlying network. Second, we focus on user interactions represented as so-called Social Interaction Network (SIN) graphs @cite_0, as they show the interactions between users in various communities, i.e., SIN graphs can represent interactions of all users on one particular newsgroup or users interacting on a specific topic. To conclude, there are no prior studies, according to our literature review, that address the challenge of collecting data from Facebook after Facebook changed the default privacy policy of its end-users' content. Most studies use online data repositories and do not address the issue of how to efficiently collect data directly from Facebook, or other social media sites. | {
"cite_N": [
"@cite_0",
"@cite_2"
],
"mid": [
"2068140991",
"1986435040"
],
"abstract": [
"Online Social Networks (OSNs) are popular platforms for interaction, communication and collaboration between friends. In this paper we develop and present a new platform to make interactions in OSNs accessible. Most of today's social networks, including Facebook, Twitter, and Google+ provide support for third party applications to use their social network graph and content. Such applications are strongly dependent on the set of software tools and libraries provided by the OSNs for their own development and growth. For example, third party companies like CNN provide recommendation materials based on user interactions and user's relationship graph. One of the limitations with this graph (or APIs) is the segregation from the shared content. We believe, and present in this paper, that the content shared and the actions taken on the content, creates a Social Interaction Network (SIN). As such, we extend Facebook's current API in order to allow applications to retrieve a weighted graph instead of Facebooks unweighted graph. Finally, we evaluate the proposed platform based on completeness and speed of the crawled results from selected community pages. We also give a few example uses of our API on how it can be used by third party applications.",
"In new generation social networks, we expect that the paradigm of Social Internetworking Systems (SISs) will become progressively more important. Indeed, the possibility of interconnecting users and resources of different social networks enables a lot of strategic applications whose main strength is the integration of different communities that nevertheless preserves their diversity and autonomy. In this new scenario, the role of Social Network Analysis is crucial in studying the evolution of structures, individuals, interactions, and so on, and in extracting powerful knowledge from them. But the preliminary step to do is designing a good way to crawl the underlying graph. Although this aspect has been deeply investigated in the field of social networks, it is an open issue when moving towards SISs. Indeed, we cannot expect that a crawling strategy, specifically designed for social networks, is still valid in a Social Internetworking Scenario, due to its specific topological features. In this paper, we confirm the above claim, giving a strong motivation for our second contribution, which is the definition of a new crawling strategy. This strategy, specifically conceived for SISs, is shown to fully overcome the drawbacks of the state-of-the-art crawling strategies."
]
} |
1612.01936 | 2951533962 | We develop a probabilistic framework for deep learning based on the Deep Rendering Mixture Model (DRMM), a new generative probabilistic model that explicitly capture variations in data due to latent task nuisance variables. We demonstrate that max-sum inference in the DRMM yields an algorithm that exactly reproduces the operations in deep convolutional neural networks (DCNs), providing a first principles derivation. Our framework provides new insights into the successes and shortcomings of DCNs as well as a principled route to their improvement. DRMM training via the Expectation-Maximization (EM) algorithm is a powerful alternative to DCN back-propagation, and initial training results are promising. Classification based on the DRMM and other variants outperforms DCNs in supervised digit classification, training 2-3x faster while achieving similar accuracy. Moreover, the DRMM is applicable to semi-supervised and unsupervised learning tasks, achieving results that are state-of-the-art in several categories on the MNIST benchmark and comparable to state of the art on the CIFAR10 benchmark. | Hierarchical Generative Models. The DRMM is closely related to several hierarchical models, including the Deep Mixture of Factor Analyzers @cite_7 and the Deep Gaussian Mixture Model @cite_8 . | {
"cite_N": [
"@cite_7",
"@cite_8"
],
"mid": [
"2953106669",
"2148464528"
],
"abstract": [
"An efficient way to learn deep density models that have many layers of latent variables is to learn one layer at a time using a model that has only one layer of latent variables. After learning each layer, samples from the posterior distributions for that layer are used as training data for learning the next layer. This approach is commonly used with Restricted Boltzmann Machines, which are undirected graphical models with a single hidden layer, but it can also be used with Mixtures of Factor Analysers (MFAs) which are directed graphical models. In this paper, we present a greedy layer-wise learning algorithm for Deep Mixtures of Factor Analysers (DMFAs). Even though a DMFA can be converted to an equivalent shallow MFA by multiplying together the factor loading matrices at different levels, learning and inference are much more efficient in a DMFA and the sharing of each lower-level factor loading matrix by many different higher level MFAs prevents overfitting. We demonstrate empirically that DMFAs learn better density models than both MFAs and two types of Restricted Boltzmann Machine on a wide variety of datasets.",
"Generative models can be seen as the swiss army knives of machine learning, as many problems can be written probabilistically in terms of the distribution of the data, including prediction, reconstruction, imputation and simulation. One of the most promising directions for unsupervised learning may lie in Deep Learning methods, given their success in supervised learning. However, one of the current problems with deep unsupervised learning methods, is that they often are harder to scale. As a result there are some easier, more scalable shallow methods, such as the Gaussian Mixture Model and the Student-t Mixture Model, that remain surprisingly competitive. In this paper we propose a new scalable deep generative model for images, called the Deep Gaussian Mixture Model, that is a straightforward but powerful generalization of GMMs to multiple layers. The parametrization of a Deep GMM allows it to efficiently capture products of variations in natural images. We propose a new EM-based algorithm that scales well to large datasets, and we show that both the Expectation and the Maximization steps can easily be distributed over multiple machines. In our density estimation experiments we show that deeper GMM architectures generalize better than more shallow ones, with results in the same ballpark as the state of the art."
]
} |
1612.01928 | 2584667682 | Modern automatic speech recognition (ASR) systems need to be robust under acoustic variability arising from environmental, speaker, channel, and recording conditions. Ensuring such robustness to variability is a challenge in modern day neural network-based ASR systems, especially when all types of variability are not seen during training. We attempt to address this problem by encouraging the neural network acoustic model to learn invariant feature representations. We use ideas from recent research on image generation using Generative Adversarial Networks and domain adaptation ideas extending adversarial gradient-based training. A recent work from proposes to use adversarial training for image domain adaptation by using an intermediate representation from the main target classification network to deteriorate the domain classifier performance through a separate neural network. Our work focuses on investigating neural architectures which produce representations invariant to noise conditions for ASR. We evaluate the proposed architecture on the Aurora-4 task, a popular benchmark for noise robust ASR. We show that our method generalizes better than the standard multi-condition training especially when only a few noise categories are seen during training. | Several practical guidelines were proposed for optimizing GANs in @cite_6 and further explored in @cite_3 . Prior work by @cite_2 proposed a method of training a network which can be adapted to new domains. The training data consists of the images labeled with classes of interest and separate domain (image background) labels. The network has a @math -like structure: the image is fed to the first network which produces a hidden representation @math . Then this representation @math is input to two separate networks: a domain classifier network (D) and a target classifier network (R). 
The goal of training is to learn a hidden representation that is invariant to the domain labels and performs well on the target classification task, so that the domain information does not interfere with the target classifier at test time. Similar to the GAN objective, which forces the generated distribution to be close to the data distribution, the adversarial training makes the domain distributions similar to each other. | {
"cite_N": [
"@cite_3",
"@cite_6",
"@cite_2"
],
"mid": [
"2432004435",
"2173520492",
""
],
"abstract": [
"We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework. We focus on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic. Unlike most work on generative models, our primary goal is not to train a model that assigns high likelihood to test data, nor do we require the model to be able to learn well without using any labels. Using our new techniques, we achieve state-of-the-art results in semi-supervised classification on MNIST, CIFAR-10 and SVHN. The generated images are of high quality as confirmed by a visual Turing test: our model generates MNIST samples that humans cannot distinguish from real data, and CIFAR-10 samples that yield a human error rate of 21.3 . We also present ImageNet samples with unprecedented resolution and show that our methods enable the model to learn recognizable features of ImageNet classes.",
"In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.",
""
]
} |
1612.01680 | 2952653024 | Application of formal models provides many benefits for the software and system development, however, the learning curve of formal languages could be a critical factor for an industrial project. Thus, a natural language specification that reflects all the aspects of the formal model might help to understand the model and be especially useful for the stakeholders who do not know the corresponding formal language. Moreover, an automated generation of the documentation from the model would replace manual updates of the documentation for the cases the model is modified. This paper presents an ongoing work on generating natural language specifications from formal models. Our goal is to generate documentation in English from the basic modelling artefacts, such as data types, state machines, and architectural components. To allow further formal analysis of the generated specification, we restrict English to its subset, Attempto Controlled English. | As several attempts have been made to automate requirements capture, there is another approach for the automatic construction of an object-oriented design model in UML diagrams from a natural language requirements specification. Mala and Uma @cite_33 present a methodology that utilizes automatic reference resolution and eliminates user intervention. The input problem statement is split into sentences by a sentence splitter and tagged in order to obtain the part of speech of every word. The nouns and verbs are then identified from the tagged text based on simple phrasal grammars. A reference resolver is used to remove the ambiguity introduced by pronouns. The final text is then simplified by the normaliser, which maps the words onto object-oriented system elements. The result produced by the system is compared with human output based on a basic analysis of the text. The approach is promising as a method to restructure natural language text into a modelling language with respect to system requirements specifications.
Although the tagger and reference resolver lack efficiency, which results in unnatural expressions and misunderstandings, this can be improved by building a knowledge base for generating the system elements. | {
"cite_N": [
"@cite_33"
],
"mid": [
"1755440691"
],
"abstract": [
"Application of natural language understanding to requirements gathering remains a field that has only limited explorations so far. This paper presents an approach to extract the object oriented elements of the required system. This approach starts with assigning the parts of speech tags to each word in the given input document. Further, to resolve the ambiguity posed by the pronouns, the pronoun resolutions are performed before normalizing the text. Finally the elements of the object-oriented system namely the classes, the attributes, methods and relationships between the classes, sequence of actions, the use-cases and actors are identified by mapping the 'parts of speech- tagged' words onto the Object Oriented Modeling Language elements using mapping rules which is the key to a successful implementation of user requirements."
]
} |
1612.01680 | 2952653024 | Application of formal models provides many benefits for the software and system development, however, the learning curve of formal languages could be a critical factor for an industrial project. Thus, a natural language specification that reflects all the aspects of the formal model might help to understand the model and be especially useful for the stakeholders who do not know the corresponding formal language. Moreover, an automated generation of the documentation from the model would replace manual updates of the documentation for the cases the model is modified. This paper presents an ongoing work on generating natural language specifications from formal models. Our goal is to generate documentation in English from the basic modelling artefacts, such as data types, state machines, and architectural components. To allow further formal analysis of the generated specification, we restrict English to its subset, Attempto Controlled English. | Bryant @cite_6 suggested the theory of Two-Level Grammar for natural language requirements specification, in conjunction with a Specification Development Environment that allows user interaction to refine model concepts. This approach enables the automation of the transition from requirements to design and implementation, as well as producing an understandable document on which the software system will be based. Ilieva and Ormandjieva @cite_28 proposed an approach for transforming natural language software requirements specifications into a formal presentation. The authors divided their method into three main processing parts: (1) the Linguistic Component, i.e., the text sentences to be analysed; (2) the Semantic Network, i.e., the formal NL presentation; and (3) modelling, the final phase of the formal presentation of the specification. However, the approach of Ilieva and Ormandjieva involves a manual human analysis process to break down problems into smaller parts that are easily understood. | {
"cite_N": [
"@cite_28",
"@cite_6"
],
"mid": [
"1599004321",
"1881220596"
],
"abstract": [
"Software requirements specification is a critical activity of the software process, as errors at this stage inevitably lead to problems later on in system design and implementation. The requirements are written in natural language, with the potential for ambiguity, contradiction or misunderstanding, or simply an inability of developers to deal with a large amount of information. This paper proposes a methodology for the natural language processing of textual descriptions of the requirements of an unlimited natural language and their automatic mapping to the object-oriented analysis model.",
"A methodology is proposed for the formal development of software systems from a user's requirements specification in natural language into a complete implementation, proceeding through the steps of formal specification, and detailed design and implementation in an automated manner. Our methodology is based upon the theories of two-level grammar (TLG) and object-oriented design and takes advantage of the existence of several existing software design tools. We develop an iterative transformation process from the natural language specification into the final implementation, with a specification development environment to assist the user, as well as the designer in accomplishing this transformation. Our underlying formal specification methodology may also be used in the final development of the implementation. This methodology is a major advance in object-oriented software development and the software engineering process in general."
]
} |
1612.01627 | 2954161227 | We study response selection for multi-turn conversation in retrieval-based chatbots. Existing work either concatenates utterances in context or matches a response with a highly abstract context vector finally, which may lose relationships among utterances or important contextual information. We propose a sequential matching network (SMN) to address both problems. SMN first matches a response with each utterance in the context on multiple levels of granularity, and distills important matching information from each pair as a vector with convolution and pooling operations. The vectors are then accumulated in a chronological order through a recurrent neural network (RNN) which models relationships among utterances. The final matching score is calculated with the hidden states of the RNN. An empirical study on two public data sets shows that SMN can significantly outperform state-of-the-art methods for response selection in multi-turn conversation. | Recently, building a chatbot with data-driven approaches @cite_3 @cite_1 has drawn significant attention. Existing work along this line includes retrieval-based methods @cite_28 @cite_1 @cite_23 @cite_16 @cite_17 @cite_9 @cite_19 and generation-based methods @cite_2 @cite_6 @cite_12 @cite_26 @cite_24 @cite_8 @cite_4. Our work is a retrieval-based method, in which we study context-based response selection. Early studies of retrieval-based chatbots focus on response selection for single-turn conversation @cite_27 @cite_1 @cite_23 @cite_17. Recently, researchers have begun to pay attention to multi-turn conversation. For example, some approaches match a response with the literal concatenation of context utterances; others concatenate context utterances with the input message as reformulated queries and perform matching with a deep neural network architecture; still others improve multi-turn response selection with a multi-view model including an utterance view and a word view.
Our model is different in that it first matches a response with each utterance, and then accumulates the matching information, rather than sentence representations, with a GRU, so that useful information for matching can be sufficiently retained. | {
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_8",
"@cite_28",
"@cite_9",
"@cite_1",
"@cite_3",
"@cite_6",
"@cite_24",
"@cite_19",
"@cite_27",
"@cite_23",
"@cite_2",
"@cite_16",
"@cite_12",
"@cite_17"
],
"mid": [
"1958706068",
"2418993857",
"2521114121",
"2951359136",
"2561368124",
"295828404",
"10957333",
"889023230",
"2311783643",
"2123395566",
"2102531443",
"1910529161",
"2159640018",
"2339852062",
"1591706642",
"2346566861"
],
"abstract": [
"Sequence-to-sequence neural network models for generation of conversational responses tend to generate safe, commonplace responses (e.g., \"I don't know\") regardless of the input. We suggest that the traditional objective function, i.e., the likelihood of output (response) given input (message) is unsuited to response generation tasks. Instead we propose using Maximum Mutual Information (MMI) as the objective function in neural models. Experimental results demonstrate that the proposed MMI models produce more diverse, interesting, and appropriate responses, yielding substantive gains in BLEU scores on two conversational datasets and in human evaluations.",
"We introduce the multiresolution recurrent neural network, which extends the sequence-to-sequence framework to model natural language generation as two parallel discrete stochastic processes: a sequence of high-level coarse tokens, and a sequence of natural language tokens. There are many ways to estimate or learn the high-level coarse tokens, but we argue that a simple extraction procedure is sufficient to capture a wealth of high-level discourse semantics. Such procedure allows training the multiresolution recurrent neural network by maximizing the exact joint log-likelihood over both sequences. In contrast to the standard log- likelihood objective w.r.t. natural language tokens (word perplexity), optimizing the joint log-likelihood biases the model towards modeling high-level abstractions. We apply the proposed model to the task of dialogue response generation in two challenging domains: the Ubuntu technical support domain, and Twitter conversations. On Ubuntu, the model outperforms competing approaches by a substantial margin, achieving state-of-the-art results according to both automatic evaluation metrics and a human evaluation study. On Twitter, the model appears to generate more relevant and on-topic responses according to automatic evaluation metrics. Finally, our experiments demonstrate that the proposed model is more adept at overcoming the sparsity of natural language and is better able to capture long-term structure.",
"We consider incorporating topic information into the sequence-to-sequence framework to generate informative and interesting responses for chatbots. To this end, we propose a topic aware sequence-to-sequence (TA-Seq2Seq) model. The model utilizes topics to simulate prior knowledge of human that guides them to form informative and interesting responses in conversation, and leverages the topic information in generation by a joint attention mechanism and a biased generation probability. The joint attention mechanism summarizes the hidden vectors of an input message as context vectors by message attention, synthesizes topic vectors by topic attention from the topic words of the message obtained from a pre-trained LDA model, and let these vectors jointly affect the generation of words in decoding. To increase the possibility of topic words appearing in responses, the model modifies the generation probability of topic words by adding an extra probability item to bias the overall distribution. Empirical study on both automatic evaluation metrics and human annotations shows that TA-Seq2Seq can generate more informative and interesting responses, and significantly outperform the-state-of-the-art response generation models.",
"Semantic matching is of central importance to many natural language tasks bordes2014semantic,RetrievalQA . A successful matching algorithm needs to adequately model the internal structures of language objects and the interaction between them. As a step toward this goal, we propose convolutional neural network models for matching two sentences, by adapting the convolutional strategy in vision and speech. The proposed models not only nicely represent the hierarchical structures of sentences with their layer-by-layer composition and pooling, but also capture the rich matching patterns at different levels. Our models are rather generic, requiring no prior knowledge on language, and can hence be applied to matching tasks of different nature and in different languages. The empirical study on a variety of matching tasks demonstrates the efficacy of the proposed model on a variety of matching tasks and its superiority to competitor models.",
"",
"Human computer conversation is regarded as one of the most difficult problems in artificial intelligence. In this paper, we address one of its key sub-problems, referred to as short text conversation, in which given a message from human, the computer returns a reasonable response to the message. We leverage the vast amount of short conversation data available on social media to study the issue. We propose formalizing short text conversation as a search problem at the first step, and employing state-of-the-art information retrieval (IR) techniques to carry out the task. We investigate the significance as well as the limitation of the IR approach. Our experiments demonstrate that the retrieval-based model can make the system behave rather \"intelligently\", when combined with a huge repository of conversation data from social media.",
"We present a data-driven approach to generating responses to Twitter status posts, based on phrase-based Statistical Machine Translation. We find that mapping conversational stimuli onto responses is more difficult than translating between languages, due to the wider range of possible responses, the larger fraction of unaligned words phrases, and the presence of large phrase pairs whose alignment cannot be further decomposed. After addressing these challenges, we compare approaches based on SMT and Information Retrieval in a human evaluation. We show that SMT outperforms IR on this task, and its output is preferred over actual human responses in 15 of cases. As far as we are aware, this is the first work to investigate the use of phrase-based SMT to directly translate a linguistic stimulus into an appropriate response.",
"We investigate the task of building open domain, conversational dialogue systems based on large dialogue corpora using generative models. Generative models produce system responses that are autonomously generated word-by-word, opening up the possibility for realistic, flexible interactions. In support of this goal, we extend the recently proposed hierarchical recurrent encoder-decoder neural network to the dialogue domain, and demonstrate that this model is competitive with state-of-the-art neural language models and back-off n-gram models. We investigate the limitations of this and similar approaches, and show how its performance can be improved by bootstrapping the learning from a larger question-answer pair corpus and from pretrained word embeddings.",
"We present persona-based models for handling the issue of speaker consistency in neural response generation. A speaker model encodes personas in distributed embeddings that capture individual characteristics such as background information and speaking style. A dyadic speaker-addressee model captures properties of interactions between two interlocutors. Our models yield qualitative performance improvements in both perplexity and BLEU scores over baseline sequence-to-sequence models, with similar gains in speaker consistency as measured by human judges.",
"Learning to chat is a fascinating machine learning task with many applications from user-modeling to artificial intelligence. However, most of the work to date relies on designing large hard-wired sets of rules. On the other hand, the growth of social networks on the web provides large quantities of conversational data, suggesting that the time is ripe to train chatbots in a more data driven way. A first step is to learn to chat by ranking the response repository to provide responses that are consistent with the user’s expectations. Here we use a three phase ranking approach for predicting suitable responses to a query in a conversation. Sentences are first filtered, then efficiently ranked, and then more precisely re-ranked in order to select the most suitable response. The filtering is done using part-of-speech tagging, hierarchical clustering, and entropy analysis methods. The first phase ranking is performed by generating a large set of high-level grammatical and conceptual features, exploiting dictionaries and similarity measurement resources such as wikipedia similarity graphs, and by ranking using a boosted regression tree (MART) classifier. The more precise (conceptual) ranking is performed by designing more conceptual features obtained from similarity measurement resources such as query refinement and suggestion systems, sentence paraphrasing techniques, LDA topic modeling and structural clustering, and entropy analysis over wikipedia similarity graphs. The sentences are then ranked according to the confidence of a Transfer AdaBoost classifier, trained using transfer-learning methods in which a classification over a large corpus of noisy twitter and live-journal data is considered as the source domain, and the collaborative ranking of actively collected conversations, which are labeled in an online framework using user feedback, is considered as the destination domain. 
We give results on the performance of each step, and on the accuracy of our three phase ranking framework.",
"Natural language conversation is widely regarded as a highly difficult problem, which is usually attacked with either rule-based or learning-based models. In this paper we propose a retrieval-based automatic response model for short-text conversation, to exploit the vast amount of short conversation instances available on social media. For this purpose we introduce a dataset of short-text conversation based on the real-world instances from Sina Weibo (a popular Chinese microblog service), which will be soon released to public. This dataset provides rich collection of instances for the research on finding natural and relevant short responses to a given short text, and useful for both training and testing of conversation models. This dataset consists of both naturally formed conversations, manually labeled data, and a large repository of candidate responses. Our preliminary experiments demonstrate that the simple retrieval-based conversation model performs reasonably well when combined with the rich instances in our dataset.",
"Many tasks in natural language processing, ranging from machine translation to question answering, can be reduced to the problem of matching two sentences or more generally two short texts. We propose a new approach to the problem, called Deep Match Tree (DeepMatch @math ), under a general setting. The approach consists of two components, 1) a mining algorithm to discover patterns for matching two short-texts, defined in the product space of dependency trees, and 2) a deep neural network for matching short texts using the mined patterns, as well as a learning algorithm to build the network having a sparse structure. We test our algorithm on the problem of matching a tweet and a response in social media, a hard matching problem proposed in [, 2013], and show that DeepMatch @math can outperform a number of competitor models including one without using dependency trees and one based on word-embedding, all with large margins",
"We propose Neural Responding Machine (NRM), a neural network-based response generator for Short-Text Conversation. NRM takes the general encoder-decoder framework: it formalizes the generation of response as a decoding process based on the latent representation of the input text, while both encoding and decoding are realized with recurrent neural networks (RNN). The NRM is trained with a large amount of one-round conversation data collected from a microblogging service. Empirical study shows that NRM can generate grammatically correct and content-wise appropriate responses to over 75 of the input text, outperforming state-of-the-arts in the same setting, including retrieval-based and SMT-based models.",
"To establish an automatic conversation system between humans and computers is regarded as one of the most hardcore problems in computer science, which involves interdisciplinary techniques in information retrieval, natural language processing, artificial intelligence, etc. The challenges lie in how to respond so as to maintain a relevant and continuous conversation with humans. Along with the prosperity of Web 2.0, we are now able to collect extremely massive conversational data, which are publicly available. It casts a great opportunity to launch automatic conversation systems. Owing to the diversity of Web resources, a retrieval-based conversation system will be able to find at least some responses from the massive repository for any user inputs. Given a human issued message, i.e., query, our system would provide a reply after adequate training and learning of how to respond. In this paper, we propose a retrieval-based conversation system with the deep learning-to-respond schema through a deep neural network framework driven by web data. The proposed model is general and unified for different conversation scenarios in open domain. We incorporate the impact of multiple data inputs, and formulate various features and factors with optimization into the deep learning framework. In the experiments, we investigate the effectiveness of the proposed deep neural network structures with better combinations of all different evidence. We demonstrate significant performance improvement against a series of standard and state-of-art baselines in terms of p@1, MAP, nDCG, and MRR for conversational purposes.",
"Conversational modeling is an important task in natural language understanding and machine intelligence. Although previous approaches exist, they are often restricted to specific domains (e.g., booking an airline ticket) and require hand-crafted rules. In this paper, we present a simple approach for this task which uses the recently proposed sequence to sequence framework. Our model converses by predicting the next sentence given the previous sentence or sentences in a conversation. The strength of our model is that it can be trained end-to-end and thus requires much fewer hand-crafted rules. We find that this straightforward model can generate simple conversations given a large conversational training dataset. Our preliminary results suggest that, despite optimizing the wrong objective function, the model is able to converse well. It is able extract knowledge from both a domain specific dataset, and from a large, noisy, and general domain dataset of movie subtitles. On a domain-specific IT helpdesk dataset, the model can find a solution to a technical problem via conversations. On a noisy open-domain movie transcript dataset, the model can perform simple forms of common sense reasoning. As expected, we also find that the lack of consistency is a common failure mode of our model.",
"Message response matching is an important task within retrieval-based chatbots. We present a topic augmented neural network(TANN), comprising a sentence embedding layer, a topic embedding layer, and a matching layer, to match messages and response candidates. TANN inherits the benefits of neutral networks on matching sentence pairs, and leverages extra topic information and their corresponding weights as prior knowledge into a matching process. In TANN, the sentence embedding layer embeds an input message and a response into a vector space, while the topic embedding layer forms a topic vector by a linear combination of the embedding of topic words whose weights are determined by both themselves and the message vector. The message vector, the response vector, and the topic vector are then fed to the matching layer to calculate a matching score. The extensive evaluation of TANN, using large human annotated data sets, shows that TANN outperforms simple neutral network methods, while beating other typical matching models with a large margin."
]
} |
1612.01593 | 2952195300 | The surge of mobile data traffic forces network operators to cope with capacity shortage. The deployment of small cells in 5G networks is meant to reduce latency and backhaul traffic, and to increase radio access capacity. In this context, mobile edge computing technology will be used to manage dedicated cache space in the radio access network. Thus, mobile network operators will be able to provision OTT content providers with new caching services to enhance the quality of experience of their customers on the move. In turn, the cache memory in the mobile edge network will become a shared resource. Hence, we study a competitive caching scheme where contents are stored at a given price set by the mobile network operator. We first formulate a resource allocation problem for a tagged content provider seeking to minimize the expected missed cache rate. The optimal caching policy is derived accounting for popularity and availability of contents, the spatial distribution of small cells, and the caching strategies of competing content providers. It is shown to induce a specific order on contents to be cached based on their popularity and availability. Next, we study a game among content providers in the form of a generalized Kelly mechanism with bounded strategy sets and heterogeneous players. Existence and uniqueness of the Nash equilibrium are proved. Finally, extensive numerical results validate and characterize the performance of the model. | In @cite_6 the authors consider a device-to-device (D2D) network and derive throughput scaling laws under coded caching and spatial reuse. Content delay is optimized in @cite_3 by performing joint routing and caching, whereas in @cite_4 a distributed matching scheme based on the deferred acceptance algorithm provides association of users and SC base stations based on latency figures. Similarly to our model, in @cite_5 SC base stations are distributed according to a Poisson point process.
The contents to be cached are chosen to minimize a cost which depends on the expected number of missed cache hits. | {
"cite_N": [
"@cite_5",
"@cite_3",
"@cite_4",
"@cite_6"
],
"mid": [
"1767905185",
"2949272603",
"2003892785",
"2090082294"
],
"abstract": [
"A heterogenous network with base stations (BSs), small base stations (SBSs), and users distributed according to independent Poisson point processes is considered. SBS nodes are assumed to possess high storage capacity and to form a distributed caching network. Popular files are stored in local caches of SBSs, so that a user can download the desired files from one of the SBSs in its vicinity. The offloading-loss is captured via a cost function that depends on the random caching strategy proposed here. The popularity profile of cached content is unknown and estimated using instantaneous demands from users within a specified time interval. An estimate of the cost function is obtained from which an optimal random caching strategy is devised. The training time to achieve an @math difference between the achieved and optimal costs is finite provided the user density is greater than a predefined threshold, and scales as @math , where @math is the support of the popularity profile. A transfer learning-based approach to improve this estimate is proposed. The training time is reduced when the popularity profile is modeled using a parametric family of distributions; the delay is independent of @math and scales linearly with the dimension of the distribution parameter.",
"We investigate the problem of optimal request routing and content caching in a heterogeneous network supporting in-network content caching with the goal of minimizing average content access delay. Here, content can either be accessed directly from a back-end server (where content resides permanently) or be obtained from one of multiple in-network caches. To access a piece of content, a user must decide whether to route its request to a cache or to the back-end server. Additionally, caches must decide which content to cache. We investigate the problem complexity of two problem formulations, where the direct path to the back-end server is modeled as i) a congestion-sensitive or ii) a congestion-insensitive path, reflecting whether or not the delay of the uncached path to the back-end server depends on the user request load, respectively. We show that the problem is NP-complete in both cases. We prove that under the congestion-insensitive model the problem can be solved optimally in polynomial time if each piece of content is requested by only one user, or when there are at most two caches in the network. We also identify a structural property of the user-cache graph that potentially makes the problem NP-complete. For the congestion-sensitive model, we prove that the problem remains NP-complete even if there is only one cache in the network and each content is requested by only one user. We show that approximate solutions can be found for both models within a (1-1 e) factor of the optimal solution, and demonstrate a greedy algorithm that is found to be within 1 of optimal for small problem sizes. Through trace-driven simulations we evaluate the performance of our greedy algorithms, which show up to a 50 reduction in average delay over solutions based on LRU content caching.",
"Anticipating multimedia file requests via caching at the small cell base stations (SBSs) of a cellular network has emerged as a promising technique for optimizing the quality of service (QoS) of wireless user equipments (UEs). However, developing efficient caching strategies must properly account for specific small cell constraints, such as backhaul congestion and limited storage capacity. In this paper, we address the problem of devising a usercell association, in which the SBSs exploit caching capabilities to overcome the backhaul capacity limitations and enhance the users’ QoS. In the proposed approach, the SBSs individually decide on which UEs to service based on both content availability and on the data rates they can deliver, given the interference and backhaul capacity limitations. We formulate the problem as a oneto-many matching game between SBSs and UEs. To solve this game, we propose a distributed algorithm, based on the deferred acceptance scheme, that enables the players (i.e., UEs and SBSs) to self-organize into a stable matching, in a reasonable number of algorithm iterations. Simulation results show that the proposed cell association scheme yields significant gains, reaching up to 21 improvement compared to a traditional cell association techniques with no caching considerations.",
"We consider a wireless Device-to-Device (D2D) network where communication is restricted to be single-hop, users make arbitrary requests from a finite library of possible files and user devices cache information in the form of linear combinations of packets from the files in the library (coded caching). We consider the combined effect of coding in the caching and delivery phases, achieving \"coded multicast gain\", and of spatial reuse due to local short-range D2D communication. Somewhat counterintuitively, we show that the coded multicast gain and the spatial reuse gain do not cumulate, in terms of the throughput scaling laws. In particular, the spatial reuse gain shown in our previous work on uncoded random caching and the coded multicast gain shown in this paper yield the same scaling laws behavior, but no further scaling law gain can be achieved by using both coded caching and D2D spatial reuse."
]
} |
1612.01593 | 2952195300 | The surge of mobile data traffic forces network operators to cope with capacity shortage. The deployment of small cells in 5G networks is meant to reduce latency and backhaul traffic, and to increase radio access capacity. In this context, mobile edge computing technology will be used to manage dedicated cache space in the radio access network. Thus, mobile network operators will be able to provision OTT content providers with new caching services to enhance the quality of experience of their customers on the move. In turn, the cache memory in the mobile edge network will become a shared resource. Hence, we study a competitive caching scheme where contents are stored at a given price set by the mobile network operator. We first formulate a resource allocation problem for a tagged content provider seeking to minimize the expected missed cache rate. The optimal caching policy is derived accounting for popularity and availability of contents, the spatial distribution of small cells, and the caching strategies of competing content providers. It is shown to induce a specific order on contents to be cached based on their popularity and availability. Next, we study a game among content providers in the form of a generalized Kelly mechanism with bounded strategy sets and heterogeneous players. Existence and uniqueness of the Nash equilibrium are proved. Finally, extensive numerical results validate and characterize the performance of the model. | In @cite_13 a model for caching contents over a D2D network is proposed. A convex optimization problem is obtained and solved using a dual optimization algorithm. In our formulation we obtain closed-form solutions and properties of the optimal cost function. | {
"cite_N": [
"@cite_13"
],
"mid": [
"2092851923"
],
"abstract": [
"We consider a mobile content delivery network (mCDN) in which special mobile devices designated as caching servers (caching-server device: CSD) can provide mobile stations with popular contents on demand via device-to-device (D2D) communication links. On the assumption that mobile CSD's are randomly distributed by a Poisson point process (PPP), an optimization problem is formulated to determine the probability of storing the individual content in each server in a manner that minimizes the average caching failure rate. Further, we present a low-complexity search algorithm, optimum dual-solution searching algorithm (ODSA), for solving this optimization problem. We demonstrate that the proposed ODSA takes fewer iterations, on the order of О (log N) searches, for caching N contents in the system to find the optimal solution, as compared to the number of iterations in the conventional subgradient method, with an acceptable accuracy in practice. Furthermore, we identify the important characteristics of the optimal caching policies in the mobile environment that would serve as a useful aid in designing the mCDN."
]
} |
1612.01593 | 2952195300 | The surge of mobile data traffic forces network operators to cope with capacity shortage. The deployment of small cells in 5G networks is meant to reduce latency and backhaul traffic, and to increase radio access capacity. In this context, mobile edge computing technology will be used to manage dedicated cache space in the radio access network. Thus, mobile network operators will be able to provision OTT content providers with new caching services to enhance the quality of experience of their customers on the move. In turn, the cache memory in the mobile edge network will become a shared resource. Hence, we study a competitive caching scheme where contents are stored at a given price set by the mobile network operator. We first formulate a resource allocation problem for a tagged content provider seeking to minimize the expected missed cache rate. The optimal caching policy is derived accounting for popularity and availability of contents, the spatial distribution of small cells, and the caching strategies of competing content providers. It is shown to induce a specific order on contents to be cached based on their popularity and availability. Next, we study a game among content providers in the form of a generalized Kelly mechanism with bounded strategy sets and heterogeneous players. Existence and uniqueness of the Nash equilibrium are proved. Finally, extensive numerical results validate and characterize the performance of the model. | In @cite_7 a coded caching strategy is developed to optimize content placement based on SC association patterns. In @cite_22 a Stackelberg game is investigated to study a caching system consisting of a content provider and multiple network providers. In that model, the content provider leases its videos to the network providers to gain profit, and the network providers aim to save backhaul costs by caching popular videos.
In @cite_11 the authors model a wireless content distribution system where contents are replicated at multiple access points -- depending on popularity -- so as to maximally create network-coding opportunities during delivery. Finally, @cite_23 proposes proactive caching in order to take advantage of content popularity. The scheme we develop in this work can also be applied to proactive caching. | {
"cite_N": [
"@cite_23",
"@cite_22",
"@cite_7",
"@cite_11"
],
"mid": [
"2051773775",
"2344268109",
"1972458861",
""
],
"abstract": [
"This article explores one of the key enablers of beyond 4G wireless networks leveraging small cell network deployments, proactive caching. Endowed with predictive capabilities and harnessing recent developments in storage, context awareness, and social networks, peak traffic demands can be substantially reduced by proactively serving predictable user demands via caching at base stations and users' devices. In order to show the effectiveness of proactive caching, we examine two case studies that exploit the spatial and social structure of the network, where proactive caching plays a crucial role. First, in order to alleviate backhaul congestion, we propose a mechanism whereby files are proactively cached during off-peak periods based on file popularity and correlations among user and file patterns. Second, leveraging social networks and D2D communications, we propose a procedure that exploits the social structure of the network by predicting the set of influential users to (proactively) cache strategic contents and disseminate them to their social ties via D2D communications. Exploiting this proactive caching paradigm, numerical results show that important gains can be obtained for each case study, with backhaul savings and a higher ratio of satisfied users of up to 22 and 26 percent, respectively. Higher gains can be further obtained by increasing the storage capability at the network edge.",
"Evidence indicates that downloading on-demand videos accounts for a dramatic increase in data traffic over cellular networks. Caching popular videos in the storage of small-cell base stations (SBSs), namely, small-cell caching, is an efficient technology for mitigating redundant data transmissions over backhaul channels in heterogeneous networks (HetNets). In this paper, we consider a commercialized small-cell caching system consisting of a video retailer (VR), multiple network service providers (NSPs), and mobile users (MUs). The VR leases its popular videos to the NSPs to make profits, and the NSPs, after placing these videos to their SBSs, can efficiently reduce the repetitive video transmissions over their backhaul channels. We study such a system within the framework of the Stackelberg game. We first model the MUs and SBSs as two independent Poisson point processes (PPPs) and develop the probability of the event that an MU can obtain the demanded video directly from the memory of an SBS. Then, based on the derived probability, we formulate a Stackelberg game to maximize jointly the average profit of the VR and the NSPs. Moreover, we investigate the Stackelberg equilibrium (SE) via solving an optimization problem. Numerical results are provided for verifying the proposed framework by showing its effectiveness on pricing and resource allocation.",
"Caching has emerged as a vital tool in modern communication systems for reducing peak data rates by allowing popular files to be pre-fetched and stored locally at end users' devices. With the shift in paradigm from homogeneous cellular networks to the heterogeneous ones, the concept of data offloading to small cell base stations (sBS) has garnered significant attention. Caching at these small cell base stations has recently been proposed, where popular files are pre-fetched and stored locally in order to avoid bottlenecks in the limited capacity backhaul connection link to the core network. In this paper, we study distributed caching strategies in such a heterogeneous small cell wireless network from a reinforcement learning perspective. Using state of the art results, it can be shown that the optimal joint cache content placement in the sBSs turns out to be a NP-hard problem even when the sBS's are aware of the popularity profile of the files that are to be cached. To address this problem, we propose a coded caching framework, where the sBSs learn the popularity profile of the files (based on their demand history) via a combinatorial multi-armed bandit framework. The sBSs then pre-fetch segments of the Fountain-encoded versions of the popular files at regular intervals to serve users' requests. We show that the proposed coded caching framework can be modeled as a linear program that takes into account the network connectivity and thereby jointly designs the caching strategies. Numerical results are presented to show the benefits of the joint coded caching technique over naive decentralized cache placement strategies.",
""
]
} |
1612.01941 | 2584957963 | When faced with complex choices, users refine their own preference criteria as they explore the catalogue of options. In this paper we propose an approach to preference elicitation suited for this scenario. We extend Coactive Learning, which iteratively collects manipulative feedback, to optionally query example critiques. User critiques are integrated into the learning model by dynamically extending the feature space. Our formulation natively supports constructive learning tasks, where the option catalogue is generated on-the-fly. We present an upper bound on the average regret suffered by the learner. Our empirical analysis highlights the promise of our approach. | Coactive Learning is an interaction model for learning user preferences from observable behavior @cite_16 , recently employed in learning to rank and online structured prediction tasks @cite_14 @cite_13 . For an overview of the method, see the next section. The underlying weight learning procedure can range from a simple perceptron @cite_9 to more specialized online learners @cite_14 . Further extensions include support for approximate inference @cite_2 and multi-task learning @cite_1 . These extensions are orthogonal to our main contribution, and may prove useful when used in tandem. However, in this paper, we only consider the original formulation, for simplicity. Our approach inherits several perks from Coactive Learning, including a theoretical characterization of the average regret @cite_14 and native support for constructive tasks. The main difference between the two methods, which is also our main contribution, is that in our approach the feature space grows dynamically through critiquing interaction. Coactive Learning instead works with a static feature space, and is therefore incapable of handling users with varying preference criteria. | {
"cite_N": [
"@cite_14",
"@cite_9",
"@cite_1",
"@cite_2",
"@cite_16",
"@cite_13"
],
"mid": [
"",
"2040870580",
"2268703685",
"23751767",
"2953266551",
"2250296445"
],
"abstract": [
"",
"",
"In this paper we investigate the use of coactive learning in a multitask setting. In coactive learning, an expert presents the learner with a problem and the learner returns a candidate solution. The expert then improves on the solution if necessary and presents the improved solution to the learner. The goal for the learner is to learn to produce solutions which cannot be further improved by the expert while minimizing the average expert effort. In this paper, we consider the setting where there are multiple experts (tasks), and in each iteration one expert presents a problem to the learner. While the experts are expected to have different solution preferences, they are also assumed to share similarities, which should enable generalization across experts. We analyze several algorithms for this setting and derive bounds on the average expert effort during learning. Our main contribution is the balanced Perceptron algorithm, which is the first coactive learning algorithm that is both able to generalize across experts when possible, while also guaranteeing convergence to optimal solutions for individual experts. Our experiments in three domains confirm that this algorithm is effective in the multitask setting, compared to natural baselines.",
"Coactive learning is an online problem solving setting where the solutions provided by a solver are interactively improved by a domain expert, which in turn drives learning. In this paper we extend the study of coactive learning to problems where obtaining a globally optimal or near-optimal solution may be intractable or where an expert can only be expected to make small, local improvements to a candidate solution. The goal of learning in this new setting is to minimize the cost as measured by the expert effort over time. We first establish theoretical bounds on the average cost of the existing coactive Perceptron algorithm. In addition, we consider new online algorithms that use cost-sensitive and Passive-Aggressive (PA) updates, showing similar or improved theoretical bounds. We provide an empirical evaluation of the learners in various domains, which show that the Perceptron based algorithms are quite effective and that unlike the case for online classification, the PA algorithms do not yield significant performance gains.",
"We propose Coactive Learning as a model of interaction between a learning system and a human user, where both have the common goal of providing results of maximum utility to the user. At each step, the system (e.g. search engine) receives a context (e.g. query) and predicts an object (e.g. ranking). The user responds by correcting the system if necessary, providing a slightly improved -- but not necessarily optimal -- object as feedback. We argue that such feedback can often be inferred from observable user behavior, for example, from clicks in web-search. Evaluating predictions by their cardinal utility to the user, we propose efficient learning algorithms that have @math average regret, even though the learning algorithm never observes cardinal utility values as in conventional online learning. We demonstrate the applicability of our model and learning algorithms on a movie recommendation task, as well as ranking for web-search.",
"We present a theoretical analysis of online parameter tuning in statistical machine translation (SMT) from a coactive learning view. This perspective allows us to give regret and generalization bounds for latent perceptron algorithms that are common in SMT, but fall outside of the standard convex optimization scenario. Coactive learning also introduces the concept of weak feedback, which we apply in a proofof-concept experiment to SMT, showing that learning from feedback that consists of slight improvements over predictions leads to convergence in regret and translation error rate. This suggests that coactive learning might be a viable framework for interactive machine translation. Furthermore, we find that surrogate translations replacing references that are unreachable in the decoder search space can be interpreted as weak feedback and lead to convergence in learning, if they admit an underlying linear model."
]
} |
1612.01657 | 2582169224 | The query-by-image video retrieval (QBIVR) task has been attracting considerable research attention recently. However, most existing methods represent a video by either aggregating or projecting all its frames into a single datum point, which may easily cause severe information loss. In this paper, we propose an efficient QBIVR framework to enable an effective and efficient video search with image query. We first define a similarity-preserving distance metric between an image and its orthogonal projection in the subspace of the video, which can be equivalently transformed to a Maximum Inner Product Search (MIPS) problem. Besides, to boost the efficiency of solving the MIPS problem, we propose two asymmetric hashing schemes, which bridge the domain gap of images and videos. The first approach, termed Inner-product Binary Coding (IBC), preserves the inner relationships of images and videos in a common Hamming space. To further improve the retrieval efficiency, we devise a Bilinear Binary Coding (BBC) approach, which employs compact bilinear projections instead of a single large projection matrix. Extensive experiments have been conducted on four real-world video datasets to verify the effectiveness of our proposed approaches as compared to the state-of-the-arts. | By preserving an invariant domain with low-dimensional structure information and then projecting the affinity matrix into a datum point, an easier similarity-preserving measurement between images and videos was proposed for the QBIVR task using approximate nearest neighbor (ANN) search methods @cite_25 . Meanwhile, several powerful hashing techniques @cite_28 , i.e., supervised hashing, semi-supervised hashing, and unsupervised hashing, have brought light to the ANN search problem in the pursuit of efficiency. 
Although supervised hashing methods have demonstrated promising performance in some applications with semantic labels, it is troublesome or even impossible to obtain semantic labels in many real-life applications. Besides, the learning process is by far more complex and time-consuming than unsupervised techniques, especially when dealing with high-resolution videos or images. Some classical unsupervised methods include Spectral Hashing (SH) @cite_26 , which preserves the Euclidean distance in the database; Inductive Manifold Hashing (IMH) @cite_16 , which adopts manifold learning techniques to better model the intrinsic structure embedded in the feature space; and Iterative Quantization (ITQ) @cite_24 , which focuses on minimizing the quantization error during unsupervised training. Other noticeable unsupervised hashing methods, including Anchor Graph Hashing (AGH) @cite_2 and Scalable Graph Hashing with feature transformation (SGH) @cite_14 , directly exploit the similarity to guide the hash-code learning procedure and achieve good performance. | {
"cite_N": [
"@cite_26",
"@cite_14",
"@cite_28",
"@cite_24",
"@cite_2",
"@cite_16",
"@cite_25"
],
"mid": [
"",
"2402125293",
"2519051215",
"1974647172",
"2251864938",
"",
"2008995227"
],
"abstract": [
"",
"Hashing has been widely used for approximate nearest neighbor (ANN) search in big data applications because of its low storage cost and fast retrieval speed. The goal of hashing is to map the data points from the original space into a binary-code space where the similarity (neighborhood structure) in the original space is preserved. By directly exploiting the similarity to guide the hashing code learning procedure, graph hashing has attracted much attention. However, most existing graph hashing methods cannot achieve satisfactory performance in real applications due to the high complexity for graph modeling. In this paper, we propose a novel method, called scalable graph hashing with feature transformation (SGH), for large-scale graph hashing. Through feature transformation, we can effectively approximate the whole graph without explicitly computing the similarity graph matrix, based on which a sequential learning method is proposed to learn the hash functions in a bitwise manner. Experiments on two datasets with one million data points show that our SGH method can outperform the state-of-the-art methods in terms of both accuracy and scalability.",
"Due to the significant reduction in computational cost and storage, hashing techniques have gained increasing interests in facilitating large-scale cross-view retrieval tasks. Most cross-view hashing methods are developed by assuming that data from different views are well paired, e.g., text-image pairs. In real-world applications, however, this fully-paired multiview setting may not be practical. The more practical yet challenging semi-paired cross-view retrieval problem, where pairwise correspondences are only partially provided, has less been studied. In this paper, we propose an unsupervised hashing method for semi-paired cross-view retrieval, dubbed semi-paired discrete hashing (SPDH). In specific, SPDH explores the underlying structure of the constructed common latent subspace, where both paired and unpaired samples are well aligned. To effectively preserve the similarities of semi-paired data in the latent subspace, we construct the cross-view similarity graph with the help of anchor data pairs. SPDH jointly learns the latent features and hash codes with a factorization-based coding scheme. For the formulated objective function, we devise an efficient alternating optimization algorithm, where the key binary code learning problem is solved in a bit-by-bit manner with each bit generated with a closed-form solution. The proposed method is extensively evaluated on four benchmark datasets with both fully-paired and semi-paired settings and the results demonstrate the superiority of SPDH over several other state-of-the-art methods in term of both accuracy and scalability.",
"This paper addresses the problem of learning similarity-preserving binary codes for efficient similarity search in large-scale image collections. We formulate this problem in terms of finding a rotation of zero-centered data so as to minimize the quantization error of mapping this data to the vertices of a zero-centered binary hypercube, and propose a simple and efficient alternating minimization algorithm to accomplish this task. This algorithm, dubbed iterative quantization (ITQ), has connections to multiclass spectral clustering and to the orthogonal Procrustes problem, and it can be used both with unsupervised data embeddings such as PCA and supervised embeddings such as canonical correlation analysis (CCA). The resulting binary codes significantly outperform several other state-of-the-art methods. We also show that further performance improvements can result from transforming the data with a nonlinear kernel mapping prior to PCA or CCA. Finally, we demonstrate an application of ITQ to learning binary attributes or \"classemes\" on the ImageNet data set.",
"Hashing is becoming increasingly popular for efficient nearest neighbor search in massive databases. However, learning short codes that yield good search performance is still a challenge. Moreover, in many cases real-world data lives on a low-dimensional manifold, which should be taken into account to capture meaningful nearest neighbors. In this paper, we propose a novel graph-based hashing method which automatically discovers the neighborhood structure inherent in the data to learn appropriate compact codes. To make such an approach computationally feasible, we utilize Anchor Graphs to obtain tractable low-rank adjacency matrices. Our formulation allows constant time hashing of a new data point by extrapolating graph Laplacian eigenvectors to eigenfunctions. Finally, we describe a hierarchical threshold learning procedure in which each eigenfunction yields multiple bits, leading to higher search accuracy. Experimental comparison with the other state-of-the-art methods on two large datasets demonstrates the efficacy of the proposed method.",
"",
"Subspaces offer convenient means of representing information in many pattern recognition, machine vision, and statistical learning applications. Contrary to the growing popularity of subspace representations, the problem of efficiently searching through large subspace databases has received little attention in the past. In this paper, we present a general solution to the problem of Approximate Nearest Subspace search. Our solution uniformly handles cases where the queries are points or subspaces, where query and database elements differ in dimensionality, and where the database contains subspaces of different dimensions. To this end, we present a simple mapping from subspaces to points, thus reducing the problem to the well-studied Approximate Nearest Neighbor problem on points. We provide theoretical proofs of correctness and error bounds of our construction and demonstrate its capabilities on synthetic and real data. Our experiments indicate that an approximate nearest subspace can be located significantly faster than the nearest subspace, with little loss of accuracy."
]
} |
1612.01657 | 2582169224 | The query-by-image video retrieval (QBIVR) task has been attracting considerable research attention recently. However, most existing methods represent a video by either aggregating or projecting all its frames into a single datum point, which may easily cause severe information loss. In this paper, we propose an efficient QBIVR framework to enable an effective and efficient video search with image query. We first define a similarity-preserving distance metric between an image and its orthogonal projection in the subspace of the video, which can be equivalently transformed to a Maximum Inner Product Search (MIPS) problem. Besides, to boost the efficiency of solving the MIPS problem, we propose two asymmetric hashing schemes, which bridge the domain gap of images and videos. The first approach, termed Inner-product Binary Coding (IBC), preserves the inner relationships of images and videos in a common Hamming space. To further improve the retrieval efficiency, we devise a Bilinear Binary Coding (BBC) approach, which employs compact bilinear projections instead of a single large projection matrix. Extensive experiments have been conducted on four real-world video datasets to verify the effectiveness of our proposed approaches as compared to the state-of-the-arts. | Although the above hashing methods can efficiently deal with the complexity of computational cost and storage, the different modalities of images and videos are neglected, which can cause a domain gap between images and videos. Some works have already focused on this domain gap. One solution proposed by Yan @cite_12 , dubbed Hashing across Euclidean Space and Riemannian Manifold (HER), learns hash functions in a max-margin framework across Euclidean space and Riemannian manifold, but becomes unsuitable for large-scale databases owing to the unaffordable computation time as the dimension grows. 
Shrivastava and Li @cite_23 also proposed an Asymmetric Locality-Sensitive Hashing (ALSH) scheme, which performs simple asymmetric transformations on query and data pairs for different learning. Inspired by their work and the bilinear-projection dimensionality reduction method by Gong @cite_5 , we aim to seek a more powerful asymmetric binary learning framework which properly balances the QBIVR task and high-quality hashing based on subspace learning. | {
"cite_N": [
"@cite_5",
"@cite_23",
"@cite_12"
],
"mid": [
"2162064258",
"2126754439",
"1959016151"
],
"abstract": [
"Recent advances in visual recognition indicate that to achieve good retrieval and classification accuracy on large-scale datasets like Image Net, extremely high-dimensional visual descriptors, e.g., Fisher Vectors, are needed. We present a novel method for converting such descriptors to compact similarity-preserving binary codes that exploits their natural matrix structure to reduce their dimensionality using compact bilinear projections instead of a single large projection matrix. This method achieves comparable retrieval and classification accuracy to the original descriptors and to the state-of-the-art Product Quantization approach while having orders of magnitude faster code generation time and smaller memory footprint.",
"Recently it was shown that the problem of Maximum Inner Product Search (MIPS) is efficient and it admits provably sub-linear hashing algorithms. Asymmetric transformations before hashing were the key in solving MIPS which was otherwise hard. In the prior work, the authors use asymmetric transformations which convert the problem of approximate MIPS into the problem of approximate near neighbor search which can be efficiently solved using hashing. In this work, we provide a different transformation which converts the problem of approximate MIPS into the problem of approximate cosine similarity search which can be efficiently solved using signed random projections. Theoretical analysis show that the new scheme is significantly better than the original scheme for MIPS. Experimental evaluations strongly support the theoretical findings.",
"Retrieving videos of a specific person given his her face image as query becomes more and more appealing for applications like smart movie fast-forwards and suspect searching. It also forms an interesting but challenging computer vision task, as the visual data to match, i.e., still image and video clip are usually represented quite differently. Typically, face image is represented as point (i.e., vector) in Euclidean space, while video clip is seemingly modeled as a point (e.g., covariance matrix) on some particular Riemannian manifold in the light of its recent promising success. It thus incurs a new hashing-based retrieval problem of matching two heterogeneous representations, respectively in Euclidean space and Riemannian manifold. This work makes the first attempt to embed the two heterogeneous spaces into a common discriminant Hamming space. Specifically, we propose Hashing across Euclidean space and Riemannian manifold (HER) by deriving a unified framework to firstly embed the two spaces into corresponding reproducing kernel Hilbert spaces, and then iteratively optimize the intra- and inter-space Hamming distances in a max-margin framework to learn the hash functions for the two spaces. Extensive experiments demonstrate the impressive superiority of our method over the state-of-the-art competitive hash learning methods."
]
} |
1612.01675 | 2951353051 | Cloud computing provides a great opportunity for scientists, as it enables large-scale experiments that are too long to run on local desktop machines. Cloud-based computations can be highly parallel, long-running and data-intensive, which is desirable for many kinds of scientific experiments. However, to unlock this power, we need a user-friendly interface and an easy-to-use methodology for conducting these experiments. For this reason, we introduce here a formal model of a cloud-based platform and the corresponding open-source implementation. The proposed solution allows users to conduct experiments without having a deep technical understanding of cloud computing, HPC, fault tolerance, or data management, while still leveraging the benefits of cloud computing. In the current version, we have focused on biophysics and structural chemistry experiments, based on the analysis of big data from synchrotrons and atomic force microscopy. The domain experts noted the time savings for computing and data management, as well as the user-friendly interface. | While developing the model, we focused on its understandability and readability aspects. There are several approaches to model readability, cf. @cite_27 @cite_18 @cite_21 @cite_9 @cite_30 . The development of formal models and architectures for systems involved in cloud computing is a more recent area of systems engineering, cf. @cite_26 , @cite_20 , @cite_32 , @cite_22 , @cite_0 , @cite_1 . Several approaches have proposed data stream processing systems for clouds: e.g., @cite_8 introduces an approach towards component-based stream processing in clouds, and @cite_10 presents work on data stream sharing. Yusuf and Schmidt have shown that fault tolerance is best achieved by reflecting the computational flow in such complex scientific system architectures, cf. @cite_28 . | {
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_26",
"@cite_22",
"@cite_8",
"@cite_28",
"@cite_9",
"@cite_21",
"@cite_1",
"@cite_32",
"@cite_0",
"@cite_27",
"@cite_10",
"@cite_20"
],
"mid": [
"2949606206",
"1937768634",
"",
"2131646073",
"2070801922",
"2097164471",
"",
"1600135326",
"1670903213",
"2005133256",
"",
"2791154369",
"1524726339",
"2134837619"
],
"abstract": [
"The ability to capture different levels of abstraction in a system model is especially important for remote integration, testing verification, and manufacturing of cyber-physical systems (CPSs). However, the complexity of modelling and testing of CPSs makes these processes extremely prone to human error. In this paper we present our ongoing work on introducing human-centred considerations into modelling and testing of CPSs, which allow for agile iterative refinement processes of different levels of abstraction when errors are discovered or missing information is completed.",
"Despite that formal and informal quality aspects are of significantimportance to business process modeling, there is only little empiricalwork reported on process model quality and its impact factors. Inthis paper we investigate understandability as a proxy for quality of processmodels and focus on its relations with personal and model characteristics.We used a questionnaire in classes at three European universitiesand generated several novel hypotheses from an exploratory data analysis.Furthermore, we interviewed practitioners to validate our findings.The results reveal that participants tend to exaggerate the differences inmodel understandability, that self-assessment of modeling competenceappears to be invalid, and that the number of arcs in models has animportant influence on understandability.",
"",
"Cloud computing has recently emerged as a new paradigm for hosting and delivering services over the Internet. Cloud computing is attractive to business owners as it eliminates the requirement for users to plan ahead for provisioning, and allows enterprises to start from the small and increase resources only when there is a rise in service demand. However, despite the fact that cloud computing offers huge opportunities to the IT industry, the development of cloud computing technology is currently at its infancy, with many issues still to be addressed. In this paper, we present a survey of cloud computing, highlighting its key concepts, architectural principles, state-of-the-art implementation as well as research challenges. The aim of this paper is to provide a better understanding of the design challenges of cloud computing and identify important research directions in this increasingly important area.",
"In earlier work, we reported on modeling of stream processing in terms of distributed components (as exemplified in the EU CoreGrid project ProActive), showing how a stream processing system can be built from components in composition, with dynamic reconfiguration and distributed management of the streams. In this paper, we introduce the Web Service Stream Deployer (WSSD), for the remote establishment and deployment of streams across widely distributed resources, allowing a user to set-up, control and reconfigure a stream remotely and dynamically. We demonstrate this concept with streams on a cloud testbed, using Nimbus cloud infrastructure at the University of Chicago. In particular, we show that our web-services based WSSD exhibits minimal adverse latency effect when used over an intercontinental network to manipulate, from our client in Australia, a cloud-based stream in the USA; we believe that this represents a useful mode of remote interaction with cloud-based applications. We outline some ideas in scheduling of cloud-based streams; we present a model of interaction with Gridbus, a widely used framework for exploring different scheduling algorithms (such as economic scheduling), whereby scheduling criteria expressed in Gridbus can be used to automatically deploy stream components.",
"Cloud computing presents a unique opportunity for science and engineering with benefits compared to traditional high-performance computing, especially for smaller compute jobs and entry-level users to parallel computing. However, doubts remain for production high-performance computing in the cloud, the so-called science cloud, as predictable performance, reliability and therefore costs remain elusive for many applications. This paper uses parameterised architectural patterns to assist with fault tolerance and cost predictions for science clouds, in which a single job typically holds many virtual machines for a long time, communication can involve massive data movements, and buffered streams allow parallel processing to proceed while data transfers are still incomplete. We utilise predictive models, simulation and actual runs to estimate run times with acceptable accuracy for two of the most common architectural patterns for data-intensive scientific computing: MapReduce and Combinational Logic. Run times are fundamental to understand fee-for-service costs of clouds. These are typically charged by the hour and the number of compute nodes or cores used. We evaluate our models using realistic cloud experiments from collaborative physics research projects and show that proactive and reactive fault tolerance is manageable, predictable and composable, in principle, especially at the architectural level.",
"",
"Modularity is a widely advocated strategy for handling complexity in conceptual models. Nevertheless, a systematic literature review revealed that it is not yet entirely clear under which circumstances modularity is most beneficial. Quite the contrary, empirical findings are contradictory, some authors even show that modularity can lead to decreased model understandability. In this work, we draw on insights from cognitive psychology to develop a framework for assessing the impact of hierarchy on model understandability. In particular, we identify abstraction and the split-attention effect as two opposing forces that presumably mediate the influence of modularity. Based on our framework, we describe an approach to estimate the impact of modularization on understandability and discuss implications for experiments investigating the impact of modularization on conceptual models.",
"Research into grid computing has been driven by the need to solve large-scale, increasingly complex problems for scientific applications. Yet the applications of grid computing for business and casual users did not begin to emerge until the development of the concept of cloud computing, fueled by advances in virtualization techniques, coupled with the increased availability of ever-greater Internet bandwidth. The appeal of this new paradigm is mainly based on its simplicity, and the affordable price for seamless access to both computational and storage resources. This timely text reference introduces the fundamental principles and techniques underlying grids, clouds and virtualization technologies, as well as reviewing the latest research and expected future developments in the field. Readers are guided through the key topics by internationally recognized experts, enabling them to develop their understanding of an area likely to play an ever more significant role in coming years. Topics and features: presents contributions from an international selection of experts in the field; provides a thorough introduction and overview of existing technologies in grids, clouds and virtualization, including a brief history of the field; examines the basic requirements for performance isolation of virtual machines on multi-core servers, analyzing a selection of system virtualization technologies; examines both business and scientific applications of grids and clouds, including their use in the life sciences and for high-performance computing; explores cloud building technologies, architectures for enhancing grid infrastructures with cloud computing, and cloud performance; discusses energy aware grids and clouds, workflows on grids and clouds, and cloud and grid programming models. This useful text will enable interested readers to familiarize themselves with the key topics of grids, clouds and virtualization, and to contribute to new advances in the field. Researchers, undergraduate and graduate students, system designers and programmers, and IT policy makers will all benefit from the material covered.",
"Even though the technology faces several significant challenges, many vendors and industry observers predict a bright future for cloud computing.",
"",
"In Software Engineering many text-based languages and supporting tools are used, forming the grammarware technical space. Currently model driven engineering is the new emerging paradigm for software engineering, which uses modelling languages and tools, forming the modelware technical space. Transition to the new technical space and interoperability between these two technical spaces is needed in many development scenarios. Building a bridge between these two technical spaces is a tedious task, that has to be repeated for each language to be transformed. Therefore, we propose a generic bridge between grammarware and modelware technical spaces, that can generate a specific bridge based on the EBNF of a given language semi-automatically. The generation comprises of two steps, (1) automatic generation of metamodel corresponding to the EBNF and (2) annotations to provide the additional semantics not captured by the EBNF. The generated bridge is capable of bi-directional transformations between sentences (programs) and corresponding models and can be used in re-engineering applications and for integration of text-based and model-based tools.",
"Recent research efforts in the fields of data stream processing and data stream management systems (DSMSs) show the increasing importance of processing data streams, e. g., in the e-science domain. Together with the advent of peer-to-peer (P2P) networks and grid computing, this leads to the necessity of developing new techniques for distributing and processing continuous queries over data streams in such networks. In this paper, we present a novel approach for optimizing the integration, distribution, and execution of newly registered continuous queries over data streams in grid-based P2P networks. We introduce Windowed XQuery (WXQuery), our XQuery-based subscription language for continuous queries over XML data streams supporting window-based operators. Concentrating on filtering and window-based aggregation, we present our stream sharing algorithms as well as experimental evaluation results from the astrophysics application domain to assess our approach.",
"Summary form only given. Grids and peer-to-peer (P2P) networks have emerged as popular platforms for the next generation parallel and distributed computing. In these environments, resources are geographically distributed, managed and owned by various organizations with different policies, and interconnected by wide-area networks or the Internet. This introduces a number of resource management and application scheduling challenges in the domain of security, resource and policy heterogeneity, fault tolerance, and dynamic resource conditions. In these dynamic distributed computing environments, it is hard and challenging to carry out resource management design studies in a repeatable and controlled manner as resources and users are autonomous and distributed across multiple organizations with their own policies. Therefore, simulations have emerged as the most feasible technique for analyzing policies for resource allocation. This paper presents emerging trends in distributed computing and their promises for revolutionizing the computing field, and identifies distinct characteristics and challenges in building them. We motivate opportunities for modeling and simulation communities and present our discrete-event grid simulation toolkit, called GridSim, used by researchers world-wide for investigating the design of utility-oriented computing systems such as data centers and grids. We present various case studies on the use of GridSim in modeling and simulation of business grids, parallel applications scheduling, workflow scheduling, and service pricing and revenue management."
]
} |
1612.01675 | 2951353051 | Cloud computing provides a great opportunity for scientists, as it enables large-scale experiments that are too long to run on local desktop machines. Cloud-based computations can be highly parallel, long running and data-intensive, which is desirable for many kinds of scientific experiments. However, to unlock this power, we need a user-friendly interface and an easy-to-use methodology for conducting these experiments. For this reason, we introduce here a formal model of a cloud-based platform and the corresponding open-source implementation. The proposed solution allows researchers to conduct experiments without having a deep technical understanding of cloud computing, HPC, fault tolerance, or data management in order to leverage the benefits of cloud computing. In the current version, we have focused on biophysics and structural chemistry experiments, based on the analysis of big data from synchrotrons and atomic force microscopy. The domain experts noted the time savings for computing and data management, as well as the user-friendly interface. | There are different types of scientific workflow systems, such as Kepler @cite_29 , Taverna @cite_14 and Galaxy @cite_16 , which are designed to allow researchers to build their own workflows. The key difference is that our platform provides drop-in components, Smart Connectors, for existing workflow engines: @math researchers can utilise and adapt existing Smart Connectors; @math new types of Smart Connectors can be developed within the framework if necessary. To the best of our knowledge, no other framework offers this advantage. SCs are geared toward providing power and flexibility over simplicity. | {
"cite_N": [
"@cite_29",
"@cite_14",
"@cite_16"
],
"mid": [
"",
"2172143128",
"2001938458"
],
"abstract": [
"",
"Life sciences research is based on individuals, often with diverse skills, assembled into research groups. These groups use their specialist expertise to address scientific problems. The in silico experiments undertaken by these research groups can be represented as workflows involving the co-ordinated use of analysis programs and information repositories that may be globally distributed. With regards to Grid computing, the requirements relate to the sharing of analysis and information resources rather than sharing computational power. The myGrid project has developed the Taverna Workbench for the composition and execution of workflows for the life sciences community. This experience paper describes lessons learnt during the development of Taverna. A common theme is the importance of understanding how workflows fit into the scientists' experimental context. The lessons reflect an evolving understanding of life scientists' requirements on a workflow environment, which is relevant to other areas of data intensive and exploratory science. Copyright © 2005 John Wiley & Sons, Ltd.",
"As next-generation sequencing becomes an indispensible tool for biomedical research, it is crucial to provide analysis solutions that are usable and cost effective for biomedical researchers. Galaxy Cloud addresses this by combining the accessible Galaxy interface with automated management of cloud computing resources. Unlike purpose-built solutions, Galaxy allows users either to use existing tested best practices in the form of workflows or to construct their own analyses for novel tasks. Galaxy Cloud instances are owned and controlled entirely by the user who created them and can be used effectively in secure private clouds. Thus, Galaxy Cloud provides a solution that retains user control and privacy, makes complex analysis accessible and enables the use of practically limitless on-demand computing resources."
]
} |
1612.01175 | 2560404968 | Recognizing when people have false beliefs is crucial for understanding their actions. We introduce the novel problem of identifying when people in abstract scenes have incorrect beliefs. We present a dataset of scenes, each visually depicting an 8-frame story in which a character has a mistaken belief. We then create a representation of characters' beliefs for two tasks in human action understanding: predicting who is mistaken, and when they are mistaken. Experiments suggest that our method for identifying mistaken characters performs better on these tasks than simple baselines. Diagnostics on our model suggest it learns important cues for recognizing mistaken beliefs, such as gaze. We believe models of people's beliefs will have many applications in action understanding, robotics, and healthcare. | Our work complements efforts to learn common sense. @cite_17 extract common sense from object detection corpora, while @cite_36 learn visual common sense by browsing the Internet. @cite_7 use abstract images to learn how people, animals and objects are likely to interact. Recent work @cite_19 @cite_0 @cite_12 has learned physical common sense given videos of colliding objects. Finally, @cite_6 explore understanding social interactions in crowded spaces, and @cite_25 study causality in unconstrained video to understand social games. In this work, we study the subset of common sense related to visual beliefs. | {
"cite_N": [
"@cite_7",
"@cite_36",
"@cite_6",
"@cite_0",
"@cite_19",
"@cite_25",
"@cite_12",
"@cite_17"
],
"mid": [
"2196779496",
"1964763677",
"2424778531",
"",
"2951384764",
"1975723154",
"",
"2462848072"
],
"abstract": [
"Common sense is essential for building intelligent machines. While some commonsense knowledge is explicitly stated in human-generated text and can be learnt by mining the web, much of it is unwritten. It is often unnecessary and even unnatural to write about commonsense facts. While unwritten, this commonsense knowledge is not unseen! The visual world around us is full of structure modeled by commonsense knowledge. Can machines learn common sense simply by observing our visual world? Unfortunately, this requires automatic and accurate detection of objects, their attributes, poses, and interactions between objects, which remain challenging problems. Our key insight is that while visual common sense is depicted in visual content, it is the semantic features that are relevant and not low-level pixel information. In other words, photorealism is not necessary to learn common sense. We explore the use of human-generated abstract scenes made from clipart for learning common sense. In particular, we reason about the plausibility of an interaction or relation between a pair of nouns by measuring the similarity of the relation and nouns with other relations and nouns we have seen in abstract scenes. We show that the commonsense knowledge we learn is complementary to what can be learnt from sources of text.",
"We propose NEIL (Never Ending Image Learner), a computer program that runs 24 hours per day and 7 days per week to automatically extract visual knowledge from Internet data. NEIL uses a semi-supervised learning algorithm that jointly discovers common sense relationships (e.g., \"Corolla is a kind of looks similar to Car\", \"Wheel is a part of Car\") and labels instances of the given visual categories. It is an attempt to develop the world's largest visual structured knowledge base with minimum human labeling effort. As of 10th October 2013, NEIL has been continuously running for 2.5 months on 200 core cluster (more than 350K CPU hours) and has an ontology of 1152 object categories, 1034 scene categories and 87 attributes. During this period, NEIL has discovered more than 1700 relationships and has labeled more than 400K visual instances.",
"Pedestrians follow different trajectories to avoid obstacles and accommodate fellow pedestrians. Any autonomous vehicle navigating such a scene should be able to foresee the future positions of pedestrians and accordingly adjust its path to avoid collisions. This problem of trajectory prediction can be viewed as a sequence generation task, where we are interested in predicting the future trajectory of people based on their past positions. Following the recent success of Recurrent Neural Network (RNN) models for sequence prediction tasks, we propose an LSTM model which can learn general human movement and predict their future trajectories. This is in contrast to traditional approaches which use hand-crafted functions such as Social forces. We demonstrate the performance of our method on several public datasets. Our model outperforms state-of-the-art methods on some of these datasets. We also analyze the trajectories predicted by our model to demonstrate the motion behaviour learned by our model.",
"",
"Wooden blocks are a common toy for infants, allowing them to develop motor skills and gain intuition about the physical behavior of the world. In this paper, we explore the ability of deep feed-forward models to learn such intuitive physics. Using a 3D game engine, we create small towers of wooden blocks whose stability is randomized and render them collapsing (or remaining upright). This data allows us to train large convolutional network models which can accurately predict the outcome, as well as estimating the block trajectories. The models are also able to generalize in two important ways: (i) to new physical scenarios, e.g. towers with an additional block and (ii) to images of real wooden blocks, where it obtains a performance comparable to human subjects.",
"We present a novel approach to the causal temporal analysis of event data from video content. Our key observation is that the sequence of visual words produced by a space-time dictionary representation of a video sequence can be interpreted as a multivariate point-process. By using a spectral version of the pairwise test for Granger causality, we can identify patterns of interactions between words and group them into independent causal sets. We demonstrate qualitatively that this produces semantically-meaningful groupings, and we demonstrate quantitatively that these groupings lead to improved performance in retrieving and classifying social games from unstructured videos.",
"",
"Obtaining common sense knowledge using current information extraction techniques is extremely challenging. In this work, we instead propose to derive simple common sense statements from fully annotated object detection corpora such as the Microsoft Common Objects in Context dataset. We show that many thousands of common sense facts can be extracted from such corpora at high quality. Furthermore, using WordNet and a novel submodular k-coverage formulation, we are able to generalize our initial set of common sense assertions to unseen objects and uncover over 400k potentially useful facts."
]
} |
1612.01175 | 2560404968 | Recognizing when people have false beliefs is crucial for understanding their actions. We introduce the novel problem of identifying when people in abstract scenes have incorrect beliefs. We present a dataset of scenes, each visually depicting an 8-frame story in which a character has a mistaken belief. We then create a representation of characters' beliefs for two tasks in human action understanding: predicting who is mistaken, and when they are mistaken. Experiments suggest that our method for identifying mistaken characters performs better on these tasks than simple baselines. Diagnostics on our model suggest it learns important cues for recognizing mistaken beliefs, such as gaze. We believe models of people's beliefs will have many applications in action understanding, robotics, and healthcare. | Our work is related to activity understanding in vision @cite_35 @cite_27 @cite_26 @cite_13 @cite_34 . Systems for understanding human actions typically leverage a variety of cues, such as context, pose, or gaze @cite_33 . Our work complements action understanding in two ways. First, we study visual beliefs, which may be a useful signal for better understanding people's activities. Second, recognizing visual beliefs often requires an understanding of people's actions. | {
"cite_N": [
"@cite_35",
"@cite_26",
"@cite_33",
"@cite_27",
"@cite_34",
"@cite_13"
],
"mid": [
"1927052826",
"2214124602",
"2184540135",
"",
"2212494831",
""
],
"abstract": [
"In spite of many dataset efforts for human action recognition, current computer vision algorithms are still severely limited in terms of the variability and complexity of the actions that they can recognize. This is in part due to the simplicity of current benchmarks, which mostly focus on simple actions and movements occurring on manually trimmed videos. In this paper we introduce ActivityNet, a new large-scale video benchmark for human activity understanding. Our benchmark aims at covering a wide range of complex human activities that are of interest to people in their daily living. In its current version, ActivityNet provides samples from 203 activity classes with an average of 137 untrimmed videos per class and 1.41 activity instances per video, for a total of 849 video hours. We illustrate three scenarios in which ActivityNet can be used to compare algorithms for human activity understanding: untrimmed video classification, trimmed activity classification and activity detection.",
"We introduce a new benchmark \"Humans Interacting with Common Objects\" (HICO) for recognizing human-object interactions (HOI). We demonstrate the key features of HICO: a diverse set of interactions with common object categories, a list of well-defined, sense-based HOI categories, and an exhaustive labeling of co-occurring interactions with an object category in each image. We perform an in-depth analysis of representative current approaches and show that DNNs enjoy a significant edge. In addition, we show that semantic knowledge can significantly improve HOI recognition, especially for uncommon categories.",
"Humans have the remarkable ability to follow the gaze of other people to identify what they are looking at. Following eye gaze, or gaze-following, is an important ability that allows us to understand what other people are thinking, the actions they are performing, and even predict what they might do next. Despite the importance of this topic, this problem has only been studied in limited scenarios within the computer vision community. In this paper, we propose a deep neural network-based approach for gaze-following and a new benchmark dataset, GazeFollow, for thorough evaluation. Given an image and the location of a head, our approach follows the gaze of the person and identifies the object being looked at. Our deep network is able to discover how to extract head pose and gaze orientation, and to select objects in the scene that are in the predicted line of sight and likely to be looked at (such as televisions, balls and food). The quantitative evaluation shows that our approach produces reliable results, even when viewing only the back of the head. While our method outperforms several baseline approaches, we are still far from reaching human performance on this task. Overall, we believe that gaze-following is a challenging and important problem that deserves more attention from the community.",
"",
"We present a probabilistic generative model for simultaneously recognizing daily actions and predicting gaze locations in videos recorded from an egocentric camera. We focus on activities requiring eye-hand coordination and model the spatio-temporal relationship between the gaze point, the scene objects, and the action label. Our model captures the fact that the distribution of both visual features and object occurrences in the vicinity of the gaze point is correlated with the verb-object pair describing the action. It explicitly incorporates known properties of gaze behavior from the psychology literature, such as the temporal delay between fixation and manipulation events. We present an inference method that can predict the best sequence of gaze locations and the associated action label from an input sequence of images. We demonstrate improvements in action recognition rates and gaze prediction accuracy relative to state-of-the-art methods, on two new datasets that contain egocentric videos of daily activities and gaze.",
""
]
} |
1612.01175 | 2560404968 | Recognizing when people have false beliefs is crucial for understanding their actions. We introduce the novel problem of identifying when people in abstract scenes have incorrect beliefs. We present a dataset of scenes, each visually depicting an 8-frame story in which a character has a mistaken belief. We then create a representation of characters' beliefs for two tasks in human action understanding: predicting who is mistaken, and when they are mistaken. Experiments suggest that our method for identifying mistaken characters performs better on these tasks than simple baselines. Diagnostics on our model suggest it learns important cues for recognizing mistaken beliefs, such as gaze. We believe models of people's beliefs will have many applications in action understanding, robotics, and healthcare. | We take advantage of abstract images pioneered by @cite_31 , which have received wide interest in computer vision for studying high-level vision tasks. @cite_30 use abstract images to detect visual humor. @cite_16 explore binary question-answering in abstract scenes, and @cite_8 learn to predict object dynamics in clip art. While these approaches reason about image-level features and semantics, our approach looks at character-level features. Importantly, two characters in the same scene can have different beliefs about the world, so each character should have a different character-level feature. Additionally, we extend this previous work to multi-frame scenes depicting visual stories. | {
"cite_N": [
"@cite_30",
"@cite_31",
"@cite_16",
"@cite_8"
],
"mid": [
"2269406410",
"1996418862",
"2273038706",
"1982185844"
],
"abstract": [
"Humor is an integral part of human lives. Despite being tremendously impactful, it is perhaps surprising that we do not have a detailed understanding of humor yet. As interactions between humans and AI systems increase, it is imperative that these systems are taught to understand subtleties of human expressions such as humor. In this work, we are interested in the question - what content in a scene causes it to be funny? As a first step towards understanding visual humor, we analyze the humor manifested in abstract scenes and design computational models for them. We collect two datasets of abstract scenes that facilitate the study of humor at both the scene-level and the object-level. We analyze the funny scenes and explore the different types of humor depicted in them via human studies. We model two tasks that we believe demonstrate an understanding of some aspects of visual humor. The tasks involve predicting the funniness of a scene and altering the funniness of a scene. We show that our models perform well quantitatively, and qualitatively through human studies. Our datasets are publicly available.",
"Relating visual information to its linguistic semantic meaning remains an open and challenging area of research. The semantic meaning of images depends on the presence of objects, their attributes and their relations to other objects. But precisely characterizing this dependence requires extracting complex visual information from an image, which is in general a difficult and yet unsolved problem. In this paper, we propose studying semantic information in abstract images created from collections of clip art. Abstract images provide several advantages. They allow for the direct study of how to infer high-level semantic information, since they remove the reliance on noisy low-level object, attribute and relation detectors, or the tedious hand-labeling of images. Importantly, abstract images also allow the ability to generate sets of semantically similar scenes. Finding analogous sets of semantically similar real images would be nearly impossible. We create 1,002 sets of 10 semantically similar abstract scenes with corresponding written descriptions. We thoroughly analyze this dataset to discover semantically important features, the relations of words to visual features and methods for measuring semantic similarity.",
"The complex compositional structure of language makes problems at the intersection of vision and language challenging. But language also provides a strong prior that can result in good superficial performance, without the underlying models truly understanding the visual content. This can hinder progress in pushing state of art in the computer vision aspects of multi-modal AI. In this paper, we address binary Visual Question Answering (VQA) on abstract scenes. We formulate this problem as visual verification of concepts inquired in the questions. Specifically, we convert the question to a tuple that concisely summarizes the visual concept to be detected in the image. If the concept can be found in the image, the answer to the question is \"yes\", and otherwise \"no\". Abstract scenes play two roles (1) They allow us to focus on the high-level semantics of the VQA task as opposed to the low-level recognition problems, and perhaps more importantly, (2) They provide us the modality to balance the dataset such that language priors are controlled, and the role of vision is essential. In particular, we collect fine-grained pairs of scenes for every question, such that the answer to the question is \"yes\" for one scene, and \"no\" for the other for the exact same question. Indeed, language priors alone do not perform better than chance on our balanced dataset. Moreover, our proposed approach matches the performance of a state-of-the-art VQA approach on the unbalanced dataset, and outperforms it on the balanced dataset.",
"Given a static scene, a human can trivially enumerate the myriad of things that can happen next and characterize the relative likelihood of each. In the process, we make use of enormous amounts of commonsense knowledge about how the world works. In this paper, we investigate learning this commonsense knowledge from data. To overcome a lack of densely annotated spatiotemporal data, we learn from sequences of abstract images gathered using crowdsourcing. The abstract scenes provide both object location and attribute information. We demonstrate qualitatively and quantitatively that our models produce plausible scene predictions on both the abstract images, as well as natural images taken from the Internet."
]
} |
1612.01175 | 2560404968 | Recognizing when people have false beliefs is crucial for understanding their actions. We introduce the novel problem of identifying when people in abstract scenes have incorrect beliefs. We present a dataset of scenes, each visually depicting an 8-frame story in which a character has a mistaken belief. We then create a representation of characters' beliefs for two tasks in human action understanding: predicting who is mistaken, and when they are mistaken. Experiments suggest that our method for identifying mistaken characters performs better on these tasks than simple baselines. Diagnostics on our model suggest it learns important cues for recognizing mistaken beliefs, such as gaze. We believe models of people's beliefs will have many applications in action understanding, robotics, and healthcare. | After we learn to recognize mistaken characters in abstract scenes, one could use domain adaptation @cite_8 @cite_5 to apply our approach to natural images. However, this is orthogonal to the goal of this paper. Additionally, @cite_14 and @cite_23 show how to perform unsupervised domain adaptation, which is relevant to our setting because annotating natural videos is costly. | {
"cite_N": [
"@cite_5",
"@cite_14",
"@cite_23",
"@cite_8"
],
"mid": [
"",
"1882958252",
"2953226914",
"1982185844"
],
"abstract": [
"",
"Top-performing deep architectures are trained on massive amounts of labeled data. In the absence of labeled data for a certain task, domain adaptation often provides an attractive option given that labeled data of similar nature but from a different domain (e.g. synthetic images) are available. Here, we propose a new approach to domain adaptation in deep architectures that can be trained on large amount of labeled data from the source domain and large amount of unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of \"deep\" features that are (i) discriminative for the main learning task on the source domain and (ii) invariant with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with few standard layers and a simple new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation. Overall, the approach can be implemented with little effort using any of the deep-learning packages. The method performs very well in a series of image classification experiments, achieving adaptation effect in the presence of big domain shifts and outperforming previous state-of-the-art on Office datasets.",
"Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias. Fine-tuning deep models in a new domain can require a significant amount of labeled data, which for many applications is simply not available. We propose a new CNN architecture to exploit unlabeled and sparsely labeled target domain data. Our approach simultaneously optimizes for domain invariance to facilitate domain transfer and uses a soft label distribution matching loss to transfer information between tasks. Our proposed adaptation method offers empirical performance which exceeds previously published results on two standard benchmark visual domain adaptation tasks, evaluated across supervised and semi-supervised adaptation settings.",
"Given a static scene, a human can trivially enumerate the myriad of things that can happen next and characterize the relative likelihood of each. In the process, we make use of enormous amounts of commonsense knowledge about how the world works. In this paper, we investigate learning this commonsense knowledge from data. To overcome a lack of densely annotated spatiotemporal data, we learn from sequences of abstract images gathered using crowdsourcing. The abstract scenes provide both object location and attribute information. We demonstrate qualitatively and quantitatively that our models produce plausible scene predictions on both the abstract images, as well as natural images taken from the Internet."
]
} |
1612.01198 | 2950569797 | The discovery of inductive invariants lies at the heart of static program verification. Presently, many automatic solutions to inductive invariant generation are inflexible, only applicable to certain classes of programs, or unpredictable. An automatic technique that circumvents these deficiencies to some extent is candidate-based invariant generation. This paper describes our efforts to apply candidate-based invariant generation in GPUVerify, a static checker for programs that run on GPUs. We study a set of GPU programs that contain loops, drawn from a number of open source suites and vendor SDKs. We describe the methodology we used to incrementally improve the invariant generation capabilities of GPUVerify to handle these benchmarks, through candidate-based invariant generation, using cheap static analysis to speculate potential program invariants. We also describe a set of experiments that we used to examine the effectiveness of our rules for candidate generation, assessing rules based on their generality (the extent to which they generate candidate invariants), hit rate (the extent to which the generated candidates hold), worth (the extent to which provable candidates actually help in allowing verification to succeed), and influence (the extent to which the success of one generation rule depends on candidates generated by another rule). The candidates produced by GPUVerify help to verify 231 of the 253 programs. This increase in precision, however, makes GPUVerify sluggish: the more candidates that are generated, the more time is spent determining which are inductive invariants. To speed up this process, we have investigated four under-approximating program analyses that aim to reject false candidates quickly and a framework whereby these analyses can run in sequence or in parallel. | Invariant generation has been a long-standing challenge in computer science that has received a lot of attention from researchers, e.g. @cite_4 @cite_28 @cite_55 @cite_15 @cite_47 @cite_40 @cite_60 @cite_57 @cite_37 @cite_5 @cite_41 (by no means an exhaustive list). We discuss the work most closely related to our study. | {
"cite_N": [
"@cite_37",
"@cite_4",
"@cite_15",
"@cite_28",
"@cite_60",
"@cite_55",
"@cite_41",
"@cite_57",
"@cite_40",
"@cite_5",
"@cite_47"
],
"mid": [
"1970168990",
"1925669739",
"1498946538",
"",
"2158735282",
"2043100293",
"2103318645",
"1565898282",
"1518247129",
"2095810701",
"2123736383"
],
"abstract": [
"This paper presents a new method for generating inductive loop invariants that are expressible as boolean combinations of linear integer constraints. The key idea underlying our technique is to perform a backtracking search that combines Hoare-style verification condition generation with a logical abduction procedure based on quantifier elimination to speculate candidate invariants. Starting with true, our method iteratively strengthens loop invariants until they are inductive and strong enough to verify the program. A key feature of our technique is that it is lazy: It only infers those invariants that are necessary for verifying program correctness. Furthermore, our technique can infer arbitrary boolean combinations (including disjunctions) of linear invariants. We have implemented the proposed approach in a tool called HOLA. Our experiments demonstrate that HOLA can infer interesting invariants that are beyond the reach of existing state-of-the-art invariant generation tools.",
"In this paper we present InvGen , an automatic linear arithmetic invariant generator for imperative programs. InvGen 's unique feature is in its use of dynamic analysis to make invariant generation order of magnitude more efficient.",
"A static program checker that performs modular checking can check one program module for errors without needing to analyze the entire program. Modular checking requires that each module be accompanied by annotations that specify the module. To help reduce the cost of writing specifications, this paper presents Houdini, an annotation assistant for the modular checker ESC Java. To infer suitable ESC Java annotations for a given program, Houdini generates a large number of candidate annotations and uses ESC Java to verify or refute each of these annotations. The paper describes the design, implementation, and preliminary evaluation of Houdini.",
"",
"Abstract interpretation provides an elegant formalism for performing program analysis. Unfortunately, designing and implementing a sound, precise, scalable, and extensible abstract interpreter is difficult. In this paper, we describe an approach to creating correct-by-construction abstract interpreters that also attain the fundamental limits on precision that abstract-interpretation theory establishes. Our approach requires the analysis designer to implement only a small number of operations. In particular, we describe a systematic method for implementing an abstract interpreter that solves the following problem:Given program P, and an abstract domain A, find the most-precise inductive A-invariant for P.",
"A program denotes computations in some universe of objects. Abstract interpretation of programs consists in using that denotation to describe computations in another universe of abstract objects, so that the results of abstract execution give some information on the actual computations. An intuitive example (which we borrow from Sintzoff [72]) is the rule of signs. The text -1515 * 17 may be understood to denote computations on the abstract universe (+), (-), (±) where the semantics of arithmetic operators is defined by the rule of signs. The abstract execution -1515 * 17 → -(+) * (+) → (-) * (+) → (-), proves that -1515 * 17 is a negative number. Abstract interpretation is concerned by a particular underlying structure of the usual universe of computations (the sign, in our example). It gives a summary of some facets of the actual executions of a program. In general this summary is simple to obtain but inaccurate (e.g. -1515 + 17 → -(+) + (+) → (-) + (+) → (±)). Despite its fundamentally incomplete results abstract interpretation allows the programmer or the compiler to answer questions which do not need full knowledge of program executions or which tolerate an imprecise answer, (e.g. partial correctness proofs of programs ignoring the termination problems, type checking, program optimizations which are not carried in the absence of certainty about their feasibility, …).",
"Explicitly stated program invariants can help programmers by identifying program properties that must be preserved when modifying code. In practice, however, these invariants are usually implicit. An alternative to expecting programmers to fully annotate code with invariants is to automatically infer likely invariants from the program itself. This research focuses on dynamic techniques for discovering invariants from execution traces. This article reports three results. First, it describes techniques for dynamically discovering invariants, along with an implementation, named Daikon, that embodies these techniques. Second, it reports on the application of Daikon to two sets of target programs. In programs from Gries's work (1981) on program derivation, the system rediscovered predefined invariants. In a C program lacking explicit invariants, the system discovered invariants that assisted a software evolution task. These experiments demonstrate that, at least for small programs, invariant inference is both accurate and useful. Third, it analyzes scalability issues, such as invariant detection runtime and accuracy, as functions of test suites and program points instrumented.",
"We describe a model checker for infinite-state sequential programs, based on Craig interpolation and the lazy abstraction paradigm. On device driver benchmarks, we observe a speedup of up to two orders of magnitude relative to a similar tool using predicate abstraction.",
"This article describes Apron , a freely available library dedicated to the static analysis of the numerical variables of programs by abstract interpretation. Its goal is threefold: provide analysis implementers with ready-to-use numerical abstractions under a unified API, encourage the research in numerical abstract domains by providing a platform for integration and comparison, and provide teaching and demonstration tools to disseminate knowledge on abstract interpretation.",
"We present abstract acceleration techniques for computing loop invariants for numerical programs with linear assignments and conditionals. Whereas abstract interpretation techniques typically over-approximate the set of reachable states iteratively, abstract acceleration captures the effect of the loop with a single, non-iterative transfer function applied to the initial states at the loop head. In contrast to previous acceleration techniques, our approach applies to any linear loop without restrictions. Its novelty lies in the use of the Jordan normal form decomposition of the loop body to derive symbolic expressions for the entries of the matrix modeling the effect of η ≥ Ο iterations of the loop. The entries of such a matrix depend on η through complex polynomial, exponential and trigonometric functions. Therefore, we introduces an abstract domain for matrices that captures the linear inequality relations between these complex expressions. This results in an abstract matrix for describing the fixpoint semantics of the loop. Our approach integrates smoothly into standard abstract interpreters and can handle programs with nested loops and loops containing conditional branches. We evaluate it over small but complex loops that are commonly found in control software, comparing it with other tools for computing linear loop invariants. The loops in our benchmarks typically exhibit polynomial, exponential and oscillatory behaviors that present challenges to existing approaches. Our approach finds non-trivial invariants to prove useful bounds on the values of variables for such loops, clearly outperforming the existing approaches in terms of precision while exhibiting good performance.",
"Because general algorithms for sequential equivalence checking require a state space traversal of the product machine, they are computationally expensive. In this paper, we present a new method for sequential equivalence checking which utilizes functionally equivalent signals to prove the equivalence of both circuits, thereby avoiding the state space traversal. The effectiveness of the proposed method is confirmed by experimental results on retimed and optimized ISCAS'89 benchmarks."
]
} |
1612.01198 | 2950569797 | The discovery of inductive invariants lies at the heart of static program verification. Presently, many automatic solutions to inductive invariant generation are inflexible, only applicable to certain classes of programs, or unpredictable. An automatic technique that circumvents these deficiencies to some extent is candidate-based invariant generation. This paper describes our efforts to apply candidate-based invariant generation in GPUVerify, a static checker for programs that run on GPUs. We study a set of GPU programs that contain loops, drawn from a number of open source suites and vendor SDKs. We describe the methodology we used to incrementally improve the invariant generation capabilities of GPUVerify to handle these benchmarks, through candidate-based invariant generation, using cheap static analysis to speculate potential program invariants. We also describe a set of experiments that we used to examine the effectiveness of our rules for candidate generation, assessing rules based on their generality (the extent to which they generate candidate invariants), hit rate (the extent to which the generated candidates hold), worth (the extent to which provable candidates actually help in allowing verification to succeed), and influence (the extent to which the success of one generation rule depends on candidates generated by another rule). The candidates produced by GPUVerify help to verify 231 of the 253 programs. This increase in precision, however, makes GPUVerify sluggish: the more candidates that are generated, the more time is spent determining which are inductive invariants. To speed up this process, we have investigated four under-approximating program analyses that aim to reject false candidates quickly and a framework whereby these analyses can run in sequence or in parallel. | Houdini was proposed as an annotation assistant for the ESC Java tool @cite_15 , and is formally presented in @cite_27 . 
The method is analogous to an invariant strengthening technique for circuit equivalence checking @cite_47 ; we believe the methods were discovered independently. Houdini can be viewed as a special instance of predicate abstraction @cite_12 , restricted to conjunctions of predicates. This restriction is what makes the runtime of Houdini predictable, involving a worst case number of solver calls proportional to the number of candidates. The restriction also makes it impossible to synthesize disjunctive invariants over predicates using Houdini. A recent compelling application of Houdini is in the Corral reachability checker, where Houdini is used to generate procedure summaries which in turn are used to guide the search for bugs @cite_11 . | {
"cite_N": [
"@cite_47",
"@cite_27",
"@cite_15",
"@cite_12",
"@cite_11"
],
"mid": [
"2123736383",
"",
"1498946538",
"1497571013",
"1507006488"
],
"abstract": [
"Because general algorithms for sequential equivalence checking require a state space traversal of the product machine, they are computationally expensive. In this paper, we present a new method for sequential equivalence checking which utilizes functionally equivalent signals to prove the equivalence of both circuits, thereby avoiding the state space traversal. The effectiveness of the proposed method is confirmed by experimental results on retimed and optimized ISCAS'89 benchmarks.",
"",
"A static program checker that performs modular checking can check one program module for errors without needing to analyze the entire program. Modular checking requires that each module be accompanied by annotations that specify the module. To help reduce the cost of writing specifications, this paper presents Houdini, an annotation assistant for the modular checker ESC Java. To infer suitable ESC Java annotations for a given program, Houdini generates a large number of candidate annotations and uses ESC Java to verify or refute each of these annotations. The paper describes the design, implementation, and preliminary evaluation of Houdini.",
"In this paper, we propose a method for the automatic construction of an abstract state graph of an arbitrary system using the Pvs theorem prover.",
"Consider a sequential programming language with control flow constructs such as assignments, choice, loops, and procedure calls. We restrict the syntax of expressions in this language to one that can be efficiently decided by a satisfiability-modulo-theories solver. For such a language, we define the problem of deciding whether a program can reach a particular control location as the reachability-modulo-theories problem. This paper describes the architecture of Corral, a semi-algorithm for the reachability-modulo-theories problem. Corraluses novel algorithms for inlining procedures on demand (Stratified Inlining) and abstraction refinement (Hierarchical Refinement). The paper also presents an evaluation of Corralagainst other related tools. Corralconsistently outperforms its competitors on most benchmarks."
]
} |
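The Houdini scheme described in the related-work text above — start from the full conjunction of candidates and repeatedly drop refuted ones — can be sketched as follows. This is an illustrative toy, not the GPUVerify implementation: a real tool discharges each check with an SMT solver, whereas here `is_inductive` does a bounded concrete check over a made-up loop (`x := 0; while * do x := x + 2`), and all predicate names are invented for the example.

```python
# Toy Houdini-style candidate elimination for the loop:
#   x := 0; while * do x := x + 2
# Candidate invariants, as named predicates over the loop variable x.
preds = {
    "x >= 0":     lambda x: x >= 0,
    "x <= 0":     lambda x: x <= 0,
    "x % 2 == 0": lambda x: x % 2 == 0,
}

def is_inductive(cand, current):
    """Bounded stand-in for a solver call: `cand` must hold in the
    initial state and be preserved by the loop body from every state
    satisfying the current candidate conjunction."""
    f = preds[cand]
    if not f(0):                              # initiation: x = 0
        return False
    return all(f(x + 2)                       # consecution: x := x + 2
               for x in range(-100, 100)
               if all(preds[c](x) for c in current))

def houdini(candidates):
    """Drop refuted candidates until the remaining conjunction is
    inductive; each refutation permanently removes one candidate, so
    the number of elimination rounds is bounded by the candidate count."""
    invariant = set(candidates)
    changed = True
    while changed:
        changed = False
        for c in sorted(invariant):           # snapshot; safe to discard
            if not is_inductive(c, invariant):
                invariant.discard(c)
                changed = True
    return invariant

surviving = houdini(preds)
```

Here `x <= 0` is refuted (it is not preserved by `x := x + 2`), while the other two candidates survive as the inductive invariant.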
1612.01198 | 2950569797 | The discovery of inductive invariants lies at the heart of static program verification. Presently, many automatic solutions to inductive invariant generation are inflexible, only applicable to certain classes of programs, or unpredictable. An automatic technique that circumvents these deficiencies to some extent is candidate-based invariant generation. This paper describes our efforts to apply candidate-based invariant generation in GPUVerify, a static checker for programs that run on GPUs. We study a set of GPU programs that contain loops, drawn from a number of open source suites and vendor SDKs. We describe the methodology we used to incrementally improve the invariant generation capabilities of GPUVerify to handle these benchmarks, through candidate-based invariant generation, using cheap static analysis to speculate potential program invariants. We also describe a set of experiments that we used to examine the effectiveness of our rules for candidate generation, assessing rules based on their generality (the extent to which they generate candidate invariants), hit rate (the extent to which the generated candidates hold), worth (the extent to which provable candidates actually help in allowing verification to succeed), and influence (the extent to which the success of one generation rule depends on candidates generated by another rule). The candidates produced by GPUVerify help to verify 231 of the 253 programs. This increase in precision, however, makes GPUVerify sluggish: the more candidates that are generated, the more time is spent determining which are inductive invariants. To speed up this process, we have investigated four under-approximating program analyses that aim to reject false candidates quickly and a framework whereby these analyses can run in sequence or in parallel. 
| Abstract interpretation @cite_55 is a general program analysis framework that can be parameterized to generate inductive invariants over a given abstract domain. For instance, the Interproc analyzer synthesizes invariants over the abstract domain of linear inequalities, using the Apron library @cite_40 . Predicate abstraction is abstract interpretation over the domain of Boolean combinations of predicates @cite_6 , and Houdini is thus a form of abstract interpretation where the domain is restricted to conjunctions of predicates. The main disadvantages of abstract interpretation are that it is inflexible, in the sense that generation of invariants beyond a given abstract domain requires a bespoke new domain to be crafted, and that to ensure convergence to a fixpoint it is necessary to apply widening, which can be hard to control in a predictable manner. In contrast, a Houdini-based approach can easily be "tweaked" by adding new candidate generation rules on an example-driven basis, as we have demonstrated in this paper. Convergence to a fixpoint is also predictable based on the known set of candidates. In recent work, a hybrid approach has been proposed in an attempt to combine the benefits of abstract interpretation and candidate-based invariant generation @cite_60 . | {
"cite_N": [
"@cite_40",
"@cite_55",
"@cite_6",
"@cite_60"
],
"mid": [
"1518247129",
"2043100293",
"1526056075",
"2158735282"
],
"abstract": [
"This article describes Apron , a freely available library dedicated to the static analysis of the numerical variables of programs by abstract interpretation. Its goal is threefold: provide analysis implementers with ready-to-use numerical abstractions under a unified API, encourage the research in numerical abstract domains by providing a platform for integration and comparison, and provide teaching and demonstration tools to disseminate knowledge on abstract interpretation.",
"A program denotes computations in some universe of objects. Abstract interpretation of programs consists in using that denotation to describe computations in another universe of abstract objects, so that the results of abstract execution give some information on the actual computations. An intuitive example (which we borrow from Sintzoff [72]) is the rule of signs. The text -1515 * 17 may be understood to denote computations on the abstract universe (+), (-), (±) where the semantics of arithmetic operators is defined by the rule of signs. The abstract execution -1515 * 17 → -(+) * (+) → (-) * (+) → (-), proves that -1515 * 17 is a negative number. Abstract interpretation is concerned by a particular underlying structure of the usual universe of computations (the sign, in our example). It gives a summary of some facets of the actual executions of a program. In general this summary is simple to obtain but inaccurate (e.g. -1515 + 17 → -(+) + (+) → (-) + (+) → (±)). Despite its fundamentally incomplete results abstract interpretation allows the programmer or the compiler to answer questions which do not need full knowledge of program executions or which tolerate an imprecise answer, (e.g. partial correctness proofs of programs ignoring the termination problems, type checking, program optimizations which are not carried in the absence of certainty about their feasibility, …).",
"",
"Abstract interpretation provides an elegant formalism for performing program analysis. Unfortunately, designing and implementing a sound, precise, scalable, and extensible abstract interpreter is difficult. In this paper, we describe an approach to creating correct-by-construction abstract interpreters that also attain the fundamental limits on precision that abstract-interpretation theory establishes. Our approach requires the analysis designer to implement only a small number of operations. In particular, we describe a systematic method for implementing an abstract interpreter that solves the following problem:Given program P, and an abstract domain A, find the most-precise inductive A-invariant for P."
]
} |
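The rule-of-signs example quoted in the abstract interpretation abstract above can be sketched concretely. The domain {(+), (-), (±)} and both worked examples (`-1515 * 17` and `-1515 + 17`) come from the cited text; the Python encoding below is an illustrative choice of ours, not from any cited tool.

```python
# Rule-of-signs abstract domain: {+, -, ±}, with ± the "don't know" top.
POS, NEG, TOP = "+", "-", "±"

def alpha(n):
    """Abstraction of a concrete integer; 0 is only representable
    imprecisely as ± in this three-element domain."""
    return POS if n > 0 else NEG if n < 0 else TOP

def abs_mul(a, b):
    """Abstract multiplication: sign of a product is determined
    whenever both operand signs are known."""
    if TOP in (a, b):
        return TOP
    return POS if a == b else NEG

def abs_add(a, b):
    """Abstract addition: mixed or unknown signs lose all information."""
    return a if a == b else TOP
```

This reproduces the behavior described in the abstract: `-1515 * 17` abstracts to `(-) * (+) = (-)`, a precise answer, while `-1515 + 17` abstracts to `(-) + (+) = (±)`, a sound but imprecise one.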
1612.01198 | 2950569797 | The discovery of inductive invariants lies at the heart of static program verification. Presently, many automatic solutions to inductive invariant generation are inflexible, only applicable to certain classes of programs, or unpredictable. An automatic technique that circumvents these deficiencies to some extent is candidate-based invariant generation. This paper describes our efforts to apply candidate-based invariant generation in GPUVerify, a static checker for programs that run on GPUs. We study a set of GPU programs that contain loops, drawn from a number of open source suites and vendor SDKs. We describe the methodology we used to incrementally improve the invariant generation capabilities of GPUVerify to handle these benchmarks, through candidate-based invariant generation, using cheap static analysis to speculate potential program invariants. We also describe a set of experiments that we used to examine the effectiveness of our rules for candidate generation, assessing rules based on their generality (the extent to which they generate candidate invariants), hit rate (the extent to which the generated candidates hold), worth (the extent to which provable candidates actually help in allowing verification to succeed), and influence (the extent to which the success of one generation rule depends on candidates generated by another rule). The candidates produced by GPUVerify help to verify 231 of the 253 programs. This increase in precision, however, makes GPUVerify sluggish: the more candidates that are generated, the more time is spent determining which are inductive invariants. To speed up this process, we have investigated four under-approximating program analyses that aim to reject false candidates quickly and a framework whereby these analyses can run in sequence or in parallel. 
| There has been significant progress recently on invariant generation for a restricted class of programs that operate on unbounded integers and only compute affine expressions over program variables. Under these restrictions, novel applications of Craig interpolation @cite_57 , abduction @cite_37 and abstract acceleration @cite_5 have been shown to be effective in invariant synthesis. The weakness of these methods lies in the restrictions they impose on input programs. In our application domain, for example, programs operate on fixed-width bit-vectors and floating point numbers. It is necessary to reason precisely about bit-vectors to capture arithmetic using powers-of-two, frequently encoded efficiently using shifting and masking, and we require support for uninterpreted functions to abstract floating point operators but retain their functional properties. Furthermore, GPU kernels frequently exhibit non-linear computations. For example, reduction operations involve loops in which a counter varies exponentially, in powers of two, between a lower and an upper bound. These characteristics render methods for affine programs inapplicable in our setting. | {
"cite_N": [
"@cite_57",
"@cite_5",
"@cite_37"
],
"mid": [
"1565898282",
"2095810701",
"1970168990"
],
"abstract": [
"We describe a model checker for infinite-state sequential programs, based on Craig interpolation and the lazy abstraction paradigm. On device driver benchmarks, we observe a speedup of up to two orders of magnitude relative to a similar tool using predicate abstraction.",
"We present abstract acceleration techniques for computing loop invariants for numerical programs with linear assignments and conditionals. Whereas abstract interpretation techniques typically over-approximate the set of reachable states iteratively, abstract acceleration captures the effect of the loop with a single, non-iterative transfer function applied to the initial states at the loop head. In contrast to previous acceleration techniques, our approach applies to any linear loop without restrictions. Its novelty lies in the use of the Jordan normal form decomposition of the loop body to derive symbolic expressions for the entries of the matrix modeling the effect of η ≥ Ο iterations of the loop. The entries of such a matrix depend on η through complex polynomial, exponential and trigonometric functions. Therefore, we introduces an abstract domain for matrices that captures the linear inequality relations between these complex expressions. This results in an abstract matrix for describing the fixpoint semantics of the loop. Our approach integrates smoothly into standard abstract interpreters and can handle programs with nested loops and loops containing conditional branches. We evaluate it over small but complex loops that are commonly found in control software, comparing it with other tools for computing linear loop invariants. The loops in our benchmarks typically exhibit polynomial, exponential and oscillatory behaviors that present challenges to existing approaches. Our approach finds non-trivial invariants to prove useful bounds on the values of variables for such loops, clearly outperforming the existing approaches in terms of precision while exhibiting good performance.",
"This paper presents a new method for generating inductive loop invariants that are expressible as boolean combinations of linear integer constraints. The key idea underlying our technique is to perform a backtracking search that combines Hoare-style verification condition generation with a logical abduction procedure based on quantifier elimination to speculate candidate invariants. Starting with true, our method iteratively strengthens loop invariants until they are inductive and strong enough to verify the program. A key feature of our technique is that it is lazy: It only infers those invariants that are necessary for verifying program correctness. Furthermore, our technique can infer arbitrary boolean combinations (including disjunctions) of linear invariants. We have implemented the proposed approach in a tool called HOLA. Our experiments demonstrate that HOLA can infer interesting invariants that are beyond the reach of existing state-of-the-art invariant generation tools."
]
} |
1612.01198 | 2950569797 | The discovery of inductive invariants lies at the heart of static program verification. Presently, many automatic solutions to inductive invariant generation are inflexible, only applicable to certain classes of programs, or unpredictable. An automatic technique that circumvents these deficiencies to some extent is candidate-based invariant generation. This paper describes our efforts to apply candidate-based invariant generation in GPUVerify, a static checker for programs that run on GPUs. We study a set of GPU programs that contain loops, drawn from a number of open source suites and vendor SDKs. We describe the methodology we used to incrementally improve the invariant generation capabilities of GPUVerify to handle these benchmarks, through candidate-based invariant generation, using cheap static analysis to speculate potential program invariants. We also describe a set of experiments that we used to examine the effectiveness of our rules for candidate generation, assessing rules based on their generality (the extent to which they generate candidate invariants), hit rate (the extent to which the generated candidates hold), worth (the extent to which provable candidates actually help in allowing verification to succeed), and influence (the extent to which the success of one generation rule depends on candidates generated by another rule). The candidates produced by GPUVerify help to verify 231 of the 253 programs. This increase in precision, however, makes GPUVerify sluggish: the more candidates that are generated, the more time is spent determining which are inductive invariants. To speed up this process, we have investigated four under-approximating program analyses that aim to reject false candidates quickly and a framework whereby these analyses can run in sequence or in parallel. | The techniques discussed above all use static analysis to establish program invariants with certainty. 
In contrast, dynamic invariant generation, pioneered by the Daikon system @cite_41, employs dynamic analysis with respect to a test suite to speculate invariants: facts that are found to hold invariantly during testing, with statistical evidence that the dynamic invariance of these facts appears to be non-coincidental. This method provides no guarantee that the suggested facts are actually invariants. A study combining the Daikon method with extended static checking for Java considered the use of dynamically generated invariants as a source of candidates for Houdini @cite_17 . | {
"cite_N": [
"@cite_41",
"@cite_17"
],
"mid": [
"2103318645",
"1992371286"
],
"abstract": [
"Explicitly stated program invariants can help programmers by identifying program properties that must be preserved when modifying code. In practice, however, these invariants are usually implicit. An alternative to expecting programmers to fully annotate code with invariants is to automatically infer likely invariants from the program itself. This research focuses on dynamic techniques for discovering invariants from execution traces. This article reports three results. First, it describes techniques for dynamically discovering invariants, along with an implementation, named Daikon, that embodies these techniques. Second, it reports on the application of Daikon to two sets of target programs. In programs from Gries's work (1981) on program derivation, the system rediscovered predefined invariants. In a C program lacking explicit invariants, the system discovered invariants that assisted a software evolution task. These experiments demonstrate that, at least for small programs, invariant inference is both accurate and useful. Third, it analyzes scalability issues, such as invariant detection runtime and accuracy, as functions of test suites and program points instrumented.",
"Static checking can verify the absence of errors in a program, but often requires written annotations or specifications. As a result, static checking can be difficult to use effectively: it can be difficult to determine a specification and tedious to annotate programs. Automated tools that aid the annotation process can decrease the cost of static checking and enable it to be more widely used.This paper describes an evaluation of the effectiveness of two techniques, one static and one dynamic, to assist the annotation process. We quantitatively and qualitatively evaluate 41 programmers using ESC Java in a program verification task over three small programs, using Houdini for static inference and Daikon for dynamic inference. We also investigate the effect of unsoundness in the dynamic analysis.Statistically significant results show that both inference tools improve task completion; Daikon enables users to express more correct invariants; unsoundness of the dynamic analysis is little hindrance to users; and users imperfectly exploit Houdini. Interviews indicate that beginning users found Daikon to be helpful; Houdini to be neutral; static checking to be of potential practical use; and both assistance tools to have unique benefits.Our observations not only provide a critical evaluation of these two techniques, but also highlight important considerations for creating future assistance tools."
]
} |
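The dynamic speculation scheme described above can be reduced to a few lines. This is a toy illustration, not Daikon itself: the candidate predicates, variable names, and traces are all invented for the example, and — as the text stresses — surviving candidates are only *likely* invariants, with no soundness guarantee.

```python
# Daikon-style dynamic invariant speculation: a candidate predicate is
# reported as a likely invariant iff it holds on every observed trace.
candidates = {
    "x >= 0":     lambda x, y: x >= 0,
    "x < y":      lambda x, y: x < y,
    "y == 2 * x": lambda x, y: y == 2 * x,
}

# Observed (x, y) program states from a hypothetical test suite.
traces = [(0, 0), (1, 2), (3, 6)]

likely_invariants = {
    name for name, pred in candidates.items()
    if all(pred(x, y) for (x, y) in traces)
}
```

With these traces, `x < y` is falsified by the state `(0, 0)` and is dropped, while the other two candidates survive — even though a larger test suite might later falsify them too, which is exactly why such output is well suited as *candidate* input to a Houdini-style checker.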
1612.01198 | 2950569797 | The discovery of inductive invariants lies at the heart of static program verification. Presently, many automatic solutions to inductive invariant generation are inflexible, only applicable to certain classes of programs, or unpredictable. An automatic technique that circumvents these deficiencies to some extent is candidate-based invariant generation. This paper describes our efforts to apply candidate-based invariant generation in GPUVerify, a static checker for programs that run on GPUs. We study a set of GPU programs that contain loops, drawn from a number of open source suites and vendor SDKs. We describe the methodology we used to incrementally improve the invariant generation capabilities of GPUVerify to handle these benchmarks, through candidate-based invariant generation, using cheap static analysis to speculate potential program invariants. We also describe a set of experiments that we used to examine the effectiveness of our rules for candidate generation, assessing rules based on their generality (the extent to which they generate candidate invariants), hit rate (the extent to which the generated candidates hold), worth (the extent to which provable candidates actually help in allowing verification to succeed), and influence (the extent to which the success of one generation rule depends on candidates generated by another rule). The candidates produced by GPUVerify help to verify 231 of the 253 programs. This increase in precision, however, makes GPUVerify sluggish: the more candidates that are generated, the more time is spent determining which are inductive invariants. To speed up this process, we have investigated four under-approximating program analyses that aim to reject false candidates quickly and a framework whereby these analyses can run in sequence or in parallel. 
| A related study on invariant generation @cite_43 aimed to evaluate whether it is better to rely on manual effort, automated techniques or a combination of both in generating program invariants. The study concludes that a combination is required: Daikon inferred @math times as many invariants as specified manually, but could only find approximately @math of those specified manually. The study was conducted on 25 classes taken partially from widely used libraries and partially written by students. The size of the benchmark set allowed the authors to investigate each inferred assertion individually; this is not feasible in our study due to the substantially larger number of benchmarks. | {
"cite_N": [
"@cite_43"
],
"mid": [
"2114178685"
],
"abstract": [
"Where do contracts - specification elements embedded in executable code - come from? To produce them, should we rely on the programmers, on automatic tools, or some combination? Recent work, in particular the Daikon system, has shown that it is possible to infer some contracts automatically from program executions. The main incentive has been an assumption that most programmers are reluctant to invent the contracts themselves. The experience of contract-supporting languages, notably Eiffel, disproves that assumption: programmers will include contracts if given the right tools. That experience also shows, however, that the resulting contracts are generally partial and occasionally incorrect. Contract inference tools provide the opportunity for studying objectively the quality of programmer-written contracts, and for assessing the respective roles of humans and tools. Working on 25 classes taken from different sources such as widely-used standard libraries and code written by students, we applied Daikon to infer contracts and compared the results (totaling more than 19500 inferred assertion clauses) with the already present contracts. We found that a contract inference tool can be used to strengthen programmer-written contracts, but cannot infer all contracts that humans write. The tool generates around five times as many relevant assertion clauses as written by programmers; but it only finds around 60% of those originally written by programmers. Around a third of the generated assertion clauses are either incorrect or irrelevant. The study also uncovered interesting correlations between the quality of inferred contracts and some code metrics."
]
} |
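The candidate-based workflow described in the row above (cheaply speculate candidate invariants, then discard those that are not inductive) can be sketched with a toy Houdini-style elimination loop. Everything below is invented for illustration: the integer loop, the candidate predicates, and the sampled state space are hypothetical, and GPUVerify's real analysis operates on GPU kernels rather than on a toy transition function.

```python
# Toy Houdini-style candidate elimination (illustrative sketch only).
# Candidates are predicates over a program state; we iteratively drop any
# candidate that fails on loop entry or is not preserved by one iteration.

def houdini(candidates, init_state, step, guard, state_space):
    """Return the maximal subset of `candidates` (name -> predicate) that is
    an inductive invariant of `while guard(s): s = step(s)`."""
    survivors = dict(candidates)
    changed = True
    while changed:
        changed = False
        for name, pred in list(survivors.items()):
            if not pred(init_state):            # must hold on loop entry
                del survivors[name]
                changed = True
                continue
            # must be preserved from every state satisfying all survivors
            for s in state_space:
                if (guard(s) and all(p(s) for p in survivors.values())
                        and not pred(step(s))):
                    del survivors[name]
                    changed = True
                    break
    return set(survivors)

# Loop under analysis: x := 0; while x < 10: x := x + 2
candidates = {
    "x >= 0":    lambda x: x >= 0,
    "x is even": lambda x: x % 2 == 0,
    "x < 5":     lambda x: x < 5,   # speculative candidate, not inductive
}
inv = houdini(candidates, init_state=0, step=lambda x: x + 2,
              guard=lambda x: x < 10, state_space=range(-20, 40))
# surviving candidates: {"x >= 0", "x is even"}
```

The false candidate "x < 5" is rejected because the state x = 4 satisfies all candidates yet steps to 6; this is the kind of quick rejection of false candidates the row above discusses accelerating.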
1612.01341 | 2559853244 | Existing person re-identification models are poor for scaling up to large data required in real-world applications due to: (1) Complexity: They employ complex models for optimal performance resulting in high computational cost for training at a large scale; (2) Inadaptability: Once trained, they are unsuitable for incremental update to incorporate any new data available. This work proposes a truly scalable solution to re-id by addressing both problems. Specifically, a Highly Efficient Regression (HER) model is formulated by embedding the Fisher's criterion to a ridge regression model for very fast re-id model learning with scalable memory storage usage. Importantly, this new HER model supports faster than real-time incremental model updates therefore making real-time active learning feasible in re-id with human-in-the-loop. Extensive experiments show that such a simple and fast model not only outperforms notably the state-of-the-art re-id methods, but also is more scalable to large data with additional benefits to active learning for reducing human labelling effort in re-id deployment. | Ridge regression @cite_11 @cite_41 , as a regularised least squares model, is one of the most well-studied machine learning models. It has a simple closed-form solution solved by a linear system, and thus low model complexity. Furthermore, many well-optimised algorithms @cite_48 can be readily applied to large data. Finally, its adaptable solution supports efficient model update for incremental learning @cite_42 . The new HER model casts re-id into such a regression problem, benefiting from all of its advantages in scalability. To explore ridge regression for discriminative re-id verification tasks, the proposed HER model is further embedded with the criterion of Fisher Discriminant Analysis (FDA) @cite_56 @cite_58 to encode person identity-discriminative information. 
The relationship between FDA and linear regression has been studied for binary @cite_49 and multi-class @cite_41 @cite_53 classification tasks. Recently, similar connections have been discovered for their regularised counterparts @cite_2 @cite_55 @cite_7 . However, this work is the first to formulate it for a verification setting as in re-id. For its incremental extension, our model HER @math differs significantly from @cite_42 , which only supports updates on a single sample without regularisation. | {
"cite_N": [
"@cite_7",
"@cite_41",
"@cite_48",
"@cite_53",
"@cite_42",
"@cite_55",
"@cite_56",
"@cite_49",
"@cite_2",
"@cite_58",
"@cite_11"
],
"mid": [
"862919699",
"1554944419",
"2097897435",
"1978286881",
"2131459196",
"",
"2001619934",
"",
"",
"2135346934",
"2047028564"
],
"abstract": [
"Linear discriminant analysis (LDA) is a popular dimensionality reduction and classification method that simultaneously maximizes between-class scatter and minimizes within-class scatter. In this paper, we verify the equivalence of LDA and least squares (LS) with a set of dependent variable matrices. The equivalence is in the sense that the LDA solution matrix and the LS solution matrix have the same range. The resulting LS provides an intuitive interpretation in which its solution performs data clustering according to class labels. Further, the fact that LDA and LS have the same range allows us to design a two-stage algorithm that computes the LDA solution given by generalized eigenvalue decomposition (GEVD), much faster than computing the original GEVD. Experimental results demonstrate the equivalence of the LDA solution and the proposed LS solution.",
"During the past decade there has been an explosion in computation and information technology. With it have come vast amounts of data in a variety of fields such as medicine, biology, finance, and marketing. The challenge of understanding these data has led to the development of new tools in the field of statistics, and spawned new areas such as data mining, machine learning, and bioinformatics. Many of these tools have common underpinnings but are often expressed with different terminology. This book describes the important ideas in these areas in a common conceptual framework. While the approach is statistical, the emphasis is on concepts rather than mathematics. Many examples are given, with a liberal use of color graphics. It is a valuable resource for statisticians and anyone interested in data mining in science or industry. The book's coverage is broad, from supervised learning (prediction) to unsupervised learning. The many topics include neural networks, support vector machines, classification trees and boosting---the first comprehensive treatment of this topic in any book. @PARASPLIT This major new edition features many topics not covered in the original, including graphical models, random forests, ensemble methods, least angle regression and path algorithms for the lasso, non-negative matrix factorization, and spectral clustering. There is also a chapter on methods for \"wide\" data (p bigger than n), including multiple testing and false discovery rates. @PARASPLIT Trevor Hastie, Robert Tibshirani, and Jerome Friedman are professors of statistics at Stanford University. They are prominent researchers in this area: Hastie and Tibshirani developed generalized additive models and wrote a popular book of that title. Hastie co-developed much of the statistical modeling software and environment in R/S-PLUS and invented principal curves and surfaces. Tibshirani proposed the lasso and is co-author of the very successful An Introduction to the Bootstrap.
Friedman is the co-inventor of many data-mining tools including CART, MARS, projection pursuit and gradient boosting.",
"An iterative method is given for solving Ax = b and min ||Ax - b||_2, where the matrix A is large and sparse. The method is based on the bidiagonalization procedure of Golub and Kahan. It is analytically equivalent to the standard method of conjugate gradients, but possesses more favorable numerical properties. Reliable stopping criteria are derived, along with estimates of standard errors for x and the condition number of A. These are used in the FORTRAN implementation of the method, subroutine LSQR. Numerical tests are described comparing LSQR with several other conjugate-gradient algorithms, indicating that LSQR is the most reliable algorithm when A is ill-conditioned.",
"In this paper, a relationship between linear discriminant analysis (LDA) and the generalized minimum squared error (MSE) solution is presented. The generalized MSE solution is shown to be equivalent to applying a certain classification rule in the space defined by LDA. The relationship between the MSE solution and Fisher discriminant analysis is extended to multiclass problems and also to undersampled problems for which the classical LDA is not applicable due to singularity of the scatter matrices. In addition, an efficient algorithm for LDA is proposed exploiting its relationship with the MSE procedure. Extensive experiments verify the theoretical results.",
"Linear discriminant analysis (LDA) is a well-known dimension reduction approach, which projects high-dimensional data into a low-dimensional space with the best separation of different classes. In many tasks, the data accumulates over time, and thus incremental LDA is more desirable than batch LDA. Several incremental LDA algorithms have been developed and achieved success; however, the eigen-problem involved requires a large computation cost, which hampers the efficiency of these algorithms. In this paper, we propose a new incremental LDA algorithm, LS-ILDA, based on the least square solution of LDA. When new samples are received, LS-ILDA incrementally updates the least square solution of LDA. Our analysis discloses that this algorithm produces the exact least square solution of batch LDA, while its computational cost is O(min(n, d) × d) for one update on dataset containing n instances in d-dimensional space. Experimental results show that comparing with state-of-the-art incremental LDA algorithms, our proposed LS-ILDA achieves high accuracy with low time cost.",
"",
"",
"",
"",
"This completely revised second edition presents an introduction to statistical pattern recognition. Pattern recognition in general covers a wide range of problems: it is applied to engineering problems, such as character readers and wave form analysis as well as to brain modeling in biology and psychology. Statistical decision and estimation, which are the main subjects of this book, are regarded as fundamental to the study of pattern recognition. This book is appropriate as a text for introductory courses in pattern recognition and as a reference book for workers in the field. Each chapter contains computer projects as well as exercises.",
"In multiple regression it is shown that parameter estimates based on minimum residual sum of squares have a high probability of being unsatisfactory, if not incorrect, if the prediction vectors are not orthogonal. Proposed is an estimation procedure based on adding small positive quantities to the diagonal of X′X. Introduced is the ridge trace, a method for showing in two dimensions the effects of nonorthogonality. It is then shown how to augment X′X to obtain biased estimates with smaller mean square error."
]
} |
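The scalability argument in the row above rests on ridge regression's closed-form solution, w = (XᵀX + λI)⁻¹Xᵀy, and on cheap rank-one updates for incremental learning. The numpy sketch below shows that generic machinery only: this is plain ridge regression, not the HER model itself, and the Sherman-Morrison update illustrates the kind of incremental step such models exploit rather than the paper's specific update rule.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge regression: solve (X^T X + lam*I) w = X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def ridge_update(A_inv, b, x_new, y_new):
    """Absorb one new sample (x_new, y_new) via the Sherman-Morrison
    identity: A_inv tracks (X^T X + lam*I)^-1 and b tracks X^T y."""
    Ax = A_inv @ x_new
    A_inv_new = A_inv - np.outer(Ax, Ax) / (1.0 + x_new @ Ax)
    b_new = b + y_new * x_new
    return A_inv_new, b_new, A_inv_new @ b_new   # inverse, rhs, new weights

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.01 * rng.standard_normal(100)
lam = 0.1

# batch fit state, then absorb one extra sample incrementally
A_inv = np.linalg.inv(X.T @ X + lam * np.eye(5))
b = X.T @ y
x_new, y_new = rng.standard_normal(5), 0.7
_, _, w_inc = ridge_update(A_inv, b, x_new, y_new)

# the incremental result matches refitting from scratch on augmented data
w_batch = ridge_fit(np.vstack([X, x_new]), np.append(y, y_new), lam)
```

The update costs O(d²) per sample instead of re-solving a d × d linear system, which is the property that makes real-time incremental model updates feasible.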
1612.01380 | 2952781046 | While machine learning approaches to image restoration offer great promise, current methods risk training models fixated on performing well only for image corruption of a particular level of difficulty---such as a certain level of noise or blur. First, we examine the weakness of conventional "fixated" models and demonstrate that training general models to handle arbitrary levels of corruption is indeed non-trivial. Then, we propose an on-demand learning algorithm for training image restoration models with deep convolutional neural networks. The main idea is to exploit a feedback mechanism to self-generate training instances where they are needed most, thereby learning models that can generalize across difficulty levels. On four restoration tasks---image inpainting, pixel interpolation, image deblurring, and image denoising---and three diverse datasets, our approach consistently outperforms both the status quo training procedure and curriculum learning alternatives. | Deep learning for image restoration is on the rise. Vincent al @cite_17 propose one of the most well-known models: the stacked denoising auto-encoder. A multi-layer perceptron (MLP) is applied to image denoising by Burger al @cite_18 and post-deblurring denoising by Schuler al @cite_21 . Convolutional neural networks are also applied to natural image denoising @cite_4 and used to remove noisy patterns (e.g., dirt rain) @cite_31 . Apart from denoising, deep learning is gaining traction for various other low-level tasks: super-resolution @cite_43 @cite_23 , inpainting @cite_40 @cite_45 , deconvolution @cite_20 , matting @cite_10 , and colorization @cite_2 @cite_16 . While many models specialize the architecture towards one restoration task, recent work by Liu al presents a unified network for multiple tasks @cite_13 . Our encoder-decoder pipeline also applies across tasks, and serves as a good testbed for our main contribution---the idea of on-demand learning. 
Our idea has the potential to benefit any existing method currently limited to training with a narrow band of difficulty @cite_40 @cite_45 @cite_4 @cite_18 @cite_21 @cite_13 . | {
"cite_N": [
"@cite_13",
"@cite_18",
"@cite_4",
"@cite_21",
"@cite_43",
"@cite_40",
"@cite_45",
"@cite_23",
"@cite_2",
"@cite_31",
"@cite_16",
"@cite_10",
"@cite_20",
"@cite_17"
],
"mid": [
"",
"2037642501",
"2098477387",
"1973567017",
"",
"2342877626",
"2479644247",
"2950689937",
"2308529009",
"2154815154",
"2326925005",
"2520247582",
"2124964692",
"2025768430"
],
"abstract": [
"",
"Image denoising can be described as the problem of mapping from a noisy image to a noise-free image. The best currently available denoising methods approximate this mapping with cleverly engineered algorithms. In this work we attempt to learn this mapping directly with a plain multi layer perceptron (MLP) applied to image patches. While this has been done before, we will show that by training on large image databases we are able to compete with the current state-of-the-art image denoising methods. Furthermore, our approach is easily adapted to less extensively studied types of noise (by merely exchanging the training data), for which we achieve excellent results as well.",
"We present an approach to low-level vision that combines two main ideas: the use of convolutional networks as an image processing architecture and an unsupervised learning procedure that synthesizes training samples from specific noise models. We demonstrate this approach on the challenging problem of natural image denoising. Using a test set with a hundred natural images, we find that convolutional networks provide comparable and in some cases superior performance to state of the art wavelet and Markov random field (MRF) methods. Moreover, we find that a convolutional network offers similar performance in the blind de-noising setting as compared to other techniques in the non-blind setting. We also show how convolutional networks are mathematically related to MRF approaches by presenting a mean field theory for an MRF specially designed for image denoising. Although these approaches are related, convolutional networks avoid computational difficulties in MRF approaches that arise from probabilistic learning and inference. This makes it possible to learn image processing architectures that have a high degree of representational power (we train models with over 15,000 parameters), but whose computational expense is significantly less than that associated with inference in MRF approaches with even hundreds of parameters.",
"Image deconvolution is the ill-posed problem of recovering a sharp image, given a blurry one generated by a convolution. In this work, we deal with space-invariant non-blind deconvolution. Currently, the most successful methods involve a regularized inversion of the blur in Fourier domain as a first step. This step amplifies and colors the noise, and corrupts the image information. In a second (and arguably more difficult) step, one then needs to remove the colored noise, typically using a cleverly engineered algorithm. However, the methods based on this two-step approach do not properly address the fact that the image information has been corrupted. In this work, we also rely on a two-step procedure, but learn the second step on a large dataset of natural images, using a neural network. We will show that this approach outperforms the current state-of-the-art on a large dataset of artificially blurred images. We demonstrate the practical applicability of our method in a real-world example with photographic out-of-focus blur.",
"",
"We present an unsupervised visual feature learning algorithm driven by context-based pixel prediction. By analogy with auto-encoders, we propose Context Encoders -- a convolutional neural network trained to generate the contents of an arbitrary image region conditioned on its surroundings. In order to succeed at this task, context encoders need to both understand the content of the entire image, as well as produce a plausible hypothesis for the missing part(s). When training context encoders, we have experimented with both a standard pixel-wise reconstruction loss, as well as a reconstruction plus an adversarial loss. The latter produces much sharper results because it can better handle multiple modes in the output. We found that a context encoder learns a representation that captures not just appearance but also the semantics of visual structures. We quantitatively demonstrate the effectiveness of our learned features for CNN pre-training on classification, detection, and segmentation tasks. Furthermore, context encoders can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods.",
"In this paper, we propose a novel method for image inpainting based on a Deep Convolutional Generative Adversarial Network (DCGAN). We define a loss function consisting of two parts: (1) a contextual loss that preserves similarity between the input corrupted image and the recovered image, and (2) a perceptual loss that ensures a perceptually realistic output image. Given a corrupted image with missing values, we use back-propagation on this loss to map the corrupted image to a smaller latent space. The mapped vector is then passed through the generative model to predict the missing content. The proposed framework is evaluated on the CelebA and SVHN datasets for two challenging inpainting tasks with random 80% corruption and large blocky corruption. Experiments show that our method can successfully predict semantic information in the missing region and achieve pixel-level photorealism, which is impossible by almost all existing methods.",
"We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by Gatys et al. in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results.",
"We develop a fully automatic image colorization system. Our approach leverages recent advances in deep networks, exploiting both low-level and semantic representations. As many scene elements naturally appear according to multimodal color distributions, we train our model to predict per-pixel color histograms. This intermediate output can be used to automatically generate a color image, or further manipulated prior to image formation. On both fully and partially automatic colorization tasks, we outperform existing methods. We also explore colorization as a vehicle for self-supervised visual representation learning.",
"Photographs taken through a window are often compromised by dirt or rain present on the window surface. Common cases of this include pictures taken from inside a vehicle, or outdoor security cameras mounted inside a protective enclosure. At capture time, defocus can be used to remove the artifacts, but this relies on achieving a shallow depth-of-field and placement of the camera close to the window. Instead, we present a post-capture image processing solution that can remove localized rain and dirt artifacts from a single image. We collect a dataset of clean/corrupted image pairs which are then used to train a specialized form of convolutional neural network. This learns how to map corrupted image patches to clean ones, implicitly capturing the characteristic appearance of dirt and water droplets in natural images. Our models demonstrate effective removal of dirt and rain in outdoor test conditions.",
"Given a grayscale photograph as input, this paper attacks the problem of hallucinating a plausible color version of the photograph. This problem is clearly underconstrained, so previous approaches have either relied on significant user interaction or resulted in desaturated colorizations. We propose a fully automatic approach that produces vibrant and realistic colorizations. We embrace the underlying uncertainty of the problem by posing it as a classification task and use class-rebalancing at training time to increase the diversity of colors in the result. The system is implemented as a feed-forward pass in a CNN at test time and is trained on over a million color images. We evaluate our algorithm using a “colorization Turing test,” asking human participants to choose between a generated and ground truth color image. Our method successfully fools humans on 32% of the trials, significantly higher than previous methods. Moreover, we show that colorization can be a powerful pretext task for self-supervised feature learning, acting as a cross-channel encoder. This approach results in state-of-the-art performance on several feature learning benchmarks.",
"We propose a deep Convolutional Neural Networks (CNN) method for natural image matting. Our method takes results of the closed form matting, results of the KNN matting and normalized RGB color images as inputs, and directly learns an end-to-end mapping between the inputs, and reconstructed alpha mattes. We analyze pros and cons of the closed form matting, and the KNN matting in terms of local and nonlocal principle, and show that they are complementary to each other. A major benefit of our method is that it can “recognize” different local image structures, and then combine results of local (closed form matting), and nonlocal (KNN matting) matting effectively to achieve higher quality alpha mattes than both of its inputs. Extensive experiments demonstrate that our proposed deep CNN matting produces visually and quantitatively high-quality alpha mattes. In addition, our method has achieved the highest ranking in the public alpha matting evaluation dataset in terms of the sum of absolute differences, mean squared errors, and gradient errors.",
"Many fundamental image-related problems involve deconvolution operators. Real blur degradation seldom complies with an ideal linear convolution model due to camera noise, saturation, image compression, to name a few. Instead of perfectly modeling outliers, which is rather challenging from a generative model perspective, we develop a deep convolutional neural network to capture the characteristics of degradation. We note directly applying existing deep neural networks does not produce reasonable results. Our solution is to establish the connection between traditional optimization-based schemes and a neural network architecture where a novel, separable structure is introduced as a reliable support for robust deconvolution against artifacts. Our network contains two submodules, both trained in a supervised manner with proper initialization. They yield decent performance on non-blind image deconvolution compared to previous generative-model based methods.",
"Previous work has shown that the difficulties in learning deep generative or discriminative models can be overcome by an initial unsupervised learning step that maps inputs to useful intermediate representations. We introduce and motivate a new training principle for unsupervised learning of a representation based on the idea of making the learned representations robust to partial corruption of the input pattern. This approach can be used to train autoencoders, and these denoising autoencoders can be stacked to initialize deep architectures. The algorithm can be motivated from a manifold learning and information theoretic perspective or from a generative model perspective. Comparative experiments clearly show the surprising advantage of corrupting the input of autoencoders on a pattern classification benchmark suite."
]
} |
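The on-demand idea in the rows above (self-generate training instances where the model currently needs them most) can be illustrated with a per-difficulty batch allocator. The proportional-to-loss rule below is one plausible reading of the paper's feedback mechanism, not its exact algorithm, and the level names and loss values are invented for the example.

```python
def allocate_batch(level_losses, batch_size):
    """Split a training batch across corruption difficulty levels in
    proportion to the model's current loss at each level, so that harder
    (worse-performing) levels receive more freshly generated examples.
    Illustrative sketch of an on-demand feedback rule."""
    total = sum(level_losses.values())
    alloc = {lvl: int(batch_size * loss / total)
             for lvl, loss in level_losses.items()}
    # hand rounding remainders to the levels with the highest loss
    leftover = batch_size - sum(alloc.values())
    for lvl, _ in sorted(level_losses.items(), key=lambda kv: -kv[1])[:leftover]:
        alloc[lvl] += 1
    return alloc

# hypothetical validation losses per corruption level after some training
alloc = allocate_batch({"light": 1.0, "medium": 3.0, "heavy": 6.0}, batch_size=10)
# → {"light": 1, "medium": 3, "heavy": 6}
```

Each training round would then synthesize `alloc[lvl]` corrupted examples at each level; since corruption is applied synthetically, the ground-truth clean image is always available at no labeling cost.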
1612.01380 | 2952781046 | While machine learning approaches to image restoration offer great promise, current methods risk training models fixated on performing well only for image corruption of a particular level of difficulty---such as a certain level of noise or blur. First, we examine the weakness of conventional "fixated" models and demonstrate that training general models to handle arbitrary levels of corruption is indeed non-trivial. Then, we propose an on-demand learning algorithm for training image restoration models with deep convolutional neural networks. The main idea is to exploit a feedback mechanism to self-generate training instances where they are needed most, thereby learning models that can generalize across difficulty levels. On four restoration tasks---image inpainting, pixel interpolation, image deblurring, and image denoising---and three diverse datasets, our approach consistently outperforms both the status quo training procedure and curriculum learning alternatives. | The fixation problem is also observed in recent denoising work, e.g., @cite_18 @cite_42 , but without a dedicated and general solution. Burger al @cite_18 attempt to train a network on patches corrupted by noise with different noise levels by giving the noise hyper-parameter as an additional input to the network. While the model can better denoise images at different noise levels, assuming the noise level is known at test time is problematic. Recently, Mao al @cite_42 explore how the large capacity of a very deep network can help generalize across noise levels, but accuracy still declines noticeably from the fixated counterpart. | {
"cite_N": [
"@cite_18",
"@cite_42"
],
"mid": [
"2037642501",
"2964046669"
],
"abstract": [
"Image denoising can be described as the problem of mapping from a noisy image to a noise-free image. The best currently available denoising methods approximate this mapping with cleverly engineered algorithms. In this work we attempt to learn this mapping directly with a plain multi layer perceptron (MLP) applied to image patches. While this has been done before, we will show that by training on large image databases we are able to compete with the current state-of-the-art image denoising methods. Furthermore, our approach is easily adapted to less extensively studied types of noise (by merely exchanging the training data), for which we achieve excellent results as well.",
"In this paper, we propose a very deep fully convolutional encoding-decoding framework for image restoration such as denoising and super-resolution. The network is composed of multiple layers of convolution and deconvolution operators, learning end-to-end mappings from corrupted images to the original ones. The convolutional layers act as the feature extractor, which capture the abstraction of image contents while eliminating noises corruptions. Deconvolutional layers are then used to recover the image details. We propose to symmetrically link convolutional and deconvolutional layers with skip-layer connections, with which the training converges much faster and attains a higher-quality local optimum. First, the skip connections allow the signal to be back-propagated to bottom layers directly, and thus tackles the problem of gradient vanishing, making training deep networks easier and achieving restoration performance gains consequently. Second, these skip connections pass image details from convolutional layers to deconvolutional layers, which is beneficial in recovering the original image. Significantly, with the large capacity, we can handle different levels of noises using a single model. Experimental results show that our network achieves better performance than recent state-of-the-art methods."
]
} |
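The workaround attributed to Burger et al. above (feeding the noise hyper-parameter to the network as an additional input) can be sketched as input conditioning. The channel-stacking below is an illustrative guess at one way to wire such conditioning, not the paper's architecture, and it inherits the stated drawback that sigma must be known at test time.

```python
import numpy as np

def conditioned_input(noisy_patch, sigma):
    """Stack the (assumed known) noise level sigma onto a 2-D patch as a
    constant extra channel, so a single network can be told which noise
    level it is currently denoising. Illustrative sketch only."""
    h, w = noisy_patch.shape
    return np.stack([noisy_patch, np.full((h, w), sigma)], axis=0)

patch = np.random.default_rng(0).standard_normal((8, 8))
x = conditioned_input(patch, sigma=0.3)
# x has shape (2, 8, 8): the noisy patch plus a constant sigma channel
```

A network trained on such inputs can share weights across noise levels, but, as the paragraph above notes, a model conditioned this way still needs the true sigma at deployment, which on-demand training avoids.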
1612.01380 | 2952781046 | While machine learning approaches to image restoration offer great promise, current methods risk training models fixated on performing well only for image corruption of a particular level of difficulty---such as a certain level of noise or blur. First, we examine the weakness of conventional "fixated" models and demonstrate that training general models to handle arbitrary levels of corruption is indeed non-trivial. Then, we propose an on-demand learning algorithm for training image restoration models with deep convolutional neural networks. The main idea is to exploit a feedback mechanism to self-generate training instances where they are needed most, thereby learning models that can generalize across difficulty levels. On four restoration tasks---image inpainting, pixel interpolation, image deblurring, and image denoising---and three diverse datasets, our approach consistently outperforms both the status quo training procedure and curriculum learning alternatives. | Active learning is another way for a learner to steer its own learning. Active learning selects examples that seem most valuable for human labeling, and has been widely used in computer vision to mitigate manual annotation costs @cite_49 @cite_5 @cite_8 @cite_38 @cite_44 @cite_6 @cite_9 @cite_36 . Unlike active learning, our approach uses no human annotation, but instead actively synthesizes training instances of different corruption levels based on the progress of training. All our training data can be obtained for free" and the ground-truth (original uncorrupted image) is always available. | {
"cite_N": [
"@cite_38",
"@cite_8",
"@cite_36",
"@cite_9",
"@cite_6",
"@cite_44",
"@cite_49",
"@cite_5"
],
"mid": [
"2026566343",
"2167828456",
"2520049838",
"1932070659",
"2262342046",
"",
"2027248184",
"2012878613"
],
"abstract": [
"Active learning and crowdsourcing are promising ways to efficiently build up training sets for object recognition, but thus far techniques are tested in artificially controlled settings. Typically the vision researcher has already determined the dataset's scope, the labels \"actively\" obtained are in fact already known, and or the crowd-sourced collection process is iteratively fine-tuned. We present an approach for live learning of object detectors, in which the system autonomously refines its models by actively requesting crowd-sourced annotations on images crawled from the Web. To address the technical issues such a large-scale system entails, we introduce a novel part-based detector amenable to linear classifiers, and show how to identify its most uncertain instances in sub-linear time with a hashing-based solution. We demonstrate the approach with experiments of unprecedented scale and autonomy, and show it successfully improves the state-of-the-art for the most challenging objects in the PASCAL VOC benchmark. In addition, we show our detector competes well with popular nonlinear classifiers that are much more expensive to train.",
"In many image/video/web classification problems, we have access to a large number of unlabeled samples. However, it is typically expensive and time consuming to obtain labels for the samples. Active learning is the problem of progressively selecting and annotating the most informative unlabeled samples, in order to obtain a high classification performance. Most existing active learning algorithms select only one sample at a time prior to retraining the classifier. Hence, they are computationally expensive and cannot take advantage of parallel labeling systems such as Mechanical Turk. On the other hand, algorithms that allow the selection of multiple samples prior to retraining the classifier, may select samples that have significant information overlap or they involve solving a non-convex optimization. More importantly, the majority of active learning algorithms are developed for a certain classifier type such as SVM. In this paper, we develop an efficient active learning framework based on convex programming, which can select multiple samples at a time for annotation. Unlike the state of the art, our algorithm can be used in conjunction with any type of classifiers, including those of the family of the recently proposed Sparse Representation-based Classification (SRC). We use the two principles of classifier uncertainty and sample diversity in order to guide the optimization program towards selecting the most informative unlabeled samples, which have the least information overlap. Our method can incorporate the data distribution in the selection process by using the appropriate dissimilarity between pairs of samples. We show the effectiveness of our framework in person detection, scene categorization and face recognition on real-world datasets.",
"Multi-label learning is a challenging problem in computer vision field. In this paper, we propose a novel active learning approach to reduce the annotation costs greatly for multi-label classification. State-of-the-art active learning methods either annotate all the relevant samples without diagnosing discriminative information in the labels or annotate only limited discriminative samples manually, that has weak immunity for the outlier labels. To overcome these problems, we propose a multi-label active learning method based on Maximum Correntropy Criterion (MCC) by merging uncertainty and representativeness. We use the the labels of labeled data and the prediction labels of unknown data to enhance the uncertainty and representativeness measurement by merging strategy, and use the MCC to alleviate the influence of outlier labels for discriminative labeling. Experiments on several challenging benchmark multi-label datasets show the superior performance of our proposed method to the state-of-the-art methods.",
"Current visual recognition algorithms are “hungry” for data but massive annotation is extremely costly. Therefore, active learning algorithms are required that reduce labeling efforts to a minimum by selecting examples that are most valuable for labeling. In active learning, all categories occurring in collected data are usually assumed to be known in advance and experts should be able to label every requested instance. But do these assumptions really hold in practice? Could you name all categories in every image?",
"In this paper, we introduce a new general strategy for active learning. The key idea of our approach is to measure the expected change of model outputs, a concept that generalizes previous methods based on expected model change and incorporates the underlying data distribution. For each example of an unlabeled set, the expected change of model predictions is calculated and marginalized over the unknown label. This results in a score for each unlabeled example that can be used for active learning with a broad range of models and learning algorithms. In particular, we show how to derive very efficient active learning methods for Gaussian process regression, which implement this general strategy, and link them to previous methods. We analyze our algorithms and compare them to a broad range of previous active learning strategies in experiments showing that they outperform state-of-the-art on well-established benchmark datasets in the area of visual object recognition.",
"",
"Discriminative methods for visual object category recognition are typically non-probabilistic, predicting class labels but not directly providing an estimate of uncertainty. Gaussian Processes (GPs) provide a framework for deriving regression techniques with explicit uncertainty models; we show here how Gaussian Processes with covariance functions defined based on a Pyramid Match Kernel (PMK) can be used for probabilistic object category recognition. Our probabilistic formulation provides a principled way to learn hyperparameters, which we utilize to learn an optimal combination of multiple covariance functions. It also offers confidence estimates at test points, and naturally allows for an active learning paradigm in which points are optimally selected for interactive labeling. We show that with an appropriate combination of kernels a significant boost in classification performance is possible. Further, our experiments indicate the utility of active learning with probabilistic predictive models, especially when the amount of training data labels that may be sought for a category is ultimately very small.",
"Active learning reduces the labeling cost by iteratively selecting the most valuable data to query their labels. It has attracted a lot of interests given the abundance of unlabeled data and the high cost of labeling. Most active learning approaches select either informative or representative unlabeled instances to query their labels, which could significantly limit their performance. Although several active learning algorithms were proposed to combine the two query selection criteria, they are usually ad hoc in finding unlabeled instances that are both informative and representative. We address this limitation by developing a principled approach, termed QUIRE, based on the min-max view of active learning. The proposed approach provides a systematic way for measuring and combining the informativeness and representativeness of an unlabeled instance. Further, by incorporating the correlation among labels, we extend the QUIRE approach to multi-label learning by actively querying instance-label pairs. Extensive experimental results show that the proposed QUIRE approach outperforms several state-of-the-art active learning approaches in both single-label and multi-label learning."
]
} |
1612.01450 | 2951672944 | Human creativity is the ultimate driving force behind scientific progress. While the building blocks of innovations are often embodied in existing knowledge, it is creativity that blends seemingly disparate ideas. Existing studies have made striding advances in quantifying creativity of scientific publications by investigating their citation relationships. Yet, little is known hitherto about the underlying mechanisms governing scientific creative processes, largely because a paper's references, at best, only partially reflect its authors' actual information consumption. This work represents an initial step towards fine-grained understanding of creative processes in scientific enterprise. Specifically, using two web-scale longitudinal datasets (120.1 million papers and 53.5 billion web requests spanning 4 years), we directly contrast authors' information consumption behaviors against their knowledge products. We find that, for 59.0% of papers across all scientific fields, 25.7% of their creativity can be readily explained by information consumed by their authors. Further, by leveraging these findings, we develop a predictive framework that accurately identifies the most critical knowledge for fostering target scientific innovations. We believe that our framework is of fundamental importance to the study of scientific creativity. It promotes strategies to stimulate and potentially automate creative processes, and provides insights towards more effective designs of information recommendation platforms. | Another use of papers' reference relationships is to create citation-based maps of science or knowledge flow maps @cite_2 @cite_23 , which help categorize science and understand papers' trans-disciplinary impact. However, these insights do not help explain creativity of individual scientific work. | {
"cite_N": [
"@cite_23",
"@cite_2"
],
"mid": [
"2283604078",
"2164998314"
],
"abstract": [
"In recent years scholars have built maps of science by connecting the academic fields that cite each other, are cited together, or that cite a similar literature. But since scholars cannot always publish in the fields they cite, or that cite them, these science maps are only rough proxies for the potential of a scholar, organization, or country, to enter a new academic field. Here we use a large dataset of scholarly publications disambiguated at the individual level to create a map of science-or research space-where links connect pairs of fields based on the probability that an individual has published in both of them. We find that the research space is a significantly more accurate predictor of the fields that individuals and organizations will enter in the future than citation based science maps. At the country level, however, the research space and citations based science maps are equally accurate. These findings show that data on career trajectories-the set of fields that individuals have previously published in-provide more accurate predictors of future research output for more focalized units-such as individuals or organizations-than citation based science maps.",
"To comprehend the multipartite organization of large-scale biological and social systems, we introduce an information theoretic approach that reveals community structure in weighted and directed networks. We use the probability flow of random walks on a network as a proxy for information flows in the real system and decompose the network into modules by compressing a description of the probability flow. The result is a map that both simplifies and highlights the regularities in the structure and their relationships. We illustrate the method by making a map of scientific communication as captured in the citation patterns of >6,000 journals. We discover a multicentric organization with fields that vary dramatically in size and degree of integration into the network of science. Along the backbone of the network—including physics, chemistry, molecular biology, and medicine—information flows bidirectionally, but the map reveals a directional pattern of citation from the applied fields to the basic sciences."
]
} |
1612.01511 | 2559782019 | Measuring centrality in a social network, especially in bipartite mode, poses many challenges, for example, the requirement of full knowledge of the network topology, and the lack of properly detecting top-k behavioral representative users. To overcome the above-mentioned challenges, we propose HellRank, an accurate centrality measure for identifying central nodes in bipartite social networks. HellRank is based on the Hellinger distance between two nodes on the same side of a bipartite network. We theoretically analyze the impact of this distance on a bipartite network and find upper and lower bounds for it. The computation of the HellRank centrality measure can be distributed, by letting each node use only local information on its immediate neighbors. Consequently, one does not need a central entity that has full knowledge of the network topological structure. We experimentally evaluate the performance of the HellRank measure in correlation with other centrality measures on real-world networks. The results show partial ranking similarity between the HellRank and the other conventional metrics according to the Kendall and Spearman rank correlation coefficient. | However, most existing work focused on studying the network topology and ignored the topic information. @cite_6 aimed to find representative users from the information spreading perspective and @cite_9 studied the network sampling problem in the dynamic environment. @cite_55 presented a sampling-based algorithm to efficiently explore a user's ego network and to quickly approximate quantities of interest. @cite_27 focused on the use of the social structure of the user community, user profiles and previous behaviors, as an additional source of information in building recommender systems. @cite_10 presented a formal definition of the problem of sampling representative users from a social network. | {
"cite_N": [
"@cite_55",
"@cite_9",
"@cite_6",
"@cite_27",
"@cite_10"
],
"mid": [
"2052618365",
"2963316155",
"146500857",
"1768902233",
"592330355"
],
"abstract": [
"As online social networking emerges, there has been increased interest to utilize the underlying network structure as well as the available information on social peers to improve the information needs of a user. In this paper, we focus on improving the performance of information collection from the neighborhood of a user in a dynamic social network. We introduce sampling-based algorithms to efficiently explore a user's social network respecting its structure and to quickly approximate quantities of interest. We introduce and analyze variants of the basic sampling scheme exploring correlations across our samples. Models of centralized and distributed social networks are considered. We show that our algorithms can be utilized to rank items in the neighborhood of a user, assuming that information for each user in the network is available. Using real and synthetic data sets, we validate the results of our analysis and demonstrate the efficiency of our algorithms in approximating quantities of interest. The methods we describe are general and can probably be easily adopted in a variety of strategies aiming to efficiently collect information from a social graph.",
"Network sampling is integral to the analysis of social, information, and biological networks. Since many real-world networks are massive in size, continuously evolving, and/or distributed in nature, the network structure is often sampled in order to facilitate study. For these reasons, a more thorough and complete understanding of network sampling is critical to support the field of network science. In this paper, we outline a framework for the general problem of network sampling by highlighting the different objectives, population and units of interest, and classes of network sampling methods. In addition, we propose a spectrum of computational models for network sampling methods, ranging from the traditionally studied model based on the assumption of a static domain to a more challenging model that is appropriate for streaming domains. We design a family of sampling methods based on the concept of graph induction that generalize across the full spectrum of computational models (from static to streaming) while efficiently preserving many of the topological properties of the input graphs. Furthermore, we demonstrate how traditional static sampling algorithms can be modified for graph streams for each of the three main classes of sampling methods: node, edge, and topology-based sampling. Experimental results indicate that our proposed family of sampling methods more accurately preserve the underlying properties of the graph in both static and streaming domains. Finally, we study the impact of network sampling algorithms on the parameter estimation and performance evaluation of relational classification algorithms.",
"We study the problem of identifying representative users in social networks from an information spreading perspective. While traditional network measures such as node degree and PageRank have been shown to work well for selecting seed users, the resulting nodes often have high neighbour overlap and thus are not optimal in terms of maximising spreading coverage. In this paper we extend a recently proposed statistical learning approach called skeleton learning (SKE) to graph datasets. The idea is to associate each node with a random representative node through Bayesian inference. By doing so, a prior distribution defined over the graph nodes emerges where representatives with high probabilities lie in key positions and are mutually exclusive, reducing neighbour overlap. Evaluation with information diffusion experiments on real scientific collaboration networks shows that seeds selected using SKE are more effective spreaders compared with those selected with traditional ranking algorithms and a state-of-the-art degree discount heuristic.",
"We present a hybrid method for an expert recommendation system that integrates the characteristics of content-based recommendation algorithms into a social network-based collaborative filtering system. Our method aims at improving the accuracy of the recommendation prediction by considering the social aspect of experts' behaviors. For this purpose, social communities of experts are first detected by applying social network analysis and using factors such as experience, background, knowledge level, and personal preferences of experts. Representative members of communities are then identified using a network centrality measure. Finally, a recommendation is made to relate an information item, for which a user is seeking an expert, to the representatives of the most relevant community. Further from an expert's perspective, she/he has been suggested to work on relevant information items that fall under her/his expertise and interests.",
"Finding a subset of users to statistically represent the original social network is a fundamental issue in Social Network Analysis (SNA). The problem has not been extensively studied in existing literature. In this paper, we present a formal definition of the problem of sampling representative users from social network. We propose two sampling models and theoretically prove their NP-hardness. To efficiently solve the two models, we present an efficient algorithm with provable approximation guarantees. Experimental results on two datasets show that the proposed models for sampling representative users significantly outperform (+6 -23 in terms of Precision@100) several alternative methods using authority or structure information only. The proposed algorithms are also effective in terms of time complexity. Only a few seconds are needed to sampling 300 representative users from a network of 100,000 users. All data and codes are publicly available."
]
} |
1612.01511 | 2559782019 | Measuring centrality in a social network, especially in bipartite mode, poses many challenges, for example, the requirement of full knowledge of the network topology, and the lack of properly detecting top-k behavioral representative users. To overcome the above-mentioned challenges, we propose HellRank, an accurate centrality measure for identifying central nodes in bipartite social networks. HellRank is based on the Hellinger distance between two nodes on the same side of a bipartite network. We theoretically analyze the impact of this distance on a bipartite network and find upper and lower bounds for it. The computation of the HellRank centrality measure can be distributed, by letting each node use only local information on its immediate neighbors. Consequently, one does not need a central entity that has full knowledge of the network topological structure. We experimentally evaluate the performance of the HellRank measure in correlation with other centrality measures on real-world networks. The results show partial ranking similarity between the HellRank and the other conventional metrics according to the Kendall and Spearman rank correlation coefficient. | @cite_57 studied how to infer social probabilities of influence by developing an algorithm to scan over the log of actions of social network users using real data. @cite_14 @cite_17 focused on the influence maximization problem to model the social influence on large networks. TwitterRank, as an extension of the PageRank metric, was proposed by @cite_60 to identify influential users in Twitter. @cite_83 used the Susceptible-Infected-Recovered (SIR) model to examine the spreading influence of the nodes ranked by different influence measures. @cite_23 identified influencers using joint influence powers through an influence network. @cite_15 identified influential users by using user trust networks. 
@cite_32 proposed the weighted LeaderRank technique by replacing the standard random walk with a biased random walk. @cite_29 presented a novel analysis on the statistical simplex as a manifold with boundary and applied the proposed technique to social network analysis to rank a subset of influencer nodes. @cite_40 proposed a new approach to incorporate users' reply relationship, conversation content and response immediacy to identify influential users of an online health care community. @cite_49 used multi-attribute and homophily characteristics in a new method to identify influential nodes in complex networks. | {
"cite_N": [
"@cite_14",
"@cite_60",
"@cite_29",
"@cite_32",
"@cite_57",
"@cite_40",
"@cite_83",
"@cite_23",
"@cite_49",
"@cite_15",
"@cite_17"
],
"mid": [
"1512602432",
"2076219102",
"270428378",
"1971941321",
"2073926352",
"2055943171",
"2113330929",
"2091800279",
"2083842271",
"2019610697",
"2107559689"
],
"abstract": [
"Social networks often serve as a medium for the diffusion of ideas or innovations. An individual's decision whether to adopt a product or innovation will be highly dependent on the choices made by the individual's peers or neighbors in the social network. In this work, we study the game of innovation diffusion with multiple competing innovations such as when multiple companies market competing products using viral marketing. Our first contribution is a natural and mathematically tractable model for the diffusion of multiple innovations in a network. We give a (1-1 e) approximation algorithm for computing the best response to an opponent's strategy, and prove that the \"price of competition\" of this game is at most 2. We also discuss \"first mover\" strategies which try to maximize the expected diffusion against perfect competition. Finally, we give an FPTAS for the problem of maximizing the influence of a single player when the underlying graph is a tree.",
"This paper focuses on the problem of identifying influential users of micro-blogging services. Twitter, one of the most notable micro-blogging services, employs a social-networking model called \"following\", in which each user can choose who she wants to \"follow\" to receive tweets from without requiring the latter to give permission first. In a dataset prepared for this study, it is observed that (1) 72.4% of the users in Twitter follow more than 80% of their followers, and (2) 80.5% of the users have 80% of users they are following follow them back. Our study reveals that the presence of \"reciprocity\" can be explained by the phenomenon of homophily. Based on this finding, TwitterRank, an extension of PageRank algorithm, is proposed to measure the influence of users in Twitter. TwitterRank measures the influence taking both the topical similarity between users and the link structure into account. Experimental results show that TwitterRank outperforms the one Twitter currently uses and other related algorithms, including the original PageRank and Topic-sensitive PageRank.",
"Sparsity in R^m has been widely explored in machine learning. We study sparsity on a statistical simplex consisting of all categorical distributions. This is different from the case in R^m because such a simplex is a Riemannian manifold, a curved space. A learner with sparse constraints should be likely to fall to its low-dimensional boundaries. We present a novel analysis on the statistical simplex as a manifold with boundary. The main contribution is an explicit view of the learning dynamics in between high-dimensional models in the interior of the simplex and low-dimensional models on its boundaries. We prove the differentiability of the cost function, the natural gradient with respect to the Riemannian structure, and convexity around the singular regions. We uncover an interesting relationship with L1 regularization. We apply the proposed technique to social network analysis. Given a directed graph, the task is to rank a subset of influencer nodes. Here, sparsity means that the top-ranked nodes should present diversity in the sense of minimizing influence overlap. We present a ranking algorithm based on the natural gradient. It can scale up to graph datasets with millions of nodes. On real large networks, the top-ranked nodes are the most informative among several commonly-used techniques.",
"Identifying influential spreaders is crucial for understanding and controlling spreading processes on social networks. Via assigning degree-dependent weights onto links associated with the ground node, we proposed a variant to a recent ranking algorithm named LeaderRank (, 2011). According to the simulations on the standard SIR model, the weighted LeaderRank performs better than LeaderRank in three aspects: (i) the ability to find out more influential spreaders; (ii) the higher tolerance to noisy data; and (iii) the higher robustness to intentional attacks.",
"Recently, there has been tremendous interest in the phenomenon of influence propagation in social networks. The studies in this area assume they have as input to their problems a social graph with edges labeled with probabilities of influence between users. However, the question of where these probabilities come from or how they can be computed from real social network data has been largely ignored until now. Thus it is interesting to ask whether from a social graph and a log of actions by its users, one can build models of influence. This is the main problem attacked in this paper. In addition to proposing models and algorithms for learning the model parameters and for testing the learned models to make predictions, we also develop techniques for predicting the time by which a user may be expected to perform an action. We validate our ideas and techniques using the Flickr data set consisting of a social graph with 1.3M nodes, 40M edges, and an action log consisting of 35M tuples referring to 300K distinct actions. Beyond showing that there is genuine influence happening in a real social network, we show that our techniques have excellent prediction performance.",
"Due to the revolutionary development of Web 2.0 technology, individual users have become major contributors of Web content in online social media. In light of the growing activities, how to measure a user's influence to other users in online social media becomes increasingly important. This research need is urgent especially in the online healthcare community since positive influence can be beneficial while negative influence may cause negative impact on other users of the same community. In this article, a research framework was proposed to study user influence within the online healthcare community. We proposed a new approach to incorporate users' reply relationship, conversation content and response immediacy which capture both explicit and implicit interaction between users to identify influential users of online healthcare community. A weighted social network is developed to represent the influence between users. We tested our proposed techniques thoroughly on two medical support forums. Two algorithms UserRank and Weighted in-degree are benchmarked with PageRank and in-degree. Experiment results demonstrated the validity and effectiveness of our proposed approaches.",
"Identifying influential nodes in very large-scale directed networks is a big challenge relevant to disparate applications, such as accelerating information propagation, controlling rumors and diseases, designing search engines, and understanding hierarchical organization of social and biological networks. Known methods range from node centralities, such as degree, closeness and betweenness, to diffusion-based processes, like PageRank and LeaderRank. Some of these methods already take into account the influences of a node’s neighbors but do not directly make use of the interactions among its neighbors. Local clustering is known to have negative impacts on the information spreading. We further show empirically that it also plays a negative role in generating local connections. Inspired by these facts, we propose a local ranking algorithm named ClusterRank, which takes into account not only the number of neighbors and the neighbors’ influences, but also the clustering coefficient. Subject to the susceptible-infected-recovered (SIR) spreading model with constant infectivity, experimental results on two directed networks, a social network extracted from delicious.com and a large-scale short-message communication network, demonstrate that the ClusterRank outperforms some benchmark algorithms such as PageRank and LeaderRank. Furthermore, ClusterRank can also be applied to undirected networks where the superiority of ClusterRank is significant compared with degree centrality and k-core decomposition. In addition, ClusterRank, only making use of local information, is much more efficient than global methods: It takes only 191 seconds for a network with about nodes, more than 15 times faster than PageRank.",
"With the tremendous popularity of social networking sites in this era of Web 2.0, increasingly more users are contributing their comments and opinions about products, people, organizations, and many other entities. These online comments often have direct influence on consumers' buying decisions and the public's impressions of enterprises. As a result, enterprises have begun to explore the feasibility of using social networking sites as platforms to conduct targeted marking and enterprise reputation management for e-commerce and e-business. As indicated from recent marketing research, the joint influential power of a small group of active users could have considerable impact on a large number of consumers' buying decisions and the public's perception of the capabilities of enterprises. This paper illustrates a novel method that can effectively discover the most influential users from social networking sites (SNS). In particular, the general method of mining the influence network from SNS and the computational models of mathematical programming for discovering the user groups with max joint influential power are proposed. The empirical evaluation with real data extracted from social networking sites shows that the proposed method can effectively identify the most influential groups when compared to the benchmark methods. This study opens the door to effectively conducting targeted marketing and enterprise reputation management on social networking sites.",
"In complex networks, identifying influential nodes is the very important part of reliability analysis, which has been a key issue in analyzing the structural organization of a network. In this paper, a new evaluation method of node importance in complex networks based on technique for order performance by similarity to ideal solution (TOPSIS) approach is proposed. TOPSIS as a multiple attribute decision making (MADM) technique has been an important branch of decision making since then. In addition, TOPSIS is first applied to identify influential nodes in a complex network in this open issue. In different types of networks in which the information goes by different ways, we consider several different centrality measures as the multi-attribute of complex network in TOPSIS application. TOPSIS is utilized to aggregate the multi-attribute to obtain the evaluation of node importance of each node. It is not limited to only one centrality measure, but considers different centrality measures, because every centrality measure has its own disadvantage and limitation. Then, we use the Susceptible–Infected (SI) model to evaluate the performance. Numerical examples are given to show the efficiency and practicability of the proposed method.",
"The target of viral marketing on the platform of popular online social networks is to rapidly propagate marketing information at lower cost and increase sales, in which a key problem is how to precisely discover the most influential users in the process of information diffusion. A novel method is proposed in this paper for helping companies to identify such users as seeds to maximize information diffusion in the viral marketing. Firstly, the user trust network oriented to viral marketing and users’ combined interest degree in the network including isolated users are extensively defined. Next, we construct a model considering the time factor to simulate the process of information diffusion in viral marketing and propose a dynamic algorithm description. Finally, experiments are conducted with a real dataset extracted from the famous SNS website Epinions. The experimental results indicate that the proposed algorithm has better scalability and is less time-consuming. Compared with the classical model, the proposed algorithm achieved a better performance than does the classical method on the two aspects of network coverage rate and time-consumption in our four sub-datasets.",
"In large social networks, nodes (users, entities) are influenced by others for various reasons. For example, the colleagues have strong influence on one's work, while the friends have strong influence on one's daily life. How to differentiate the social influences from different angles(topics)? How to quantify the strength of those social influences? How to estimate the model on real large networks? To address these fundamental questions, we propose Topical Affinity Propagation (TAP) to model the topic-level social influence on large networks. In particular, TAP can take results of any topic modeling and the existing network structure to perform topic-level influence propagation. With the help of the influence analysis, we present several important applications on real data sets such as 1) what are the representative nodes on a given topic? 2) how to identify the social influences of neighboring nodes on a particular node? To scale to real large networks, TAP is designed with efficient distributed learning algorithms that is implemented and tested under the Map-Reduce framework. We further present the common characteristics of distributed learning algorithms for Map-Reduce. Finally, we demonstrate the effectiveness and efficiency of TAP on real large data sets."
]
} |
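One of the reference abstracts above describes applying TOPSIS to aggregate several centrality measures into a single node ranking. A minimal sketch of that aggregation step, assuming benefit-type criteria and equal weights (both assumptions, not taken from the dataset):

```python
import numpy as np

def topsis(X, weights):
    """Rank alternatives (rows) by relative closeness to the ideal solution.
    X: decision matrix (alternatives x criteria), all criteria benefit-type."""
    X = np.asarray(X, dtype=float)
    R = X / np.linalg.norm(X, axis=0)            # vector-normalize each criterion
    V = R * weights                              # weighted normalized matrix
    ideal, anti = V.max(axis=0), V.min(axis=0)   # ideal / anti-ideal points
    d_pos = np.linalg.norm(V - ideal, axis=1)    # distance to ideal
    d_neg = np.linalg.norm(V - anti, axis=1)     # distance to anti-ideal
    return d_neg / (d_pos + d_neg)               # relative closeness in [0, 1]

# Toy example: 3 nodes scored on two centrality measures (e.g., degree, betweenness).
X = [[3, 0.5],
     [1, 0.1],
     [2, 0.9]]
scores = topsis(X, weights=np.array([0.5, 0.5]))
best = int(np.argmax(scores))
```

Node 1 is dominated on both criteria, so it coincides with the anti-ideal point and scores 0; node 2 wins on the second criterion and ranks highest here.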
1612.01511 | 2559782019 | Measuring centrality in a social network, especially in bipartite mode, poses many challenges, for example, the requirement of full knowledge of the network topology, and the lack of properly detecting top-k behavioral representative users. To overcome the above mentioned challenges, we propose HellRank, an accurate centrality measure for identifying central nodes in bipartite social networks. HellRank is based on the Hellinger distance between two nodes on the same side of a bipartite network. We theoretically analyze the impact of this distance on a bipartite network and find upper and lower bounds for it. The computation of the HellRank centrality measure can be distributed, by letting each node uses local information only on its immediate neighbors. Consequently, one does not need a central entity that has full knowledge of the network topological structure. We experimentally evaluate the performance of the HellRank measure in correlation with other centrality measures on real-world networks. The results show partial ranking similarity between the HellRank and the other conventional metrics according to the Kendall and Spearman rank correlation coefficient. | In the specific area of identifying influential users in bipartite networks, @cite_61 presented a dynamical model for rewiring in bipartite networks and obtained time-dependent degree distributions. @cite_31 defined a bipartite clustering coefficient by taking differently structured clusters into account, that can find important nodes across communities. The concept of clustering coefficient will be discussed in further detail in the Section . | {
"cite_N": [
"@cite_61",
"@cite_31"
],
"mid": [
"2009102084",
"1980467021"
],
"abstract": [
"We present a dynamical model for rewiring and attachment in bipartite networks. Edges are placed between nodes that belong to catalogs that can either be fixed in size or growing in size. The model is motivated by an empirical study of data from the video rental service Netflix, which invites its users to give ratings to the videos available in its catalog. We find that the distribution of the number of ratings given by users and that of the number of ratings received by videos both follow a power law with an exponential cutoff. We also examine the activity patterns of Netflix users and find bursts of intense video-rating activity followed by long periods of inactivity. We derive ordinary differential equations to model the acquisition of edges by the nodes over time and obtain the corresponding time-dependent degree distributions. We then compare our results with the Netflix data and find good agreement. We conclude with a discussion of how catalog models can be used to study systems in which agents are ...",
"The identification of influential nodes in complex network can be very challenging. If the network has a community structure, centrality measures may fail to identify the complete set of influential nodes, as the hubs and other central nodes of the network may lie inside only one community. Here we define a bipartite clustering coefficient that, by taking differently structured clusters into account, can find important nodes across communities."
]
} |
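HellRank, per the abstract above, is built on the Hellinger distance between nodes. How the per-node distributions are derived is not spelled out in this excerpt, so the sketch below only shows the metric itself, applied to two discrete probability vectors:

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete probability distributions.
    Bounded in [0, 1]; 0 iff p == q, 1 iff the supports are disjoint."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

d_same = hellinger([0.5, 0.5], [0.5, 0.5])   # identical distributions -> 0
d_far = hellinger([1.0, 0.0], [0.0, 1.0])    # disjoint supports -> 1
```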
1612.01480 | 2611462407 | We construct @math kernel, which generalizes the classical Gaussian RBF kernel to the case of incomplete data. We model the uncertainty contained in missing attributes making use of data distribution and associate every point with a conditional probability density function. This allows to embed incomplete data into the function space and to define a kernel between two missing data points based on scalar product in @math . Experiments show that introduced kernel applied to SVM classifier gives better results than other state-of-the-art methods, especially in the case when large number of features is missing. Moreover, it is easy to implement and can be used together with any kernel approaches with no additional modifications. | The most common approach to learning from incomplete data is known as deterministic imputation @cite_11 . In this two-step procedure, the missing features are filled first, and only then a standard classifier is applied to the complete data @cite_18 . Although the imputation-based techniques are easy to use for practitioners, they lead to the loss of information which features were missing and do not take into account the reasons of missingness. To preserve the information of missing attributes, one can use an additional vector of binary flags, indicating which coordinates were missing. | {
"cite_N": [
"@cite_18",
"@cite_11"
],
"mid": [
"2044758663",
"321726205"
],
"abstract": [
"Preface.PART I: OVERVIEW AND BASIC APPROACHES.Introduction.Missing Data in Experiments.Complete-Case and Available-Case Analysis, Including Weighting Methods.Single Imputation Methods.Estimation of Imputation Uncertainty.PART II: LIKELIHOOD-BASED APPROACHES TO THE ANALYSIS OF MISSING DATA.Theory of Inference Based on the Likelihood Function.Methods Based on Factoring the Likelihood, Ignoring the Missing-Data Mechanism.Maximum Likelihood for General Patterns of Missing Data: Introduction and Theory with Ignorable Nonresponse.Large-Sample Inference Based on Maximum Likelihood Estimates.Bayes and Multiple Imputation.PART III: LIKELIHOOD-BASED APPROACHES TO THE ANALYSIS OF MISSING DATA: APPLICATIONS TO SOME COMMON MODELS.Multivariate Normal Examples, Ignoring the Missing-Data Mechanism.Models for Robust Estimation.Models for Partially Classified Contingency Tables, Ignoring the Missing-Data Mechanism.Mixed Normal and Nonnormal Data with Missing Values, Ignoring the Missing-Data Mechanism.Nonignorable Missing-Data Models.References.Author Index.Subject Index.",
"Part 1. A Gentle Introduction to Missing Data. The Concept of Missing Data. The Prevalence of Missing Data. Why Data Might Be Missing. The Impact of Missing Data. What's Missing in the Missing Data Literature? A Cost-Benefit Approach to Missing Data. Missing Data - Not Just for Statisticians Anymore. Part 2. Consequences of Missing Data. Three General Consequences of Missing Data. Consequences of Missing Data on Construct Validity. Consequences of Missing Data on Internal Validity. Consequences on Causal Generalization. Summary. Part 3. Classifying Missing Data. \"The Silence That Betokens\". The Current Classification System: Mechanisms of Missing Data. Expanding the Classification System. Summary. Part 4. Preventing Missing Data by Design. Overall Study Design. Characteristics of the Target Population and the Sample. Data Collection and Measurement. Treatment Implementation. Data Entry Process. Summary. Part 5. Diagnostic Procedures. Traditional Diagnostics. Dummy Coding Missing Data. Numerical Diagnostic Procedures. Graphical Diagnostic Procedures. Summary. Part 6. The Selection of Data Analytic Procedures. Preliminary Steps. Decision Making. Summary. Part 7. Data Deletion Methods for Handling Missing Data. Data Sets. Complete Case Method. Available Case Method. Available Item Method. Individual Growth Curve Analysis. Multisample Analyses. Summary. Part 8. Data Augmentation Procedures. Model-Based Procedures. Markov Chain Monte Carlo. Adjustment Methods. Summary. Part 9. Single Imputation Procedures. Constant Replacement Methods. Random Value Imputation. Nonrandom Value Imputation: Single Condition. Nonrandom Value Imputation: Multiple Conditions. Summary. Part 10. Multiple Imputation. The MI Process. Summary. Part 11. Reporting Missing Data and Results. APA Task Force Recommendations. Missing Data and Study Stages. TFSI Recommendations and Missing Data. Reporting Format. Summary. Part 12. Epilogue."
]
} |
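The related-work passage above describes deterministic imputation plus the trick of appending binary flags so a downstream classifier retains the missingness pattern. A minimal sketch using column-mean imputation (the choice of imputation rule is mine, not taken from the text):

```python
import numpy as np

def impute_with_flags(X):
    """Mean-impute missing entries (NaN) column-wise and append
    binary flags marking which entries were originally missing."""
    X = np.asarray(X, dtype=float)
    flags = np.isnan(X).astype(float)        # 1 where a value was missing
    col_means = np.nanmean(X, axis=0)        # per-feature means over observed values
    filled = np.where(np.isnan(X), col_means, X)
    return np.hstack([filled, flags])        # [imputed features | missing flags]

X = np.array([[1.0, np.nan],
              [3.0, 4.0]])
Z = impute_with_flags(X)                     # NaN filled with 4.0; flag column added
```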
1612.00750 | 2560757445 | Networks have been a general tool for representing, analyzing, and modeling relational data arising in several domains. One of the most important aspect of network analysis is community detection or network clustering. Until recently, the major focus have been on discovering community structure in single (i.e., monoplex) networks. However, with the advent of relational data with multiple modalities, multiplex networks, i.e., networks composed of multiple layers representing different aspects of relations, have emerged. Consequently, community detection in multiplex network, i.e., detecting clusters of nodes shared by all layers, has become a new challenge. In this paper, we propose Network Fusion for Composite Community Extraction (NF-CCE), a new class of algorithms, based on four different non-negative matrix factorization models, capable of extracting composite communities in multiplex networks. Each algorithm works in two steps: first, it finds a non-negative, low-dimensional feature representation of each network layer; then, it fuses the feature representation of layers into a common non-negative, low-dimensional feature representation via collective factorization. The composite clusters are extracted from the common feature representation. We demonstrate the superior performance of our algorithms over the state-of-the-art methods on various types of multiplex networks, including biological, social, economic, citation, phone communication, and brain multiplex networks. | In graph theory, a monoplex network (graph) can be represented as an ordered pair, @math , where @math is a set of @math vertices or nodes, and @math is a set of @math edges or links between the vertices @cite_14 . An adjacency matrix representing a graph @math is denoted by @math , where @math if there is an edge between vertices @math and @math , and @math otherwise. 
Most of the real-world networks that we consider throughout the paper are represented as edge-weighted graphs , @math , where @math assigns real values to edges. In this case, the adjacency matrix instead of being a binary matrix, is a real one i.e., @math , with entries characterizing the strength of association or interaction between the network nodes. | {
"cite_N": [
"@cite_14"
],
"mid": [
"1479863711"
],
"abstract": [
"1. Fundamental Concepts. What Is a Graph? Paths, Cycles, and Trails. Vertex Degrees and Counting. Directed Graphs. 2. Trees and Distance. Basic Properties. Spanning Trees and Enumeration. Optimization and Trees. 3. Matchings and Factors. Matchings and Covers. Algorithms and Applications. Matchings in General Graphs. 4. Connectivity and Paths. Cuts and Connectivity. k-connected Graphs. Network Flow Problems. 5. Coloring of Graphs. Vertex Colorings and Upper Bounds. Structure of k-chromatic Graphs. Enumerative Aspects. 6. Planar Graphs. Embeddings and Euler's Formula. Characterization of Planar Graphs. Parameters of Planarity. 7. Edges and Cycles. Line Graphs and Edge-Coloring. Hamiltonian Cycles. Planarity, Coloring, and Cycles. 8. Additional Topics (Optional). Perfect Graphs. Matroids. Ramsey Theory. More Extremal Problems. Random Graphs. Eigenvalues of Graphs. Appendix A: Mathematical Background. Appendix B: Optimization and Complexity. Appendix C: Hints for Selected Exercises. Appendix D: Glossary of Terms. Appendix E: Supplemental Reading. Appendix F: References. Indices."
]
} |
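The adjacency-matrix definitions above translate directly to code; a small illustrative example with made-up edge weights for a weighted, undirected graph:

```python
import numpy as np

# Adjacency matrix of a small weighted, undirected graph.
# Edges are (u, v, weight); symmetry A[u, v] == A[v, u] encodes undirectedness.
n = 4
edges = [(0, 1, 0.5), (1, 2, 2.0), (2, 3, 1.0)]

A = np.zeros((n, n))
for u, v, w in edges:
    A[u, v] = w
    A[v, u] = w   # undirected: mirror the entry

# In the unweighted case, A would simply be a binary (0/1) matrix.
```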
1612.00750 | 2560757445 | Networks have been a general tool for representing, analyzing, and modeling relational data arising in several domains. One of the most important aspect of network analysis is community detection or network clustering. Until recently, the major focus have been on discovering community structure in single (i.e., monoplex) networks. However, with the advent of relational data with multiple modalities, multiplex networks, i.e., networks composed of multiple layers representing different aspects of relations, have emerged. Consequently, community detection in multiplex network, i.e., detecting clusters of nodes shared by all layers, has become a new challenge. In this paper, we propose Network Fusion for Composite Community Extraction (NF-CCE), a new class of algorithms, based on four different non-negative matrix factorization models, capable of extracting composite communities in multiplex networks. Each algorithm works in two steps: first, it finds a non-negative, low-dimensional feature representation of each network layer; then, it fuses the feature representation of layers into a common non-negative, low-dimensional feature representation via collective factorization. The composite clusters are extracted from the common feature representation. We demonstrate the superior performance of our algorithms over the state-of-the-art methods on various types of multiplex networks, including biological, social, economic, citation, phone communication, and brain multiplex networks. | Although, there is no universally accepted mathematical definition of the community notion in graphs, the probably most commonly accepted definition is the following: a community is a set of nodes in a network that are connected more densely among each other than they are to the rest of the network @cite_0 . 
Hence, the problem of community detection is as follows: given an adjacency matrix @math of one network with @math nodes and @math communities, find the community assignment of all nodes, denoted by @math , where @math if node @math belongs to community @math , and @math otherwise. We consider the case of non-overlapping communities, where a node can belong to only one community, i.e., @math . | {
"cite_N": [
"@cite_0"
],
"mid": [
"1807089695"
],
"abstract": [
"Assessing the navigability of interconnected networks (transporting information, people, or goods) under eventual random failures is of utmost importance to design and protect critical infrastructures. Random walks are a good proxy to determine this navigability, specifically the coverage time of random walks, which is a measure of the dynamical functionality of the network. Here, we introduce the theoretical tools required to describe random walks in interconnected networks accounting for structure and dynamics inherent to real systems. We develop an analytical approach for the covering time of random walks in interconnected networks and compare it with extensive Monte Carlo simulations. Generally speaking, interconnected networks are more resilient to random failures than their individual layers per se, and we are able to quantify this effect. As an application––which we illustrate by considering the public transport of London––we show how the efficiency in exploring the multiplex critically depends on layers’ topology, interconnection strengths, and walk strategy. Our findings are corroborated by data-driven simulations, where the empirical distribution of check-ins and checks-out is considered and passengers travel along fastest paths in a network affected by real disruptions. These findings are fundamental for further development of searching and navigability strategies in real interconnected systems."
]
} |
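The community-assignment matrix defined above (one community per node in the non-overlapping case) is just a one-hot encoding of a label vector:

```python
import numpy as np

def labels_to_assignment(labels, k):
    """One-hot community assignment matrix H: H[i, c] = 1 iff node i is in community c.
    Each row sums to 1, which encodes the non-overlapping constraint."""
    n = len(labels)
    H = np.zeros((n, k), dtype=int)
    H[np.arange(n), labels] = 1
    return H

H = labels_to_assignment(np.array([0, 0, 1, 2]), k=3)
```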
1612.00750 | 2560757445 | Networks have been a general tool for representing, analyzing, and modeling relational data arising in several domains. One of the most important aspect of network analysis is community detection or network clustering. Until recently, the major focus have been on discovering community structure in single (i.e., monoplex) networks. However, with the advent of relational data with multiple modalities, multiplex networks, i.e., networks composed of multiple layers representing different aspects of relations, have emerged. Consequently, community detection in multiplex network, i.e., detecting clusters of nodes shared by all layers, has become a new challenge. In this paper, we propose Network Fusion for Composite Community Extraction (NF-CCE), a new class of algorithms, based on four different non-negative matrix factorization models, capable of extracting composite communities in multiplex networks. Each algorithm works in two steps: first, it finds a non-negative, low-dimensional feature representation of each network layer; then, it fuses the feature representation of layers into a common non-negative, low-dimensional feature representation via collective factorization. The composite clusters are extracted from the common feature representation. We demonstrate the superior performance of our algorithms over the state-of-the-art methods on various types of multiplex networks, including biological, social, economic, citation, phone communication, and brain multiplex networks. | Graph partitioning aims to group nodes into partitions such that the cut size, i.e., the total number of edges between any two partitions, is minimal. Two widely used graph partitioning algorithms that also take into account the size of partitions are Ratio Cut and Normalized Cut @cite_42 . 
Graph partitioning can alternatively be formulated spectrally: the objective is to partition the nodes into communities based on eigenvectors obtained from the eigendecomposition of the graph Laplacian matrix @cite_36 . | {
"cite_N": [
"@cite_36",
"@cite_42"
],
"mid": [
"2132914434",
"2121947440"
],
"abstract": [
"In recent years, spectral clustering has become one of the most popular modern clustering algorithms. It is simple to implement, can be solved efficiently by standard linear algebra software, and very often outperforms traditional clustering algorithms such as the k-means algorithm. On the first glance spectral clustering appears slightly mysterious, and it is not obvious to see why it works at all and what it really does. The goal of this tutorial is to give some intuition on those questions. We describe different graph Laplacians and their basic properties, present the most common spectral clustering algorithms, and derive those algorithms from scratch by several different approaches. Advantages and disadvantages of the different spectral clustering algorithms are discussed.",
"We propose a novel approach for solving the perceptual grouping problem in vision. Rather than focusing on local features and their consistencies in the image data, our approach aims at extracting the global impression of an image. We treat image segmentation as a graph partitioning problem and propose a novel global criterion, the normalized cut, for segmenting the graph. The normalized cut criterion measures both the total dissimilarity between the different groups as well as the total similarity within the groups. We show that an efficient computational technique based on a generalized eigenvalue problem can be used to optimize this criterion. We applied this approach to segmenting static images, as well as motion sequences, and found the results to be very encouraging."
]
} |
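The spectral formulation mentioned above can be sketched with the unnormalized Laplacian: the sign pattern of the Fiedler vector (the eigenvector of the second-smallest eigenvalue) gives a two-way cut. This uses the unnormalized Laplacian for brevity; Ratio Cut and Normalized Cut correspond to the unnormalized and normalized variants, respectively.

```python
import numpy as np

def spectral_bipartition(A):
    """Split a graph into two parts using the sign of the Fiedler vector
    of the unnormalized graph Laplacian L = D - A."""
    D = np.diag(A.sum(axis=1))
    L = D - A                          # unnormalized graph Laplacian
    vals, vecs = np.linalg.eigh(L)     # eigenvalues returned in ascending order
    fiedler = vecs[:, 1]               # eigenvector of the second-smallest eigenvalue
    return (fiedler >= 0).astype(int)  # sign pattern gives the 2-way cut

# Two triangles {0,1,2} and {3,4,5} joined by the single bridge edge (2,3).
A = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1.0
labels = spectral_bipartition(A)       # separates the two triangles
```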
1612.00750 | 2560757445 | Networks have been a general tool for representing, analyzing, and modeling relational data arising in several domains. One of the most important aspect of network analysis is community detection or network clustering. Until recently, the major focus have been on discovering community structure in single (i.e., monoplex) networks. However, with the advent of relational data with multiple modalities, multiplex networks, i.e., networks composed of multiple layers representing different aspects of relations, have emerged. Consequently, community detection in multiplex network, i.e., detecting clusters of nodes shared by all layers, has become a new challenge. In this paper, we propose Network Fusion for Composite Community Extraction (NF-CCE), a new class of algorithms, based on four different non-negative matrix factorization models, capable of extracting composite communities in multiplex networks. Each algorithm works in two steps: first, it finds a non-negative, low-dimensional feature representation of each network layer; then, it fuses the feature representation of layers into a common non-negative, low-dimensional feature representation via collective factorization. The composite clusters are extracted from the common feature representation. We demonstrate the superior performance of our algorithms over the state-of-the-art methods on various types of multiplex networks, including biological, social, economic, citation, phone communication, and brain multiplex networks. | In hierarchical clustering, the goal is to reveal network communities and their hierarchical structure based on a similarity (usually topological) measure computed between pairs of nodes @cite_32 . Modularity optimization algorithms are among the most popular ones. Modularity was designed to measure the strength of a partition of a network into communities. It is defined as the fraction of edges that fall within communities minus the expected fraction if edges were distributed at random @cite_44 @cite_6 . 
Various algorithms have been proposed for modularity optimization, including greedy techniques, simulated annealing, spectral optimization, etc. @cite_16 . | {
"cite_N": [
"@cite_44",
"@cite_16",
"@cite_32",
"@cite_6"
],
"mid": [
"1971421925",
"2127048411",
"1480376833",
"2151936673"
],
"abstract": [
"A number of recent studies have focused on the statistical properties of networked systems such as social networks and the Worldwide Web. Researchers have concentrated particularly on a few properties that seem to be common to many networks: the small-world property, power-law degree distributions, and network transitivity. In this article, we highlight another property that is found in many networks, the property of community structure, in which network nodes are joined together in tightly knit groups, between which there are only looser connections. We propose a method for detecting such communities, built around the idea of using centrality indices to find community boundaries. We test our method on computer-generated and real-world graphs whose community structure is already known and find that the method detects this known structure with high sensitivity and reliability. We also apply the method to two networks whose community structure is not well known—a collaboration network and a food web—and find that it detects significant and informative community divisions in both cases.",
"The modern science of networks has brought significant advances to our understanding of complex systems. One of the most relevant features of graphs representing real systems is community structure, or clustering, i.e. the organization of vertices in clusters, with many edges joining vertices of the same cluster and comparatively few edges joining vertices of different clusters. Such clusters, or communities, can be considered as fairly independent compartments of a graph, playing a similar role like, e.g., the tissues or the organs in the human body. Detecting communities is of great importance in sociology, biology and computer science, disciplines where systems are often represented as graphs. This problem is very hard and not yet satisfactorily solved, despite the huge effort of a large interdisciplinary community of scientists working on it over the past few years. We will attempt a thorough exposition of the topic, from the definition of the main elements of the problem, to the presentation of most methods developed, with a special focus on techniques designed by statistical physicists, from the discussion of crucial issues like the significance of clustering and how methods should be tested and compared against each other, to the description of applications to real networks.",
"",
"Many networks of interest in the sciences, including social networks, computer networks, and metabolic and regulatory networks, are found to divide naturally into communities or modules. The problem of detecting and characterizing this community structure is one of the outstanding issues in the study of networked systems. One highly effective approach is the optimization of the quality function known as “modularity” over the possible divisions of a network. Here I show that the modularity can be expressed in terms of the eigenvectors of a characteristic matrix for the network, which I call the modularity matrix, and that this expression leads to a spectral algorithm for community detection that returns results of demonstrably higher quality than competing methods in shorter running times. I illustrate the method with applications to several published network data sets."
]
} |
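The modularity definition above (within-community edge fraction minus its expectation under a random degree-preserving rewiring) takes only a few lines; for the toy case of two disjoint triangles with the natural split, Q works out to 0.5:

```python
import numpy as np

def modularity(A, labels):
    """Newman-Girvan modularity: fraction of edges within communities minus
    the expected fraction under a random degree-preserving null model."""
    k = A.sum(axis=1)                                 # node degrees
    two_m = A.sum()                                   # 2 * number of edges (binary A)
    same = np.equal.outer(labels, labels)             # same-community indicator
    return ((A - np.outer(k, k) / two_m) * same).sum() / two_m

# Two disjoint triangles: a perfect 2-community split.
A = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    A[u, v] = A[v, u] = 1.0
Q = modularity(A, np.array([0, 0, 0, 1, 1, 1]))       # Q = 0.5 for this split
```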
1612.00750 | 2560757445 | Networks have been a general tool for representing, analyzing, and modeling relational data arising in several domains. One of the most important aspect of network analysis is community detection or network clustering. Until recently, the major focus have been on discovering community structure in single (i.e., monoplex) networks. However, with the advent of relational data with multiple modalities, multiplex networks, i.e., networks composed of multiple layers representing different aspects of relations, have emerged. Consequently, community detection in multiplex network, i.e., detecting clusters of nodes shared by all layers, has become a new challenge. In this paper, we propose Network Fusion for Composite Community Extraction (NF-CCE), a new class of algorithms, based on four different non-negative matrix factorization models, capable of extracting composite communities in multiplex networks. Each algorithm works in two steps: first, it finds a non-negative, low-dimensional feature representation of each network layer; then, it fuses the feature representation of layers into a common non-negative, low-dimensional feature representation via collective factorization. The composite clusters are extracted from the common feature representation. We demonstrate the superior performance of our algorithms over the state-of-the-art methods on various types of multiplex networks, including biological, social, economic, citation, phone communication, and brain multiplex networks. | Statistical inference methods aim at fitting a generative model to the network data based on some hypothesis. The most commonly used statistical inference method for community detection is the stochastic block model, which aims to approximate a given adjacency matrix by a block structure @cite_9 . Each block in the model represents a community. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2102907934"
],
"abstract": [
"Abstract A stochastic model is proposed for social networks in which the actors in a network are partitioned into subgroups called blocks. The model provides a stochastic generalization of the blockmodel. Estimation techniques are developed for the special case of a single relation social network, with blocks specified a priori. An extension of the model allows for tendencies toward reciprocation of ties beyond those explained by the partition. The extended model provides a one degree-of-freedom test of the model. A numerical example from the social network literature is used to illustrate the methods."
]
} |
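A stochastic block model, as described above, generates networks whose adjacency matrices have block structure; a small forward-sampling sketch (community sizes and edge probabilities are illustrative choices):

```python
import numpy as np

def sample_sbm(sizes, P, seed=0):
    """Sample an undirected stochastic block model adjacency matrix.
    sizes: community sizes; P[a, b]: edge probability between blocks a and b."""
    rng = np.random.default_rng(seed)
    labels = np.repeat(np.arange(len(sizes)), sizes)
    n = len(labels)
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):                    # each pair once, no self-loops
            if rng.random() < P[labels[i], labels[j]]:
                A[i, j] = A[j, i] = 1                # undirected edge
    return A, labels

# Assortative case: dense within blocks, sparse between them.
P = np.array([[0.9, 0.05],
              [0.05, 0.9]])
A, labels = sample_sbm([10, 10], P)
```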
1612.00750 | 2560757445 | Networks have been a general tool for representing, analyzing, and modeling relational data arising in several domains. One of the most important aspect of network analysis is community detection or network clustering. Until recently, the major focus have been on discovering community structure in single (i.e., monoplex) networks. However, with the advent of relational data with multiple modalities, multiplex networks, i.e., networks composed of multiple layers representing different aspects of relations, have emerged. Consequently, community detection in multiplex network, i.e., detecting clusters of nodes shared by all layers, has become a new challenge. In this paper, we propose Network Fusion for Composite Community Extraction (NF-CCE), a new class of algorithms, based on four different non-negative matrix factorization models, capable of extracting composite communities in multiplex networks. Each algorithm works in two steps: first, it finds a non-negative, low-dimensional feature representation of each network layer; then, it fuses the feature representation of layers into a common non-negative, low-dimensional feature representation via collective factorization. The composite clusters are extracted from the common feature representation. We demonstrate the superior performance of our algorithms over the state-of-the-art methods on various types of multiplex networks, including biological, social, economic, citation, phone communication, and brain multiplex networks. | More recently, graph clustering methods that rely on non-negative matrix factorization (NMF) @cite_13 have been proposed, e.g., @cite_19 @cite_1 . Their goal is to approximate a symmetric adjacency matrix of a given network by a product of two non-negative, low-rank matrices that have a clustering interpretation, i.e., they can be used for assigning nodes to communities. 
The methods proposed here follow this line of research but, as opposed to the existing methods @cite_19 @cite_1 , NF-CCE can effectively handle multiplex networks. | {
"cite_N": [
"@cite_1",
"@cite_19",
"@cite_13"
],
"mid": [
"",
"1504886279",
"2135029798"
],
"abstract": [
"",
"Nonnegative matrix factorization (NMF) provides a lower rank approximation of a nonnegative matrix, and has been successfully used as a clustering method. In this paper, we offer some conceptual understanding for the capabilities and shortcomings of NMF as a clustering method. Then, we propose Symmetric NMF (SymNMF) as a general framework for graph clustering, which inherits the advantages of NMF by enforcing nonnegativity on the clustering assignment matrix. Unlike NMF, however, SymNMF is based on a similarity measure between data points, and factorizes a symmetric matrix containing pairwise similarity values (not necessarily nonnegative). We compare SymNMF with the widely-used spectral clustering methods, and give an intuitive explanation of why SymNMF captures the cluster structure embedded in the graph representation more naturally. In addition, we develop a Newton-like algorithm that exploits second-order information efficiently, so as to show the feasibility of SymNMF as a practical framework for graph clustering. Our experiments on artificial graph data, text data, and image data demonstrate the substantially enhanced clustering quality of SymNMF over spectral clustering and NMF. Therefore, SymNMF is able to achieve better clustering results on both linear and nonlinear manifolds, and serves as a potential basis for many extensions",
"Non-negative matrix factorization (NMF) has previously been shown to be a useful decomposition for multivariate data. Two different multiplicative algorithms for NMF are analyzed. They differ only slightly in the multiplicative factor used in the update rules. One algorithm can be shown to minimize the conventional least squares error while the other minimizes the generalized Kullback-Leibler divergence. The monotonic convergence of both algorithms can be proven using an auxiliary function analogous to that used for proving convergence of the Expectation-Maximization algorithm. The algorithms can also be interpreted as diagonally rescaled gradient descent, where the rescaling factor is optimally chosen to ensure convergence."
]
} |
1612.00750 | 2560757445 | Networks have been a general tool for representing, analyzing, and modeling relational data arising in several domains. One of the most important aspects of network analysis is community detection, or network clustering. Until recently, the major focus has been on discovering community structure in single (i.e., monoplex) networks. However, with the advent of relational data with multiple modalities, multiplex networks, i.e., networks composed of multiple layers representing different aspects of relations, have emerged. Consequently, community detection in multiplex networks, i.e., detecting clusters of nodes shared by all layers, has become a new challenge. In this paper, we propose Network Fusion for Composite Community Extraction (NF-CCE), a new class of algorithms, based on four different non-negative matrix factorization models, capable of extracting composite communities in multiplex networks. Each algorithm works in two steps: first, it finds a non-negative, low-dimensional feature representation of each network layer; then, it fuses the feature representations of the layers into a common non-negative, low-dimensional feature representation via collective factorization. The composite clusters are extracted from the common feature representation. We demonstrate the superior performance of our algorithms over the state-of-the-art methods on various types of multiplex networks, including biological, social, economic, citation, phone communication, and brain multiplex networks. | A multiplex network is a set of @math monoplex networks (or layers), @math , for @math . The number of nodes in each layer is the same, @math , while the connectivity pattern and the distribution of links in each layer differ, @math (see Fig. ). Similarly to monoplex networks, we consider the case where each layer represents a weighted, undirected graph, i.e., @math . 
A multiplex network can be represented as a set of adjacency matrices encoding the connectivity patterns of the individual layers, @math , for @math . The goal of community detection in multiplex networks is to infer a shared, latent community assignment that best fits all the given layers. Given that each layer contains incomplete and complementary information, this process of finding shared communities by integrating information from all layers is also known in the literature as network integration (fusion) @cite_11 @cite_35 . | {
"cite_N": [
"@cite_35",
"@cite_11"
],
"mid": [
"2095711515",
"1984478556"
],
"abstract": [
"With the pervasive availability of Web 2.0 and social networking sites, people can interact with each other easily through various social media. For instance, popular sites like Del.icio.us, Flickr, and YouTube allow users to comment shared content (bookmark, photos, videos), and users can tag their own favorite content. Users can also connect to each other, and subscribe to or become a fan or a follower of others. These diverse individual activities result in a multi-dimensional network among actors, forming cross-dimension group structures with group members sharing certain similarities. It is challenging to effectively integrate the network information of multiple dimensions in order to discover cross-dimension group structures. In this work, we propose a two-phase strategy to identify the hidden structures shared across dimensions in multi-dimensional networks. We extract structural features from each dimension of the network via modularity analysis, and then integrate them all to find out a robust community structure among actors. Experiments on synthetic and real-world data validate the superiority of our strategy, enabling the analysis of collective behavior underneath diverse individual activities in a large scale.",
"Network has been a general tool for studying the complex interactions between different genes, proteins, and other small molecules. Module as a fundamental property of many biological networks has been widely studied and many computational methods have been proposed to identify the modules in an individual network. However, in many cases, a single network is insufficient for module analysis due to the noise in the data or the tuning of parameters when building the biological network. The availability of a large amount of biological networks makes network integration study possible. By integrating such networks, more informative modules for some specific disease can be derived from the networks constructed from different tissues, and consistent factors for different diseases can be inferred. In this paper, we have developed an effective method for module identification from multiple networks under different conditions. The problem is formulated as an optimization model, which combines the module identification in each individual network and alignment of the modules from different networks together. An approximation algorithm based on eigenvector computation is proposed. Our method outperforms the existing methods, especially when the underlying modules in multiple networks are different in simulation studies. We also applied our method to two groups of gene coexpression networks for humans, which include one for three different cancers, and one for three tissues from the morbidly obese patients. We identified 13 modules with three complete subgraphs, and 11 modules with two complete subgraphs, respectively. The modules were validated through Gene Ontology enrichment and KEGG pathway enrichment analysis. We also showed that the main functions of most modules for the corresponding disease have been addressed by other researchers, which may provide the theoretical basis for further studying the modules experimentally."
]
} |
1612.00750 | 2560757445 | Networks have been a general tool for representing, analyzing, and modeling relational data arising in several domains. One of the most important aspects of network analysis is community detection, or network clustering. Until recently, the major focus has been on discovering community structure in single (i.e., monoplex) networks. However, with the advent of relational data with multiple modalities, multiplex networks, i.e., networks composed of multiple layers representing different aspects of relations, have emerged. Consequently, community detection in multiplex networks, i.e., detecting clusters of nodes shared by all layers, has become a new challenge. In this paper, we propose Network Fusion for Composite Community Extraction (NF-CCE), a new class of algorithms, based on four different non-negative matrix factorization models, capable of extracting composite communities in multiplex networks. Each algorithm works in two steps: first, it finds a non-negative, low-dimensional feature representation of each network layer; then, it fuses the feature representations of the layers into a common non-negative, low-dimensional feature representation via collective factorization. The composite clusters are extracted from the common feature representation. We demonstrate the superior performance of our algorithms over the state-of-the-art methods on various types of multiplex networks, including biological, social, economic, citation, phone communication, and brain multiplex networks. | Modularity-based approaches generalize the notion of modularity from single-layer to multi-layer networks @cite_22 . Namely, to alleviate the above-mentioned limitations, Principal Modularity Maximization (PMM) @cite_35 has been proposed. 
First, for each layer, PMM extracts structural features by optimizing its modularity, thus significantly denoising each layer; then, it applies PCA to the concatenated matrix of structural feature matrices to find the principal vectors, followed by K-means to perform the clustering assignment. The main drawback of this approach is that it treats the structural feature matrices of all layers on an equal basis (i.e., it is not capable of distinguishing between more and less informative network layers, or complementary layers). Even though noise is properly handled by this method, the complementarity aspect cannot be captured well by the integration step. | {
"cite_N": [
"@cite_35",
"@cite_22"
],
"mid": [
"2095711515",
"2074617510"
],
"abstract": [
"With the pervasive availability of Web 2.0 and social networking sites, people can interact with each other easily through various social media. For instance, popular sites like Del.icio.us, Flickr, and YouTube allow users to comment shared content (bookmark, photos, videos), and users can tag their own favorite content. Users can also connect to each other, and subscribe to or become a fan or a follower of others. These diverse individual activities result in a multi-dimensional network among actors, forming cross-dimension group structures with group members sharing certain similarities. It is challenging to effectively integrate the network information of multiple dimensions in order to discover cross-dimension group structures. In this work, we propose a two-phase strategy to identify the hidden structures shared across dimensions in multi-dimensional networks. We extract structural features from each dimension of the network via modularity analysis, and then integrate them all to find out a robust community structure among actors. Experiments on synthetic and real-world data validate the superiority of our strategy, enabling the analysis of collective behavior underneath diverse individual activities in a large scale.",
"Network science is an interdisciplinary endeavor, with methods and applications drawn from across the natural, social, and information sciences. A prominent problem in network science is the algorithmic detection of tightly connected groups of nodes known as communities. We developed a generalized framework of network quality functions that allowed us to study the community structure of arbitrary multislice networks, which are combinations of individual networks coupled through links that connect each node in one network slice to itself in other slices. This framework allows studies of community structure in a general setting encompassing networks that evolve over time, have multiple types of links (multiplexity), and have multiple scales."
]
} |
1612.00750 | 2560757445 | Networks have been a general tool for representing, analyzing, and modeling relational data arising in several domains. One of the most important aspects of network analysis is community detection, or network clustering. Until recently, the major focus has been on discovering community structure in single (i.e., monoplex) networks. However, with the advent of relational data with multiple modalities, multiplex networks, i.e., networks composed of multiple layers representing different aspects of relations, have emerged. Consequently, community detection in multiplex networks, i.e., detecting clusters of nodes shared by all layers, has become a new challenge. In this paper, we propose Network Fusion for Composite Community Extraction (NF-CCE), a new class of algorithms, based on four different non-negative matrix factorization models, capable of extracting composite communities in multiplex networks. Each algorithm works in two steps: first, it finds a non-negative, low-dimensional feature representation of each network layer; then, it fuses the feature representations of the layers into a common non-negative, low-dimensional feature representation via collective factorization. The composite clusters are extracted from the common feature representation. We demonstrate the superior performance of our algorithms over the state-of-the-art methods on various types of multiplex networks, including biological, social, economic, citation, phone communication, and brain multiplex networks. | Spectral clustering approaches generalize the eigendecomposition from a single Laplacian matrix to multiple Laplacian matrices representing the network layers. One of the state-of-the-art spectral clustering methods for multiplex graphs is Spectral Clustering on Multi-Layer graphs (SC-ML) @cite_37 . First, for each network layer, SC-ML computes a subspace spanned by the principal eigenvectors of its Laplacian matrix. 
Then, by interpreting each subspace as a point on a Grassmann manifold, SC-ML merges the subspaces into a consensus subspace from which the composite clusters are extracted. The biggest drawback of this method is the underlying spectral clustering, which tends to find tight, small-scale and, in some cases, almost trivial communities. For example, SC-ML cannot adequately handle network layers with missing or weak connections, or layers that have disconnected parts. | {
"cite_N": [
"@cite_37"
],
"mid": [
"2059861509"
],
"abstract": [
"Relationships between entities in datasets are often of multiple nature, like geographical distance, social relationships, or common interests among people in a social network, for example. This information can naturally be modeled by a set of weighted and undirected graphs that form a global multi-layer graph, where the common vertex set represents the entities and the edges on different layers capture the similarities of the entities in term of the different modalities. In this paper, we address the problem of analyzing multi-layer graphs and propose methods for clustering the vertices by efficiently merging the information provided by the multiple modalities. To this end, we propose to combine the characteristics of individual graph layers using tools from subspace analysis on a Grassmann manifold. The resulting combination can then be viewed as a low dimensional representation of the original data which preserves the most important information from diverse relationships between entities. As an illustrative application of our framework, we use our algorithm in clustering methods and test its performance on several synthetic and real world datasets where it is shown to be superior to baseline schemes and competitive to state-of-the-art techniques. Our generic framework further extends to numerous analysis and learning problems that involve different types of information on graphs."
]
} |
1612.00750 | 2560757445 | Networks have been a general tool for representing, analyzing, and modeling relational data arising in several domains. One of the most important aspects of network analysis is community detection, or network clustering. Until recently, the major focus has been on discovering community structure in single (i.e., monoplex) networks. However, with the advent of relational data with multiple modalities, multiplex networks, i.e., networks composed of multiple layers representing different aspects of relations, have emerged. Consequently, community detection in multiplex networks, i.e., detecting clusters of nodes shared by all layers, has become a new challenge. In this paper, we propose Network Fusion for Composite Community Extraction (NF-CCE), a new class of algorithms, based on four different non-negative matrix factorization models, capable of extracting composite communities in multiplex networks. Each algorithm works in two steps: first, it finds a non-negative, low-dimensional feature representation of each network layer; then, it fuses the feature representations of the layers into a common non-negative, low-dimensional feature representation via collective factorization. The composite clusters are extracted from the common feature representation. We demonstrate the superior performance of our algorithms over the state-of-the-art methods on various types of multiplex networks, including biological, social, economic, citation, phone communication, and brain multiplex networks. | Matrix and tensor factorization-based approaches utilize collective factorization of the adjacency matrices representing the network layers. A few such matrix and tensor decomposition-based approaches have been proposed so far @cite_7 @cite_54 @cite_56 @cite_4 . 
Tang @cite_7 introduced Linked Matrix Factorization (LMF), which fuses information from multiple network layers by factorizing each adjacency matrix into a layer-specific factor and a factor that is common to all network layers. Dong @cite_54 introduced Spectral Clustering with Generalized Eigendecomposition (SC-GED), which factorizes Laplacian matrices instead of adjacency matrices. Papalexakis @cite_56 proposed GraphFuse, a method for clustering multi-layer networks based on sparse PARAllel FACtor (PARAFAC) decomposition @cite_46 with non-negativity constraints. Cheng introduced Co-regularized Graph Clustering based on NMF (CGC-NMF), which factorizes each adjacency matrix using symmetric NMF while keeping the Euclidean distances between the non-negative low-dimensional representations small. As already pointed out in Section , one of the major limitations of all of these factorization methods is that they treat each network layer on an equal basis and, unlike PMM or SC-ML, for example, they cannot filter out irrelevant information or noise. | {
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_54",
"@cite_56",
"@cite_46"
],
"mid": [
"2019247858",
"2113573459",
"2080161383",
"1655843738",
"2121739212"
],
"abstract": [
"Multi-view graph clustering aims to enhance clustering performance by integrating heterogeneous information collected in different domains. Each domain provides a different view of the data instances. Leveraging cross-domain information has been demonstrated an effective way to achieve better clustering results. Despite the previous success, existing multi-view graph clustering methods usually assume that different views are available for the same set of instances. Thus instances in different domains can be treated as having strict one-to-one relationship. In many real-life applications, however, data instances in one domain may correspond to multiple instances in another domain. Moreover, relationships between instances in different domains may be associated with weights based on prior (partial) knowledge. In this paper, we propose a flexible and robust framework, CGC (Co-regularized Graph Clustering), based on non-negative matrix factorization (NMF), to tackle these challenges. CGC has several advantages over the existing methods. First, it supports many-to-many cross-domain instance relationship. Second, it incorporates weight on cross-domain relationship. Third, it allows partial cross-domain mapping so that graphs in different domains may have different sizes. Finally, it provides users with the extent to which the cross-domain instance relationship violates the in-domain clustering structure, and thus enables users to re-evaluate the consistency of the relationship. Extensive experimental results on UCI benchmark data sets, newsgroup data sets and biological interaction networks demonstrate the effectiveness of our approach.",
"In graph-based learning models, entities are often represented as vertices in an undirected graph with weighted edges describing the relationships between entities. In many real-world applications, however, entities are often associated with relations of different types and or from different sources, which can be well captured by multiple undirected graphs over the same set of vertices. How to exploit such multiple sources of information to make better inferences on entities remains an interesting open problem. In this paper, we focus on the problem of clustering the vertices based on multiple graphs in both unsupervised and semi-supervised settings. As one of our contributions, we propose Linked Matrix Factorization (LMF) as a novel way of fusing information from multiple graph sources. In LMF, each graph is approximated by matrix factorization with a graph-specific factor and a factor common to all graphs, where the common factor provides features for all vertices. Experiments on SIAM journal data show that (1) we can improve the clustering accuracy through fusing multiple sources of information with several models, and (2) LMF yields superior or competitive results compared to other graph-based clustering methods.",
"Observational data usually comes with a multimodal nature, which means that it can be naturally represented by a multi-layer graph whose layers share the same set of vertices (objects) with different edges (pairwise relationships). In this paper, we address the problem of combining different layers of the multi-layer graph for an improved clustering of the vertices compared to using layers independently. We propose two novel methods, which are based on a joint matrix factorization and a graph regularization framework respectively, to efficiently combine the spectrum of the multiple graph layers, namely the eigenvectors of the graph Laplacian matrices. In each case, the resulting combination, which we call a “joint spectrum” of multiple layers, is used for clustering the vertices. We evaluate our approaches by experiments with several real world social network datasets. Results demonstrate the superior or competitive performance of the proposed methods compared to state-of-the-art techniques and common baseline methods, such as co-regularization and summation of information from individual graphs.",
"Given a co-authorship collaboration network, how well can we cluster the participating authors into communities? If we also consider their citation network, based on the same individuals, is it possible to do a better job? In general, given a network with multiple types (or views) of edges (e.g., collaboration, citation, friendship), can community detection and graph clustering benefit? In this work, we propose Multi-CLUS and GraphFuse, two multi-graph clustering techniques powered by Minimum Description Length and Tensor analysis, respectively. We conduct experiments both on real and synthetic networks, evaluating the performance of our approaches. Our results demonstrate higher clustering accuracy than state-of-the-art baselines that do not exploit the multi-view nature of the network data. Finally, we address the fundamental question posed in the title, and provide a comprehensive answer, based on our systematic analysis.",
"Simple structure and other common principles of factor rotation do not in general provide strong grounds for attributing explanatory significance to the factors which they select. In contrast, it is shown that an extension of Cattell's principle of rotation to Proportional Profiles (PP) offers a basis for determining explanatory factors for three-way or higher order multi-mode data. Conceptual models are developed for two basic patterns of multi-mode data variation, systemand object-variation, and PP analysis is found to apply in the system-variation case. Although PP was originally formulated as a principle of rotation to be used with classic two-way factor analysis, it is shown to embody a latent three-mode factor model, which is here made explicit and generalized frown two to N \"parallel occasions\". As originally formulated, PP rotation was restricted to orthogonal factors. The generalized PP model is demonstrated to give unique \"correct\" solutions with oblique, non-simple structure, and even non-linear factor structures. A series of tests, conducted with synthetic data of known factor composition, demonstrate the capabilities of linear and non-linear versions of the model, provide data on the minimal necessary conditions of uniqueness, and reveal the properties of the analysis procedures when these minimal conditions are not fulfilled. In addition, a mathematical proof is presented for the uniqueness of the solution given certain conditions on the data. Three-mode PP factor analysis is applied to a three-way set of real data consisting of the fundamental and first three formant frequencies of 11 persons saying 8 vowels. A unique solution is extracted, consisting of three factors which are highly meaningful and consistent with prior knowledge and theory concerning vowel quality. The relationships between the three-mode PP model and Tucker's multi-modal model, McDonald's non-linear model and Carroll and Chang's multi-dimensional scaling model are explored."
]
} |
1612.01147 | 2962815400 | It has been shown that for a general-valued constraint language Γ the following statements are equivalent: (1) any instance of VCSP(Γ) can be solved to optimality using a constant level of the Sherali-Adams LP hierarchy; (2) any instance of VCSP(Γ) can be solved to optimality using the third level of the Sherali-Adams LP hierarchy; (3) the support of Γ satisfies the "bounded width condition", i.e., it contains weak near-unanimity operations of all arities. We show that if the support of Γ violates the bounded width condition then not only is VCSP(Γ) not solved by a constant level of the Sherali-Adams LP hierarchy but it is also not solved by Ω(n) levels of the Lasserre SDP hierarchy (also known as the sum-of-squares SDP hierarchy). For Γ corresponding to linear equations in an Abelian group, this result follows from existing work on inapproximability of Max-CSPs. By a breakthrough result of Lee, Raghavendra, and Steurer [STOC'15], our result implies that for any Γ whose support violates the bounded width condition no SDP relaxation of polynomial size solves VCSP(Γ). We establish our result by proving that various reductions preserve exact solvability by the Lasserre SDP hierarchy (up to a constant factor in the level of the hierarchy). Our results hold for general-valued constraint languages, i.e., sets of functions on a fixed finite domain that take on rational or infinite values, and thus also hold in notable special cases of {0, ∞}-valued languages (CSPs), {0, 1}-valued languages (Min-CSPs/Max-CSPs), and Q-valued languages (finite-valued CSPs). | We now informally describe the bounded width condition (BWC). A set of operations on a fixed finite domain satisfies the BWC if it contains "weak near-unanimity" operations of all possible arities. An operation is called a weak near-unanimity operation if, whenever all but at most one of its arguments are equal, its value does not depend on which argument is the odd one out. 
A simple example is a ternary majority operation, which satisfies @math for all @math and @math . Polymorphisms @cite_14 are operations that combine satisfying assignments to a CSP instance and produce a new satisfying assignment. We say that a CSP instance @math satisfies the BWC if the clone of polymorphisms of @math satisfies the BWC. | {
"cite_N": [
"@cite_14"
],
"mid": [
"2026753685"
],
"abstract": [
"Many natural combinatorial problems can be expressed as constraint satisfaction problems. This class of problems is known to be NP-complete in general, but certain restrictions on the form of the constraints can ensure tractability. Here we show that any set of relations used to specify the allowed forms of constraints can be associated with a finite universal algebra and we explore how the computational complexity of the corresponding constraint satisfaction problem is connected to the properties of this algebra. Hence, we completely translate the problem of classifying the complexity of restricted constraint satisfaction problems into the language of universal algebra. We introduce a notion of \"tractable algebra,\" and investigate how the tractability of an algebra relates to the tractability of the smaller algebras which may be derived from it, including its subalgebras and homomorphic images. This allows us to reduce significantly the types of algebras which need to be classified. Using our results we also show that if the decision problem associated with a given collection of constraint types can be solved efficiently, then so can the corresponding search problem. We then classify all finite strictly simple surjective algebras with respect to tractability, obtaining a dichotomy theorem which generalizes Schaefer's dichotomy for the generalized satisfiability problem. Finally, we suggest a possible general algebraic criterion for distinguishing the tractable and intractable cases of the constraint satisfaction problem."
]
} |
1612.01147 | 2962815400 | It has been shown that for a general-valued constraint language Γ the following statements are equivalent: (1) any instance of VCSP(Γ) can be solved to optimality using a constant level of the Sherali-Adams LP hierarchy; (2) any instance of VCSP(Γ) can be solved to optimality using the third level of the Sherali-Adams LP hierarchy; (3) the support of Γ satisfies the "bounded width condition", i.e., it contains weak near-unanimity operations of all arities. We show that if the support of Γ violates the bounded width condition then not only is VCSP(Γ) not solved by a constant level of the Sherali-Adams LP hierarchy but it is also not solved by Ω(n) levels of the Lasserre SDP hierarchy (also known as the sum-of-squares SDP hierarchy). For Γ corresponding to linear equations in an Abelian group, this result follows from existing work on inapproximability of Max-CSPs. By a breakthrough result of Lee, Raghavendra, and Steurer [STOC'15], our result implies that for any Γ whose support violates the bounded width condition no SDP relaxation of polynomial size solves VCSP(Γ). We establish our result by proving that various reductions preserve exact solvability by the Lasserre SDP hierarchy (up to a constant factor in the level of the hierarchy). Our results hold for general-valued constraint languages, i.e., sets of functions on a fixed finite domain that take on rational or infinite values, and thus also hold in notable special cases of {0, ∞}-valued languages (CSPs), {0, 1}-valued languages (Min-CSPs/Max-CSPs), and Q-valued languages (finite-valued CSPs). | In an important series of papers @cite_18 @cite_6 @cite_28 @cite_47 , it was established that the BWC captures precisely the decision CSPs that are solved by a certain natural local propagation algorithm @cite_43 . | {
"cite_N": [
"@cite_18",
"@cite_28",
"@cite_6",
"@cite_43",
"@cite_47"
],
"mid": [
"1988961015",
"2531021157",
"2034079307",
"2150339067",
""
],
"abstract": [
"A k-ary weak near-unanimity operation (or k-WNU) on A is an operation that satisfies the equations w(x, . . . x) ≈ x and w(y, x, . . . , x) ≈ w(x, y, . . . , x) ≈ · · · ≈ w(x, x, . . . , x, y) . If an algebra A has a k-NU (or a k-WNU) term operation, we say that A satisfies NU(k) (or WNU(k), respectively). Likewise, a variety is said to satisfy NU(k) (or WNU(k), respectively), it it has a k-variable term satisfying these equations. It has been conjectured that a finite idempotent algebra A has finite relational width if and only if V(A) (the variety generated by A) has meet semi-distributive congruence lattices. The concept of “finite relational width” arises in the theory of complexity of algorithms, in the algebraic study of constraint-satisfaction problems. Actually, there are several different definitions of this concept and it is not known if they are equivalent. One version of the concept and the conjecture mentioned above are due to B. Larose and L. Zadori [10]. The important family of varieties with meet semi-distributive congruence lattices has various known characterizations. There is a characterization by a certain Maltsev condition; also, it is known that a locally finite variety has this property iff it omits congruence covers of types 1 and 2 (defined in the tame congruence theory of D. Hobby, R. McKenzie [6]). E. Kiss showed that a finite idempotent algebra of relational width k must have an m-WNU term operation for every m ≥ k. E. Kiss and M. Valeriote then observed that a finite algebra with a k-WNU term operation, k > 1, must omit congruence covers of type 1. These observations led M. Valeriote to make two conjectures: any locally finite variety omits congruence covers of type 1 iff it satisfies WNU(k) for some k > 1; any locally finite variety has meet semi-distributive congruence lattices if and only if for some k, it satisfies WNU(m) for all m ≥ k. In this paper, we prove both of these conjectures of M. Valeriote. 
The family of locally finite varieties omitting type 1 is the largest family of locally finite varieties defined by a nontrivial idempotent Maltsev condition. For this",
"In our LICS 2004 paper we introduced an approach to the study of the local structure of finite algebras and relational structures that aims at applications in the Constraint Satisfaction Problem (CSP). This approach involves a graph associated with an algebra @math or a relational structure A, whose vertices are the elements of @math (or A), the edges represent subsets of @math such that the restriction of some term operation of @math is ‘good’ on the subset, that is, acts as an operation of one of the 3 types: semilattice, majority, or affine. In this paper we significantly refine and advance this approach. In particular, we prove certain connectivity and rectangularity properties of relations over algebras related to components of the graph connected by semilattice and affine edges. We also prove a result similar to 2-decomposition of relations invariant under a majority operation, only here we do not impose any restrictions on the relation. These results allow us to give a new, somewhat more intuitive proof of the bounded width theorem: the CSP over algebra @math has bounded width if and only if @math does not contain affine edges. Actually, this result shows that bounded width implies width (2,3). We also consider algebras with edges from a restricted set of types. In particular, it can be proved that type restrictions are preserved under the standard algebraic constructions. Finally, we prove that algebras without semilattice edges have few subalgebras of powers, that is, the CSP over such algebras is also polynomial time.",
"Let A be a finite relational structure of finite type, and let CSP(A) denote the following decision problem: if I is a given structure of the same type as A, is there a homomorphism from I to A? To each relational structure A is naturally associated an algebra A whose structure determines the complexity of the associated decision problem. We investigate those finite algebras arising from CSPs of so-called bounded width, i.e., for which local consistency algorithms effectively decide the problem. We show that if a CSP has bounded width then the variety generated by the associated algebra omits the Hobby-McKenzie types 1 and 2. This provides a method to prove that certain CSPs do not have bounded width. We give several applications, answering a question of Nešetřil and Zhu [26], by showing that various graph homomorphism problems do not have bounded width. Feder and Vardi [17] have shown that every CSP is polynomial-time equivalent to the retraction problem for a poset we call the Feder-Vardi poset of the structure. We show that, in the case where the structure has a single relation, if the retraction problem for the Feder-Vardi poset has bounded width then the CSP for the structure also has bounded width. This is used to exhibit a finite order-primal algebra whose variety admits type 2 but omits type 1 (provided P ≠ NP).",
"This paper starts with the project of finding a large subclass of NP which exhibits a dichotomy. The approach is to find this subclass via syntactic prescriptions. While the paper does not achieve this goal, it does isolate a class (of problems specified by) \"monotone monadic SNP without inequality\" which may exhibit this dichotomy. We justify the placing of all these restrictions by showing, essentially using Ladner's theorem, that classes obtained by using only two of the above three restrictions do not show this dichotomy. We then explore the structure of this class. We show that all problems in this class reduce to the seemingly simpler class CSP. We divide CSP into subclasses and try to unify the collection of all known polytime algorithms for CSP problems and extract properties that make CSP problems NP-hard. This is where the second part of the title, \"a study through Datalog and group theory,\" comes in. We present conjectures about this class which would end in showing the dichotomy.",
""
]
} |
1612.01147 | 2962815400 | It has been shown that for a general-valued constraint language Γ the following statements are equivalent: (1) any instance of VCSP(Γ) can be solved to optimality using a constant level of the Sherali-Adams LP hierarchy; (2) any instance of VCSP(Γ) can be solved to optimality using the third level of the Sherali-Adams LP hierarchy; (3) the support of Γ satisfies the "bounded width condition", i.e., it contains weak near-unanimity operations of all arities. We show that if the support of Γ violates the bounded width condition then not only is VCSP(Γ) not solved by a constant level of the Sherali-Adams LP hierarchy but it is also not solved by Ω(n) levels of the Lasserre SDP hierarchy (also known as the sum-of-squares SDP hierarchy). For Γ corresponding to linear equations in an Abelian group, this result follows from existing work on inapproximability of Max-CSPs. By a breakthrough result of Lee, Raghavendra, and Steurer [STOC'15], our result implies that for any Γ whose support violates the bounded width condition no SDP relaxation of polynomial size solves VCSP(Γ). We establish our result by proving that various reductions preserve exact solvability by the Lasserre SDP hierarchy (up to a constant factor in the level of the hierarchy). Our results hold for general-valued constraint languages, i.e., sets of functions on a fixed finite domain that take on rational or infinite values, and thus also hold in notable special cases of {0, ∞}-valued languages (CSPs), {0, 1}-valued languages (Min-CSPs/Max-CSPs), and Q-valued languages (finite-valued CSPs). | In our main result, Theorem , the BWC is required to hold, as in @cite_53 , for the support of the fractional polymorphisms @cite_49 of the general-valued CSPs. Intuitively, fractional polymorphisms of a general-valued CSP instance @math are probability distributions over polymorphisms of @math with some desirable properties. (A formal definition is given in .) 
This is a natural requirement, since polymorphisms do not capture the complexity of general-valued CSPs, whereas fractional polymorphisms do @cite_49 @cite_25 . | {
"cite_N": [
"@cite_53",
"@cite_25",
"@cite_49"
],
"mid": [
"2411816729",
"2962951564",
"2071152266"
],
"abstract": [
"We give a precise algebraic characterization of the power of Sherali--Adams relaxations for solvability of valued constraint satisfaction problems (CSPs) to optimality. The condition is that of bounded width, which has already been shown to capture the power of local consistency methods for decision CSPs and the power of semidefinite programming for robust approximation of CSPs. Our characterization has several algorithmic and complexity consequences. On the algorithmic side, we show that several novel and well-known valued constraint languages are tractable via the third level of the Sherali--Adams relaxation. For the known languages, this is a significantly simpler algorithm than those previously obtained. On the complexity side, we obtain a dichotomy theorem for valued constraint languages that can express an injective unary function. This implies a simple proof of the dichotomy theorem for conservative valued constraint languages established by Kolmogorov and Živný [J. ACM, 60 (2013), 10], and also a ...",
"An instance of the Valued Constraint Satisfaction Problem (VCSP) is given by a finite set of variables, a finite domain of labels, and a sum of functions, each function depending on a subset of the variables. Each function can take finite values specifying costs of assignments of labels to its variables or the infinite value, which indicates infeasible assignments. The goal is to find an assignment of labels to the variables that minimizes the sum. We study (assuming that P ≠ NP) how the complexity of this very general problem depends on the set of functions allowed in the instances, the so-called constraint language. The case when all allowed functions take values in {0, ∞} corresponds to ordinary CSPs, where one deals only with the feasibility issue and there is no optimization. This case is the subject of the Algebraic CSP Dichotomy Conjecture predicting for which constraint languages CSPs are tractable and for which NP-hard. The case when all allowed functions take only finite values corresponds to finite-valued CSP, where the feasibility aspect is trivial and one deals only with the optimization issue. The complexity of finite-valued CSPs was fully classified by Thapper and Zivny. An algebraic necessary condition for tractability of a general-valued CSP with a fixed constraint language was recently given by Kozik and Ochremiak. As our main result, we prove that if a constraint language satisfies this algebraic necessary condition, and the feasibility CSP corresponding to the VCSP with this language is tractable, then the VCSP is tractable. The algorithm is a simple combination of the assumed algorithm for the feasibility CSP and the standard LP relaxation. As a corollary, we obtain that a dichotomy for ordinary CSPs would imply a dichotomy for general-valued CSPs.",
"Discrete optimization problems arise in many different areas and are studied under many different names. In many such problems the quantity to be optimized can be expressed as a sum of functions of a restricted form. Here we present a unifying theory of complexity for problems of this kind. We show that the complexity of a finite-domain discrete optimization problem is determined by certain algebraic properties of the objective function, which we call weighted polymorphisms. We define a Galois connection between sets of rational-valued functions and sets of weighted polymorphisms and show how the closed sets of this Galois connection can be characterized. These results provide a new approach to studying the complexity of discrete optimization. We use this approach to identify certain maximal tractable subproblems of the general problem and hence derive a complete classification of complexity for the Boolean case."
]
} |
1612.01147 | 2962815400 | It has been shown that for a general-valued constraint language Γ the following statements are equivalent: (1) any instance of VCSP(Γ) can be solved to optimality using a constant level of the Sherali-Adams LP hierarchy; (2) any instance of VCSP(Γ) can be solved to optimality using the third level of the Sherali-Adams LP hierarchy; (3) the support of Γ satisfies the "bounded width condition", i.e., it contains weak near-unanimity operations of all arities. We show that if the support of Γ violates the bounded width condition then not only is VCSP(Γ) not solved by a constant level of the Sherali-Adams LP hierarchy but it is also not solved by Ω(n) levels of the Lasserre SDP hierarchy (also known as the sum-of-squares SDP hierarchy). For Γ corresponding to linear equations in an Abelian group, this result follows from existing work on inapproximability of Max-CSPs. By a breakthrough result of Lee, Raghavendra, and Steurer [STOC'15], our result implies that for any Γ whose support violates the bounded width condition no SDP relaxation of polynomial size solves VCSP(Γ). We establish our result by proving that various reductions preserve exact solvability by the Lasserre SDP hierarchy (up to a constant factor in the level of the hierarchy). Our results hold for general-valued constraint languages, i.e., sets of functions on a fixed finite domain that take on rational or infinite values, and thus also hold in notable special cases of {0, ∞}-valued languages (CSPs), {0, 1}-valued languages (Min-CSPs/Max-CSPs), and Q-valued languages (finite-valued CSPs). | The BWC was also shown @cite_9 @cite_31 to capture precisely the Max-CSPs that can be robustly approximated, as conjectured in @cite_24 . This work is similar in spirit to ours, but differs in important respects. 
In particular, Dalmau and Krokhin showed that various reductions preserve robust approximability of equations, thus showing that Max-CSPs not satisfying the BWC cannot be robustly approximated, assuming P @math NP and relying on Håstad's inapproximability results for linear equations @cite_30 . (Barto and Kozik @cite_31 then showed that Max-CSPs satisfying the BWC can be robustly approximated.) However, note that linear equations can be solved exactly using Gaussian elimination and thus this result is not applicable in our setting. Our result, on the other hand, shows that various reductions preserve exact solvability of equations by an algorithm (the Lasserre SDP hierarchy) independently of the P vs. NP question. Moreover, the pp-definitions and pp-interpretations used in @cite_9 @cite_31 were required to be equality-free. We prove that our reductions are well-behaved without this assumption. | {
"cite_N": [
"@cite_24",
"@cite_9",
"@cite_31",
"@cite_30"
],
"mid": [
"2611404581",
"2151577278",
"2964303025",
"1999032440"
],
"abstract": [
"",
"An algorithm for a constraint satisfaction problem is called robust if it outputs an assignment satisfying at least a (1 − f(ε))-fraction of constraints for each (1 − ε)-satisfiable instance (i.e., such that at most an ε-fraction of constraints needs to be removed to make the instance satisfiable), where f(ε) → 0 as ε → 0. We establish an algebraic framework for analyzing constraint satisfaction problems admitting an efficient robust algorithm with functions f of a given growth rate. We use this framework to derive hardness results. We also describe three classes of problems admitting an efficient robust algorithm such that f is O(1/log(1/ε)), O(ε^(1/k)) for some k > 1, and O(ε), respectively. Finally, we give a complete classification of robust satisfiability with a given f for the Boolean case.",
"An algorithm for a constraint satisfaction problem is called robust if it outputs an assignment satisfying at least @math -fraction of the constraints given a @math -satisfiable instance, where @math as @math . Guruswami and Zhou conjectured a characterization of constraint languages for which the corresponding constraint satisfaction problem admits an efficient robust algorithm. This paper confirms their conjecture.",
"We prove optimal, up to an arbitrary ε > 0, inapproximability results for Max-Ek-Sat for k ≥ 3, maximizing the number of satisfied linear equations in an over-determined system of linear equations modulo a prime p and Set Splitting. As a consequence of these results we get improved lower bounds for the efficient approximability of many optimization problems studied previously. In particular, for Max-E2-Sat, Max-Cut, Max-di-Cut, and Vertex cover."
]
} |
1612.00874 | 2559779059 | Perhaps surprisingly, the total electron microscopy (EM) data collected to date is less than a cubic millimeter. Consequently, there is an enormous demand in the materials and biological sciences to image at greater speed and lower dosage, while maintaining resolution. Traditional EM imaging based on homogeneous raster-order scanning severely limits the volume of high-resolution data that can be collected, and presents a fundamental limitation to understanding physical processes such as material deformation, crack propagation, and pyrolysis. We introduce a novel multi-resolution data fusion (MDF) method for super-resolution computational EM. Our method combines innovative data acquisition with novel algorithmic techniques to dramatically improve the resolution volume speed trade-off. The key to our approach is to collect the entire sample at low resolution, while simultaneously collecting a small fraction of data at high resolution. The high-resolution measurements are then used to create a material-specific patch-library that is used within the "plug-and-play" framework to dramatically improve super-resolution of the low-resolution data. We present results using FEI electron microscope data that demonstrate super-resolution factors of 4x, 8x, and 16x, while substantially maintaining high image quality and reducing dosage. | Image interpolation and super-resolution have been widely studied to enable high-quality imaging with fewer measurements, leading to faster and cheaper data acquisition. Spurred by the success of denoising filters like non-local means (NLM) @cite_18 @cite_19 in exploiting non-local redundancies, there have been several efforts to solve the sparse image interpolation problem using patch-based models @cite_29 @cite_11 @cite_21 @cite_15 . Dictionary learning @cite_3 @cite_26 @cite_5 and example-based methods @cite_9 have also been proposed for achieving super-resolution from low-resolution measurements. 
@cite_6 proposed using non-local means within the steering kernel regression framework @cite_17 to achieve super-resolution. @cite_31 proposed tree-based resolution synthesis, using a regression tree as a piecewise linear approximation to the conditional mean of the high-resolution image given the low-resolution image. More recently, @cite_20 discussed the use of libraries of structures and image self-similarity to improve super-resolution, but these methods involved extensive training. Another training-based approach to super-resolution was proposed by Perez- @cite_25 , where they derive a regression-based manifold mapping between low- and high-resolution images. | {
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_31",
"@cite_29",
"@cite_21",
"@cite_9",
"@cite_17",
"@cite_3",
"@cite_6",
"@cite_19",
"@cite_5",
"@cite_15",
"@cite_25",
"@cite_20",
"@cite_11"
],
"mid": [
"2131024476",
"",
"2067042811",
"1999905919",
"",
"",
"2006262236",
"",
"1992408872",
"",
"",
"",
"2462651488",
"2952857781",
""
],
"abstract": [
"Sparse representation is proven to be a promising approach to image super-resolution, where the low-resolution (LR) image is usually modeled as the down-sampled version of its high-resolution (HR) counterpart after blurring. When the blurring kernel is the Dirac delta function, i.e., the LR image is directly down-sampled from its HR counterpart without blurring, the super-resolution problem becomes an image interpolation problem. In such cases, however, the conventional sparse representation models (SRM) become less effective, because the data fidelity term fails to constrain the image local structures. In natural images, fortunately, many nonlocal similar patches to a given patch could provide nonlocal constraint to the local structure. In this paper, we incorporate the image nonlocal self-similarity into SRM for image interpolation. More specifically, a nonlocal autoregressive model (NARM) is proposed and taken as the data fidelity term in SRM. We show that the NARM-induced sampling matrix is less coherent with the representation dictionary, and consequently makes SRM more effective for image interpolation. Our extensive experimental results demonstrate that the proposed NARM-based image interpolation method can effectively reconstruct the edge structures and suppress the jaggy ringing artifacts, achieving the best image interpolation results so far in terms of PSNR as well as perceptual quality metrics such as SSIM and FSIM.",
"",
"A new approach toward increasing spatial resolution is required to overcome the limitations of the sensors and optics manufacturing technology. One promising approach is to use signal processing techniques to obtain an high-resolution (HR) image (or sequence) from observed multiple low-resolution (LR) images. Such a resolution enhancement approach has been one of the most active research areas, and it is called super resolution (SR) (or HR) image reconstruction or simply resolution enhancement. In this article, we use the term \"SR image reconstruction\" to refer to a signal processing approach toward resolution enhancement because the term \"super\" in \"super resolution\" represents very well the characteristics of the technique overcoming the inherent resolution limitation of LR imaging systems. The major advantage of the signal processing approach is that it may cost less and the existing LR imaging systems can be still utilized. The SR image reconstruction is proved to be useful in many practical cases where multiple frames of the same scene can be obtained, including medical imaging, satellite imaging, and video applications. The goal of this article is to introduce the concept of SR algorithms to readers who are unfamiliar with this area and to provide a review for experts. To this purpose, we present the technical review of various existing SR methodologies which are often employed. Before presenting the review of existing SR algorithms, we first model the LR image acquisition process.",
"This paper describes a novel inpainting algorithm that is capable of filling in holes in overlapping texture and cartoon image layers. This algorithm is a direct extension of a recently developed sparse-representation-based image decomposition method called MCA (morphological component analysis), designed for the separation of linearly combined texture and cartoon layers in a given image (see [J.-L. Starck, M. Elad, D.L. Donoho, Image decomposition via the combination of sparse representations and a variational approach, IEEE Trans. Image Process. (2004), in press] and [J.-L. Starck, M. Elad, D.L. Donoho, Redundant multiscale transforms and their application for morphological component analysis, Adv. Imag. Electron Phys. (2004) 132]). In this extension, missing pixels fit naturally into the separation framework, producing separate layers as a by-product of the inpainting process. As opposed to the inpainting system proposed by , where image decomposition and filling-in stages were separated as two blocks in an overall system, the new approach considers separation, hole-filling, and denoising as one unified task. We demonstrate the performance of the new approach via several examples.",
"",
"",
"In this paper, we make contact with the field of nonparametric statistics and present a development and generalization of tools and results for use in image processing and reconstruction. In particular, we adapt and expand kernel regression ideas for use in image denoising, upscaling, interpolation, fusion, and more. Furthermore, we establish key relationships with some popular existing methods and show how several of these algorithms, including the recently popularized bilateral filter, are special cases of the proposed framework. The resulting algorithms and analyses are amply illustrated with practical examples",
"",
"Image super-resolution (SR) reconstruction is essentially an ill-posed problem, so it is important to design an effective prior. For this purpose, we propose a novel image SR method by learning both non-local and local regularization priors from a given low-resolution image. The non-local prior takes advantage of the redundancy of similar patches in natural images, while the local prior assumes that a target pixel can be estimated by a weighted average of its neighbors. Based on the above considerations, we utilize the non-local means filter to learn a non-local prior and the steering kernel regression to learn a local prior. By assembling the two complementary regularization terms, we propose a maximum a posteriori probability framework for SR recovery. Thorough experimental results suggest that the proposed SR method can reconstruct higher quality results both quantitatively and perceptually.",
"",
"",
"",
"The main challenge in Super Resolution (SR) is to discover the mapping between the low-and high-resolution manifolds of image patches, a complex ill-posed problem which has recently been addressed through piecewise linear regression with promising results. In this paper we present a novel regression-based SR algorithm that benefits from an extended knowledge of the structure of both manifolds. We propose a transform that collapses the 16 variations induced from the dihedral group of transforms (i.e. rotations, vertical and horizontal reflections) and antipodality (i.e. diametrically opposed points in the unitary sphere) into a single primitive. The key idea of our transform is to study the different dihedral elements as a group of symmetries within the high-dimensional manifold. We obtain the respective set of mirror-symmetry axes by means of a frequency analysis of the dihedral elements, and we use them to collapse the redundant variability through a modified symmetry distance. The experimental validation of our algorithm shows the effectiveness of our approach, which obtains competitive quality with a dictionary of as little as 32 atoms (reducing other methods' dictionaries by at least a factor of 32) and further pushing the state-of-the-art with a 1024 atoms dictionary.",
"In this paper we present seven techniques that everybody should know to improve example-based single image super resolution (SR): 1) augmentation of data, 2) use of large dictionaries with efficient search structures, 3) cascading, 4) image self-similarities, 5) back projection refinement, 6) enhanced prediction by consistency check, and 7) context reasoning. We validate our seven techniques on standard SR benchmarks (i.e. Set5, Set14, B100) and methods (i.e. A+, SRCNN, ANR, Zeyde, Yang) and achieve substantial improvements.The techniques are widely applicable and require no changes or only minor adjustments of the SR methods. Moreover, our Improved A+ (IA) method sets new state-of-the-art results outperforming A+ by up to 0.9dB on average PSNR whilst maintaining a low time complexity.",
""
]
} |
1612.00874 | 2559779059 | Perhaps surprisingly, the total electron microscopy (EM) data collected to date is less than a cubic millimeter. Consequently, there is an enormous demand in the materials and biological sciences to image at greater speed and lower dosage, while maintaining resolution. Traditional EM imaging based on homogeneous raster-order scanning severely limits the volume of high-resolution data that can be collected, and presents a fundamental limitation to understanding physical processes such as material deformation, crack propagation, and pyrolysis. We introduce a novel multi-resolution data fusion (MDF) method for super-resolution computational EM. Our method combines innovative data acquisition with novel algorithmic techniques to dramatically improve the resolution volume speed trade-off. The key to our approach is to collect the entire sample at low resolution, while simultaneously collecting a small fraction of data at high resolution. The high-resolution measurements are then used to create a material-specific patch-library that is used within the "plug-and-play" framework to dramatically improve super-resolution of the low-resolution data. We present results using FEI electron microscope data that demonstrate super-resolution factors of 4x, 8x, and 16x, while substantially maintaining high image quality and reducing dosage. | Apart from these specific solutions for image interpolation and super-resolution, @cite_0 proposed a generic Bayesian framework called plug-and-play'' priors (P &P) for incorporating modern denoising algorithms as prior models in a variety of inverse problems such as sparse image interpolation. In this spirit, @cite_2 have adopted the P &P framework to use sparse-coding and dictionary-learning-based denoisers for achieving super-resolution. 
In any case, no method that we know of currently exists to fuse dense low-resolution data with sparse high-resolution measurements to form full high-resolution images without training. | {
"cite_N": [
"@cite_0",
"@cite_2"
],
"mid": [
"2061926706",
"2512704900"
],
"abstract": [
"Scanning electron microscopes (SEMs) are used in neuroscience and materials science to image centimeters of sample area at nanometer scales. Since imaging rates are in large part SNR-limited, large collections can lead to weeks of around-the-clock imaging time. To increase data collection speed, we propose and demonstrate on an operational SEM a fast method to sparsely sample and reconstruct smooth images. To accurately localize the electron probe position at fast scan rates, we model the dynamics of the scan coils, and use the model to rapidly and accurately visit a randomly selected subset of pixel locations. Images are reconstructed from the undersampled data by compressed sensing inversion using image smoothness as a prior. We report image fidelity as a function of acquisition speed by comparing traditional raster to sparse imaging modes. Our approach is equally applicable to other domains of nanometer microscopy in which the time to position a probe is a limiting factor (e.g., atomic force microscopy), or in which excessive electron doses might otherwise alter the sample being observed (e.g., scanning transmission electron microscopy).",
"Denoising and Super-Resolution are two inverse problems that have been extensively studied. Over the years, these two tasks were treated as two distinct problems that deserve a different algorithmic solution. In this paper we wish to exploit the recently introduced Plug-and-Play Prior (PPP) approach [1] to connect between the two. Using the PPP, we turn leading denoisers into super-resolution solvers. As a case-study we demonstrate this on the NCSR algorithm, which has two variants: one for denoising and one for superresolution. We show that by using the NCSR denoiser, one can get equal or even better results when compared with the NCSR super-resolution."
]
} |
1612.01073 | 2560064111 | We use Hamiltonian Floer theory to recover and generalize a classic rigidity theorem of Ekeland and Lasry. That theorem can be rephrased as an assertion about the existence of multiple closed Reeb orbits for certain tight contact forms on the sphere that are close, in a suitable sense, to the standard contact form. We first generalize this result to Reeb flows of contact forms on prequantization spaces that are suitably close to Boothby-Wang forms. We then establish, under an additional nondegeneracy assumption, the same rigidity phenomenon for Reeb flows on any closed contact manifold. A natural obstruction to obtaining sharp multiplicity results for closed Reeb orbits is the possible existence of fast closed orbits. To complement the existence results established here, we also show that the existence of such fast orbits cannot be precluded by any condition which is invariant under contactomorphisms, even for nearby contact forms. | Our proof of Theorem is motivated by the method to obtain cuplength estimates in Floer theory that was introduced by Albers and Momin in @cite_1 and further developed by Albers and Hein in @cite_6 . | {
"cite_N": [
"@cite_1",
"@cite_6"
],
"mid": [
"2137741537",
"2098787298"
],
"abstract": [
"We prove that on a restricted contact type hypersurface the number of leaf-wise intersections is bounded from below by a certain cup-length.",
"The main goal of this paper is to give a unified treatment to many known cuplength estimates with a view towards Floer theory. As the base case, we prove that for C0-perturbations of a function which is Morse–Bott along a closed submanifold, the number of critical points is bounded below in terms of the cuplength of that critical submanifold. As we work with rather general assumptions the proof also applies in a variety of Floer settings. For example, this proves lower bounds (some of which were known) for the number of fixed points of Hamiltonian diffeomorphisms, Hamiltonian chords for Lagrangian submanifolds, translated points of contactomorphisms, and solutions to a Dirac-type equation."
]
} |
1612.01073 | 2560064111 | We use Hamiltonian Floer theory to recover and generalize a classic rigidity theorem of Ekeland and Lasry. That theorem can be rephrased as an assertion about the existence of multiple closed Reeb orbits for certain tight contact forms on the sphere that are close, in a suitable sense, to the standard contact form. We first generalize this result to Reeb flows of contact forms on prequantization spaces that are suitably close to Boothby-Wang forms. We then establish, under an additional nondegeneracy assumption, the same rigidity phenomenon for Reeb flows on any closed contact manifold. A natural obstruction to obtaining sharp multiplicity results for closed Reeb orbits is the possible existence of fast closed orbits. To complement the existence results established here, we also show that the existence of such fast orbits cannot be precluded by any condition which is invariant under contactomorphisms, even for nearby contact forms. | Similar ideas to those underlying Theorem were developed by Jean Gutt in his thesis @cite_17 and subsequent paper @cite_26 . The author is grateful to Peter Albers for notifying him of Gutt's thesis when the author spoke of the results presented here at the Lorentz Centre in July 2014. In these works, Gutt shows that positive @math -equivariant symplectic homology can be used as a contact invariant for a certain class of fillable contact manifolds that can be realized as the boundary of Liouville domains. Among the many interesting applications of his theory, Gutt reproves Theorem under the additional strong nondegeneracy assumption, and also proves a result (Theorem 1.6 in @cite_26 ) very similar in content to Theorem here. Happily, besides a shared debt owed to the technical foundations for @math -equivariant Hamiltonian Floer theory laid down by Bourgeois and Oancea, this is essentially the extent of the overlap between the two projects. | {
"cite_N": [
"@cite_26",
"@cite_17"
],
"mid": [
"2244092244",
"1889062234"
],
"abstract": [
"We show that positive @math -equivariant symplectic homology is a contact invariant for a subclass of contact manifolds which are boundaries of Liouville domains. In nice cases, when the set of Conley-Zehnder indices of all good periodic Reeb orbits on the boundary of the Liouville domain is lacunary, the positive @math -equivariant symplectic homology can be computed; it is generated by those orbits. We prove a \"Viterbo functoriality\" property: when one Liouville domain is embedded into an other one, there is a morphism (reversing arrows) between their positive @math -equivariant symplectic homologies and morphisms compose nicely. These properties allow us to give a proof of Ustilovsky's result on the number of non isomorphic contact structures on the spheres @math . They also give a new proof of a Theorem by Ekeland and Lasry on the minimal number of periodic Reeb orbits on some hypersurfaces in @math . We extend this result to some hypersurfaces in some negative line bundles.",
"This thesis deals with the question of the minimal number of distinct periodic Reeb orbits on a contact manifold which is the boundary of a compact symplectic manifold.The positive S1-equivariant symplectic homology is one of the main tools considered in this thesis. It is built from periodic orbits of Hamiltonian vector fields in a symplectic manifold whose boundary is the given contact manifold.Our first result describes the relation between the symplectic homologies of an exact compact symplectic manifold with contact type boundary (also called Liouville domain), and the periodic Reeb orbits on the boundary. We then prove some properties of these homologies. For a Liouville domain embedded into another one, we construct a morphism between their homologies. We study the invariance of the homologies with respect to the choice of the contact form on the boundary.We use the positive S1-equivariant symplectic homology to give a new proof of a Theorem by Ekeland and Lasry about the minimal number of distinct periodic Reeb orbits on some hypersurfaces in R2n. We indicate how it extends to some hypersurfaces in some negative line bundles. We also give a characterisation and a new way to compute the generalized Conley-Zehnder index defined by Robbin and Salamon for any path of symplectic matrices. A tool for this is a new analysis of normal forms for symplectic matrices."
]
} |
1612.00835 | 2952171659 | Recently, there have been several promising methods to generate realistic imagery from deep convolutional networks. These methods sidestep the traditional computer graphics rendering pipeline and instead generate imagery at the pixel level by learning from large collections of photos (e.g. faces or bedrooms). However, these methods are of limited utility because it is difficult for a user to control what the network produces. In this paper, we propose a deep adversarial image synthesis architecture that is conditioned on sketched boundaries and sparse color strokes to generate realistic cars, bedrooms, or faces. We demonstrate a sketch based image synthesis system which allows users to 'scribble' over the sketch to indicate preferred color for objects. Our network can then generate convincing images that satisfy both the color and the sketch constraints of user. The network is feed-forward which allows users to see the effect of their edits in real time. We compare to recent work on sketch to image synthesis and show that our approach can generate more realistic, more diverse, and more controllable outputs. The architecture is also effective at user-guided colorization of grayscale images. | In the last few years, models built on deep convolutional networks have shown promising results @cite_18 @cite_29 @cite_35 @cite_37 @cite_50 . While deep image synthesis methods cannot yet create realistic, high-resolution images, they have an implicit ability to generalize that is difficult for data-driven non-parametric methods (e.g. the ability to hallucinate unseen viewpoints of particular chairs based on the appearance changes of other chairs @cite_29 ). Because our visual world is both enormously complex (with appearance depending on viewpoints, materials, attributes, object identity, lighting, etc.) and heavy-tailed, non-parametric methods are limited even in the "big data" era. 
But deep image synthesis methods might implicitly factorize our visual world and thus generalize to situations beyond the training examples. | {
"cite_N": [
"@cite_35",
"@cite_18",
"@cite_37",
"@cite_29",
"@cite_50"
],
"mid": [
"2173520492",
"2099471712",
"",
"1893585201",
"1850742715"
],
"abstract": [
"In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.",
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.",
"",
"We train a generative convolutional neural network which is able to generate images of objects given object type, viewpoint, and color. We train the network in a supervised manner on a dataset of rendered 3D chair models. Our experiments show that the network does not merely learn all images by heart, but rather finds a meaningful representation of a 3D chair model allowing it to assess the similarity of different chairs, interpolate between given viewpoints to generate the missing ones, or invent new chair styles by interpolating between chairs from the training set. We show that the network can be used to find correspondences between different chairs from the dataset, outperforming existing approaches on this task.",
"This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye."
]
} |
1612.00835 | 2952171659 | Recently, there have been several promising methods to generate realistic imagery from deep convolutional networks. These methods sidestep the traditional computer graphics rendering pipeline and instead generate imagery at the pixel level by learning from large collections of photos (e.g. faces or bedrooms). However, these methods are of limited utility because it is difficult for a user to control what the network produces. In this paper, we propose a deep adversarial image synthesis architecture that is conditioned on sketched boundaries and sparse color strokes to generate realistic cars, bedrooms, or faces. We demonstrate a sketch based image synthesis system which allows users to 'scribble' over the sketch to indicate preferred color for objects. Our network can then generate convincing images that satisfy both the color and the sketch constraints of user. The network is feed-forward which allows users to see the effect of their edits in real time. We compare to recent work on sketch to image synthesis and show that our approach can generate more realistic, more diverse, and more controllable outputs. The architecture is also effective at user-guided colorization of grayscale images. | A common approach to deep image synthesis is to learn a low dimensional latent representation that can later be used to reconstruct an image, e.g. with Variational Autoencoders (VAEs) @cite_37 or Generative Adversarial Networks (GANs) @cite_18 . In general, deep image synthesis can be conditioned on any input vector @cite_36 , such as attributes @cite_6 , 3d viewpoint parameters and object identity @cite_29 , image and desired viewpoint @cite_9 , or grayscale image @cite_16 @cite_19 @cite_2 . | {
"cite_N": [
"@cite_37",
"@cite_18",
"@cite_36",
"@cite_29",
"@cite_9",
"@cite_6",
"@cite_19",
"@cite_2",
"@cite_16"
],
"mid": [
"",
"2099471712",
"2423557781",
"1893585201",
"",
"2963567641",
"2461158874",
"2950064337",
"2326925005"
],
"abstract": [
"",
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.",
"This work explores conditional image generation with a new image density model based on the PixelCNN architecture. The model can be conditioned on any vector, including descriptive labels or tags, or latent embeddings created by other networks. When conditioned on class labels from the ImageNet database, the model is able to generate diverse, realistic scenes representing distinct animals, objects, landscapes and structures. When conditioned on an embedding produced by a convolutional network given a single image of an unseen face, it generates a variety of new portraits of the same person with different facial expressions, poses and lighting conditions. We also show that conditional PixelCNN can serve as a powerful decoder in an image autoencoder. Additionally, the gated convolutional layers in the proposed model improve the log-likelihood of PixelCNN to match the state-of-the-art performance of PixelRNN on ImageNet, with greatly reduced computational cost.",
"We train a generative convolutional neural network which is able to generate images of objects given object type, viewpoint, and color. We train the network in a supervised manner on a dataset of rendered 3D chair models. Our experiments show that the network does not merely learn all images by heart, but rather finds a meaningful representation of a 3D chair model allowing it to assess the similarity of different chairs, interpolate between given viewpoints to generate the missing ones, or invent new chair styles by interpolating between chairs from the training set. We show that the network can be used to find correspondences between different chairs from the dataset, outperforming existing approaches on this task.",
"",
"This paper investigates a novel problem of generating images from visual attributes. We model the image as a composite of foreground and background and develop a layered generative model with disentangled latent variables that can be learned end-to-end using a variational auto-encoder. We experiment with natural images of faces and birds and demonstrate that the proposed models are capable of generating realistic and diverse samples with disentangled latent representations. We use a general energy minimization algorithm for posterior inference of latent variables given novel images. Therefore, the learned generative models show excellent quantitative and visual results in the tasks of attribute-conditioned image reconstruction and completion.",
"We present a novel technique to automatically colorize grayscale images that combines both global priors and local image features. Based on Convolutional Neural Networks, our deep network features a fusion layer that allows us to elegantly merge local information dependent on small image patches with global priors computed using the entire image. The entire framework, including the global and local priors as well as the colorization model, is trained in an end-to-end fashion. Furthermore, our architecture can process images of any resolution, unlike most existing approaches based on CNN. We leverage an existing large-scale scene classification database to train our model, exploiting the class labels of the dataset to more efficiently and discriminatively learn the global priors. We validate our approach with a user study and compare against the state of the art, where we show significant improvements. Furthermore, we demonstrate our method extensively on many different types of images, including black-and-white photography from over a hundred years ago, and show realistic colorizations.",
"We develop a fully automatic image colorization system. Our approach leverages recent advances in deep networks, exploiting both low-level and semantic representations. As many scene elements naturally appear according to multimodal color distributions, we train our model to predict per-pixel color histograms. This intermediate output can be used to automatically generate a color image, or further manipulated prior to image formation. On both fully and partially automatic colorization tasks, we outperform existing methods. We also explore colorization as a vehicle for self-supervised visual representation learning.",
"Given a grayscale photograph as input, this paper attacks the problem of hallucinating a plausible color version of the photograph. This problem is clearly underconstrained, so previous approaches have either relied on significant user interaction or resulted in desaturated colorizations. We propose a fully automatic approach that produces vibrant and realistic colorizations. We embrace the underlying uncertainty of the problem by posing it as a classification task and use class-rebalancing at training time to increase the diversity of colors in the result. The system is implemented as a feed-forward pass in a CNN at test time and is trained on over a million color images. We evaluate our algorithm using a “colorization Turing test,” asking human participants to choose between a generated and ground truth color image. Our method successfully fools humans on 32 of the trials, significantly higher than previous methods. Moreover, we show that colorization can be a powerful pretext task for self-supervised feature learning, acting as a cross-channel encoder. This approach results in state-of-the-art performance on several feature learning benchmarks."
]
} |
1612.00835 | 2952171659 | Recently, there have been several promising methods to generate realistic imagery from deep convolutional networks. These methods sidestep the traditional computer graphics rendering pipeline and instead generate imagery at the pixel level by learning from large collections of photos (e.g. faces or bedrooms). However, these methods are of limited utility because it is difficult for a user to control what the network produces. In this paper, we propose a deep adversarial image synthesis architecture that is conditioned on sketched boundaries and sparse color strokes to generate realistic cars, bedrooms, or faces. We demonstrate a sketch based image synthesis system which allows users to 'scribble' over the sketch to indicate preferred color for objects. Our network can then generate convincing images that satisfy both the color and the sketch constraints of user. The network is feed-forward which allows users to see the effect of their edits in real time. We compare to recent work on sketch to image synthesis and show that our approach can generate more realistic, more diverse, and more controllable outputs. The architecture is also effective at user-guided colorization of grayscale images. | Generative Adversarial Networks (GANs) Among the most promising deep image synthesis techniques are Generative Adversarial Networks (GANs) @cite_18 @cite_35 , in which a generator network attempts to fool a simultaneously trained discriminator network that classifies images as real or synthetic. The discriminator discourages the generator from producing obviously fake images. In particular, a straightforward regression loss for image synthesis often leads to 'conservative' networks which produce blurry and desaturated outputs that are close to the mean of the data yet perceptually unrealistic. After training, the generator network is able to produce diverse images from a low dimensional latent input space. 
Although optimizing in this latent space can be used to 'walk' the natural image manifold (e.g. for image editing @cite_25 @cite_45 or network visualization @cite_7 @cite_23 ), the space itself is not semantically well organized -- the particular dimensions of the latent vector do not correspond to semantic attributes although mapping them to an intermediate structure image @cite_28 can give us more insight. | {
"cite_N": [
"@cite_35",
"@cite_18",
"@cite_7",
"@cite_28",
"@cite_45",
"@cite_23",
"@cite_25"
],
"mid": [
"2173520492",
"2099471712",
"2963464195",
"2298992465",
"2951021768",
"",
""
],
"abstract": [
"In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.",
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.",
"Deep neural networks (DNNs) have demonstrated state-of-the-art results on many pattern recognition tasks, especially vision classification problems. Understanding the inner workings of such computational brains is both fascinating basic science that is interesting in its own right---similar to why we study the human brain---and will enable researchers to further improve DNNs. One path to understanding how a neural network functions internally is to study what each of its neurons has learned to detect. One such method is called activation maximization, which synthesizes an input (e.g. an image) that highly activates a neuron. Here we dramatically improve the qualitative state of the art of activation maximization by harnessing a powerful, learned prior: a deep generator network. The algorithm (1) generates qualitatively state-of-the-art synthetic images that look almost real, (2) reveals the features learned by each neuron in an interpretable way, (3) generalizes well to new datasets and somewhat well to different network architectures without requiring the prior to be relearned, and (4) can be considered as a high-quality generative method (in this case, by generating novel, creative, interesting, recognizable images).",
"Current generative frameworks use end-to-end learning and generate images by sampling from uniform noise distribution. However, these approaches ignore the most basic principle of image formation: images are product of: (a) Structure: the underlying 3D model; (b) Style: the texture mapped onto structure. In this paper, we factorize the image generation process and propose Style and Structure Generative Adversarial Network ( ( S ^2 )-GAN). Our ( S ^2 )-GAN has two components: the Structure-GAN generates a surface normal map; the Style-GAN takes the surface normal map as input and generates the 2D image. Apart from a real vs. generated loss function, we use an additional loss with computed surface normals from generated images. The two GANs are first trained independently, and then merged together via joint learning. We show our ( S ^2 )-GAN model is interpretable, generates more realistic images and can be used to learn unsupervised RGBD representations.",
"Realistic image manipulation is challenging because it requires modifying the image appearance in a user-controlled way, while preserving the realism of the result. Unless the user has considerable artistic skill, it is easy to \"fall off\" the manifold of natural images while editing. In this paper, we propose to learn the natural image manifold directly from data using a generative adversarial neural network. We then define a class of image editing operations, and constrain their output to lie on that learned manifold at all times. The model automatically adjusts the output keeping all edits as realistic as possible. All our manipulations are expressed in terms of constrained optimization and are applied in near-real time. We evaluate our algorithm on the task of realistic photo manipulation of shape and color. The presented method can further be used for changing one image to look like the other, as well as generating novel imagery from scratch based on user's scribbles.",
"",
""
]
} |
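The adversarial objective discussed in the row above can be made concrete with a tiny numerical sketch. The 1-D logistic "discriminator" and shift-only "generator" below are illustrative assumptions invented for this example, not the architecture of any cited paper; the sketch only evaluates the standard GAN losses, using the non-saturating variant for the generator.

```python
import numpy as np

# GAN minimax objective:  min_G max_D  E_x[log D(x)] + E_z[log(1 - D(G(z)))]

def discriminator(x, w, b):
    """Logistic discriminator: probability that x is real."""
    return 1.0 / (1.0 + np.exp(-(w * x + b)))

def generator(z, theta):
    """Toy 1-D generator: shifts latent noise by theta."""
    return z + theta

def gan_losses(real, z, w, b, theta):
    d_real = discriminator(real, w, b)
    d_fake = discriminator(generator(z, theta), w, b)
    # Discriminator maximizes log D(x) + log(1 - D(G(z)));
    # negated here so both objectives read as losses to minimize.
    d_loss = -np.mean(np.log(d_real) + np.log(1.0 - d_fake))
    # Non-saturating generator loss: maximize log D(G(z)).
    g_loss = -np.mean(np.log(d_fake))
    return d_loss, g_loss

rng = np.random.default_rng(0)
real = rng.normal(2.0, 0.5, size=256)   # "real" data samples
z = rng.normal(0.0, 0.5, size=256)      # latent noise
d_loss, g_loss = gan_losses(real, z, w=1.0, b=-2.0, theta=0.0)
```

With the generator far from the data distribution (theta=0 vs. data mean 2.0), the generator loss is large, which is exactly the pressure that pushes outputs away from blurry, mean-seeking solutions.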
1612.00835 | 2952171659 | Recently, there have been several promising methods to generate realistic imagery from deep convolutional networks. These methods sidestep the traditional computer graphics rendering pipeline and instead generate imagery at the pixel level by learning from large collections of photos (e.g. faces or bedrooms). However, these methods are of limited utility because it is difficult for a user to control what the network produces. In this paper, we propose a deep adversarial image synthesis architecture that is conditioned on sketched boundaries and sparse color strokes to generate realistic cars, bedrooms, or faces. We demonstrate a sketch based image synthesis system which allows users to 'scribble' over the sketch to indicate preferred color for objects. Our network can then generate convincing images that satisfy both the color and the sketch constraints of user. The network is feed-forward which allows users to see the effect of their edits in real time. We compare to recent work on sketch to image synthesis and show that our approach can generate more realistic, more diverse, and more controllable outputs. The architecture is also effective at user-guided colorization of grayscale images. | Conditional GANs Instead of synthesizing images from latent vectors, several works explore GANs where the generator is conditioned on more meaningful inputs such as text @cite_1 @cite_27 , low resolution images (super-resolution) @cite_13 @cite_49 , or incomplete images (inpainting) @cite_26 @cite_3 @cite_47 . Conditional GANs have also been used to transform images into different domains such as product images @cite_20 or different artistic styles @cite_42 . Conditional GANs can also condition the discriminator on particular inputs, e.g. @cite_1 condition both the generator and discriminator on an embedding of input text. This effectively makes the discriminator more powerful. 
In this paper, only our generator is conditioned on input sketches and color strokes, leaving the discriminator to discern real vs. fake rather than to evaluate the appropriateness of an output given the particular input. | {
"cite_N": [
"@cite_26",
"@cite_42",
"@cite_1",
"@cite_3",
"@cite_27",
"@cite_49",
"@cite_47",
"@cite_13",
"@cite_20"
],
"mid": [
"2342877626",
"2951745349",
"2949999304",
"2953318193",
"2530372461",
"",
"2479644247",
"2523714292",
""
],
"abstract": [
"We present an unsupervised visual feature learning algorithm driven by context-based pixel prediction. By analogy with auto-encoders, we propose Context Encoders -- a convolutional neural network trained to generate the contents of an arbitrary image region conditioned on its surroundings. In order to succeed at this task, context encoders need to both understand the content of the entire image, as well as produce a plausible hypothesis for the missing part(s). When training context encoders, we have experimented with both a standard pixel-wise reconstruction loss, as well as a reconstruction plus an adversarial loss. The latter produces much sharper results because it can better handle multiple modes in the output. We found that a context encoder learns a representation that captures not just appearance but also the semantics of visual structures. We quantitatively demonstrate the effectiveness of our learned features for CNN pre-training on classification, detection, and segmentation tasks. Furthermore, context encoders can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods.",
"This paper proposes Markovian Generative Adversarial Networks (MGANs), a method for training generative neural networks for efficient texture synthesis. While deep neural network approaches have recently demonstrated remarkable results in terms of synthesis quality, they still come at considerable computational costs (minutes of run-time for low-res images). Our paper addresses this efficiency issue. Instead of a numerical deconvolution in previous work, we precompute a feed-forward, strided convolutional network that captures the feature statistics of Markovian patches and is able to directly generate outputs of arbitrary dimensions. Such network can directly decode brown noise to realistic texture, or photos to artistic paintings. With adversarial training, we obtain quality comparable to recent neural texture synthesis methods. As no optimization is required any longer at generation time, our run-time performance (0.25M pixel images at 25Hz) surpasses previous neural texture synthesizers by a significant margin (at least 500 times faster). We apply this idea to texture synthesis, style transfer, and video stylization.",
"Automatic synthesis of realistic images from text would be interesting and useful, but current AI systems are still far from this goal. However, in recent years generic and powerful recurrent neural network architectures have been developed to learn discriminative text feature representations. Meanwhile, deep convolutional generative adversarial networks (GANs) have begun to generate highly compelling images of specific categories, such as faces, album covers, and room interiors. In this work, we develop a novel deep architecture and GAN formulation to effectively bridge these advances in text and image model- ing, translating visual concepts from characters to pixels. We demonstrate the capability of our model to generate plausible images of birds and flowers from detailed text descriptions.",
"Modeling the distribution of natural images is a landmark problem in unsupervised learning. This task requires an image model that is at once expressive, tractable and scalable. We present a deep neural network that sequentially predicts the pixels in an image along the two spatial dimensions. Our method models the discrete probability of the raw pixel values and encodes the complete set of dependencies in the image. Architectural novelties include fast two-dimensional recurrent layers and an effective use of residual connections in deep recurrent networks. We achieve log-likelihood scores on natural images that are considerably better than the previous state of the art. Our main results also provide benchmarks on the diverse ImageNet dataset. Samples generated from the model appear crisp, varied and globally coherent.",
"Generative Adversarial Networks (GANs) have recently demonstrated the capability to synthesize compelling real-world images, such as room interiors, album covers, manga, faces, birds, and flowers. While existing models can synthesize images based on global constraints such as a class label or caption, they do not provide control over pose or object location. We propose a new model, the Generative Adversarial What-Where Network (GAWWN), that synthesizes images given instructions describing what content to draw in which location. We show high-quality 128 × 128 image synthesis on the Caltech-UCSD Birds dataset, conditioned on both informal text descriptions and also object location. Our system exposes control over both the bounding box around the bird and its constituent parts. By modeling the conditional distributions over part locations, our system also enables conditioning on arbitrary subsets of parts (e.g. only the beak and tail), yielding an efficient interface for picking part locations.",
"",
"In this paper, we propose a novel method for image inpainting based on a Deep Convolutional Generative Adversarial Network (DCGAN). We define a loss function consisting of two parts: (1) a contextual loss that preserves similarity between the input corrupted image and the recovered image, and (2) a perceptual loss that ensures a perceptually realistic output image. Given a corrupted image with missing values, we use back-propagation on this loss to map the corrupted image to a smaller latent space. The mapped vector is then passed through the generative model to predict the missing content. The proposed framework is evaluated on the CelebA and SVHN datasets for two challenging inpainting tasks with random 80 corruption and large blocky corruption. Experiments show that our method can successfully predict semantic information in the missing region and achieve pixel-level photorealism, which is impossible by almost all existing methods.",
"Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.",
""
]
} |
1612.00835 | 2952171659 | Recently, there have been several promising methods to generate realistic imagery from deep convolutional networks. These methods sidestep the traditional computer graphics rendering pipeline and instead generate imagery at the pixel level by learning from large collections of photos (e.g. faces or bedrooms). However, these methods are of limited utility because it is difficult for a user to control what the network produces. In this paper, we propose a deep adversarial image synthesis architecture that is conditioned on sketched boundaries and sparse color strokes to generate realistic cars, bedrooms, or faces. We demonstrate a sketch based image synthesis system which allows users to 'scribble' over the sketch to indicate preferred color for objects. Our network can then generate convincing images that satisfy both the color and the sketch constraints of the user. The network is feed-forward which allows users to see the effect of their edits in real time. We compare to recent work on sketch to image synthesis and show that our approach can generate more realistic, more diverse, and more controllable outputs. The architecture is also effective at user-guided colorization of grayscale images. | Controlling deep image synthesis: Several recent works share our motivation of adding user-editable control to deep image generation. Examples of control signals include 3d pose of objects @cite_29 , natural language @cite_1 , semantic attributes @cite_6 , semantic segmentation @cite_51 , and object keypoints and bounding box @cite_27 . | {
"cite_N": [
"@cite_29",
"@cite_1",
"@cite_6",
"@cite_27",
"@cite_51"
],
"mid": [
"1893585201",
"2949999304",
"2963567641",
"2530372461",
"2302243225"
],
"abstract": [
"We train a generative convolutional neural network which is able to generate images of objects given object type, viewpoint, and color. We train the network in a supervised manner on a dataset of rendered 3D chair models. Our experiments show that the network does not merely learn all images by heart, but rather finds a meaningful representation of a 3D chair model allowing it to assess the similarity of different chairs, interpolate between given viewpoints to generate the missing ones, or invent new chair styles by interpolating between chairs from the training set. We show that the network can be used to find correspondences between different chairs from the dataset, outperforming existing approaches on this task.",
"Automatic synthesis of realistic images from text would be interesting and useful, but current AI systems are still far from this goal. However, in recent years generic and powerful recurrent neural network architectures have been developed to learn discriminative text feature representations. Meanwhile, deep convolutional generative adversarial networks (GANs) have begun to generate highly compelling images of specific categories, such as faces, album covers, and room interiors. In this work, we develop a novel deep architecture and GAN formulation to effectively bridge these advances in text and image model- ing, translating visual concepts from characters to pixels. We demonstrate the capability of our model to generate plausible images of birds and flowers from detailed text descriptions.",
"This paper investigates a novel problem of generating images from visual attributes. We model the image as a composite of foreground and background and develop a layered generative model with disentangled latent variables that can be learned end-to-end using a variational auto-encoder. We experiment with natural images of faces and birds and demonstrate that the proposed models are capable of generating realistic and diverse samples with disentangled latent representations. We use a general energy minimization algorithm for posterior inference of latent variables given novel images. Therefore, the learned generative models show excellent quantitative and visual results in the tasks of attribute-conditioned image reconstruction and completion.",
"Generative Adversarial Networks (GANs) have recently demonstrated the capability to synthesize compelling real-world images, such as room interiors, album covers, manga, faces, birds, and flowers. While existing models can synthesize images based on global constraints such as a class label or caption, they do not provide control over pose or object location. We propose a new model, the Generative Adversarial What-Where Network (GAWWN), that synthesizes images given instructions describing what content to draw in which location. We show high-quality 128 × 128 image synthesis on the Caltech-UCSD Birds dataset, conditioned on both informal text descriptions and also object location. Our system exposes control over both the bounding box around the bird and its constituent parts. By modeling the conditional distributions over part locations, our system also enables conditioning on arbitrary subsets of parts (e.g. only the beak and tail), yielding an efficient interface for picking part locations.",
"Convolutional neural networks (CNNs) have proven highly effective at image synthesis and style transfer. For most users, however, using them as tools can be a challenging task due to their unpredictable behavior that goes against common intuitions. This paper introduces a novel concept to augment such generative architectures with semantic annotations, either by manually authoring pixel labels or using existing solutions for semantic segmentation. The result is a content-aware generative algorithm that offers meaningful control over the outcome. Thus, we increase the quality of images generated by avoiding common glitches, make the results look significantly more plausible, and extend the functional range of these algorithms---whether for portraits or landscapes, etc. Applications include semantic style transfer and turning doodles with few colors into masterful paintings!"
]
} |
1612.00835 | 2952171659 | Recently, there have been several promising methods to generate realistic imagery from deep convolutional networks. These methods sidestep the traditional computer graphics rendering pipeline and instead generate imagery at the pixel level by learning from large collections of photos (e.g. faces or bedrooms). However, these methods are of limited utility because it is difficult for a user to control what the network produces. In this paper, we propose a deep adversarial image synthesis architecture that is conditioned on sketched boundaries and sparse color strokes to generate realistic cars, bedrooms, or faces. We demonstrate a sketch based image synthesis system which allows users to 'scribble' over the sketch to indicate preferred color for objects. Our network can then generate convincing images that satisfy both the color and the sketch constraints of the user. The network is feed-forward which allows users to see the effect of their edits in real time. We compare to recent work on sketch to image synthesis and show that our approach can generate more realistic, more diverse, and more controllable outputs. The architecture is also effective at user-guided colorization of grayscale images. | The artistic style transfer approach of @cite_34 could also be considered a mechanism to control deep image synthesis. Their method does not 'learn' transformations end-to-end but instead uses a pre-trained network and optimizes for output images which have deep network feature (content) similar to one input image and deep network feature (style) similar to another input image. The approach does not perform well for transformations which require the synthesis of realistic detail (e.g. trying to preserve the 'content' of a sketch and the 'style' of a photograph). | {
"cite_N": [
"@cite_34"
],
"mid": [
"2475287302"
],
"abstract": [
"Rendering the semantic content of an image in different styles is a difficult image processing task. Arguably, a major limiting factor for previous approaches has been the lack of image representations that explicitly represent semantic information and, thus, allow to separate image content from style. Here we use image representations derived from Convolutional Neural Networks optimised for object recognition, which make high level image information explicit. We introduce A Neural Algorithm of Artistic Style that can separate and recombine the image content and style of natural images. The algorithm allows us to produce new images of high perceptual quality that combine the content of an arbitrary photograph with the appearance of numerous wellknown artworks. Our results provide new insights into the deep image representations learned by Convolutional Neural Networks and demonstrate their potential for high level image synthesis and manipulation."
]
} |
1612.00534 | 2745341046 | Jointly integrating aspect ratio and context has been extensively studied and shown performance improvement in traditional object detection systems such as the DPMs. It, however, has been largely ignored in deep neural network based detection systems. This paper presents a method of integrating a mixture of object models and region-based convolutional networks for accurate object detection. Each mixture component accounts for both object aspect ratio and multi-scale contextual information explicitly: (i) it exploits a mixture of tiling configurations in the RoI pooling to remedy the warping artifacts caused by a single type RoI pooling (e.g., with equally-sized 7 x 7 cells), and to respect the underlying object shapes more; (ii) it "looks from both the inside and the outside of a RoI" by incorporating contextual information at two scales: global context pooled from the whole image and local context pooled from the surrounding of a RoI. To facilitate accurate detection, this paper proposes a multi-stage detection scheme for integrating the mixture of object models, which utilizes the detection results of the model at the previous stage as the proposals for the current in both training and testing. The proposed method is called the aspect ratio and context aware region-based convolutional network (ARC-R-CNN). In experiments, ARC-R-CNN shows very competitive results with Faster R-CNN [41] and R-FCN [10] on two datasets: the PASCAL VOC and the Microsoft COCO. It obtains significantly better mAP performance using high IoU thresholds on both datasets. | For simplicity, we model objects with different shapes by their aspect ratios, similar to @cite_6 . Our method effectively relieves the warping artifacts in R-CNN style models. In this paper, our goal is to integrate aspect ratios and context in the region-based model, boosting the performance while not increasing the computation burden much. We will investigate an effective and efficient way to integrate both local and global context. | {
"cite_N": [
"@cite_6"
],
"mid": [
"2168356304"
],
"abstract": [
"We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI--SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function."
]
} |
1612.00500 | 2950714698 | Supervised (pre-)training currently yields state-of-the-art performance for representation learning for visual recognition, yet it comes at the cost of (1) intensive manual annotations and (2) an inherent restriction in the scope of data relevant for learning. In this work, we explore unsupervised feature learning from unlabeled video. We introduce a novel object-centric approach to temporal coherence that encourages similar representations to be learned for object-like regions segmented from nearby frames. Our framework relies on a Siamese-triplet network to train a deep convolutional neural network (CNN) representation. Compared to existing temporal coherence methods, our idea has the advantage of lightweight preprocessing of the unlabeled video (no tracking required) while still being able to extract object-level regions from which to learn invariances. Furthermore, as we show in results on several standard datasets, our method typically achieves substantial accuracy gains over competing unsupervised methods for image classification and retrieval tasks. | Unsupervised feature learning has a rich history and can be traced to seminal work for learning visual representations which are sparse and reconstructive @cite_12 . More recent advances include training a deep belief network by stacking layer-by-layer RBMs @cite_20 and injecting autoencoders @cite_25 . Building on this concept, multi-layer autoencoders are scaled up to large-scale unlabeled data @cite_5 , where it is shown that neurons in high layers of an unsupervised network can have high responses on semantic objects or object parts. Recently, some approaches explore the use of spatial context of images as a source of a (self-)supervisory signal for learning visual representations @cite_11 @cite_29 . In @cite_11 , the learning is driven by position prediction of context patches, while in @cite_29 , the algorithm is driven by context-based pixel prediction in images. | {
"cite_N": [
"@cite_12",
"@cite_29",
"@cite_5",
"@cite_25",
"@cite_20",
"@cite_11"
],
"mid": [
"2105464873",
"2963420272",
"2950789693",
"2110798204",
"2100495367",
"343636949"
],
"abstract": [
"The spatial receptive fields of simple cells in mammalian striate cortex have been reasonably well described physiologically and can be characterized as being localized, oriented, and ban@ass, comparable with the basis functions of wavelet transforms. Previously, we have shown that these receptive field properties may be accounted for in terms of a strategy for producing a sparse distribution of output activity in response to natural images. Here, in addition to describing this work in a more expansive fashion, we examine the neurobiological implications of sparse coding. Of particular interest is the case when the code is overcomplete--i.e., when the number of code elements is greater than the effective dimensionality of the input space. Because the basis functions are non-orthogonal and not linearly independent of each other, sparsifying the code will recruit only those basis functions necessary for representing a given input, and so the input-output function will deviate from being purely linear. These deviations from linearity provide a potential explanation for the weak forms of non-linearity observed in the response properties of cortical simple cells, and they further make predictions about the expected interactions among units in response to naturalistic stimuli. © 1997 Elsevier Science Ltd",
"We present an unsupervised visual feature learning algorithm driven by context-based pixel prediction. By analogy with auto-encoders, we propose Context Encoders – a convolutional neural network trained to generate the contents of an arbitrary image region conditioned on its surroundings. In order to succeed at this task, context encoders need to both understand the content of the entire image, as well as produce a plausible hypothesis for the missing part(s). When training context encoders, we have experimented with both a standard pixel-wise reconstruction loss, as well as a reconstruction plus an adversarial loss. The latter produces much sharper results because it can better handle multiple modes in the output. We found that a context encoder learns a representation that captures not just appearance but also the semantics of visual structures. We quantitatively demonstrate the effectiveness of our learned features for CNN pre-training on classification, detection, and segmentation tasks. Furthermore, context encoders can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods.",
"We consider the problem of building high- level, class-specific feature detectors from only unlabeled data. For example, is it possible to learn a face detector using only unlabeled images? To answer this, we train a 9-layered locally connected sparse autoencoder with pooling and local contrast normalization on a large dataset of images (the model has 1 bil- lion connections, the dataset has 10 million 200x200 pixel images downloaded from the Internet). We train this network using model parallelism and asynchronous SGD on a clus- ter with 1,000 machines (16,000 cores) for three days. Contrary to what appears to be a widely-held intuition, our experimental re- sults reveal that it is possible to train a face detector without having to label images as containing a face or not. Control experiments show that this feature detector is robust not only to translation but also to scaling and out-of-plane rotation. We also find that the same network is sensitive to other high-level concepts such as cat faces and human bod- ies. Starting with these learned features, we trained our network to obtain 15.8 accu- racy in recognizing 20,000 object categories from ImageNet, a leap of 70 relative im- provement over the previous state-of-the-art.",
"Complexity theory of circuits strongly suggests that deep architectures can be much more efficient (sometimes exponentially) than shallow architectures, in terms of computational elements required to represent some functions. Deep multi-layer neural networks have many levels of non-linearities allowing them to compactly represent highly non-linear and highly-varying functions. However, until recently it was not clear how to train such deep networks, since gradient-based optimization starting from random initialization appears to often get stuck in poor solutions. recently introduced a greedy layer-wise unsupervised learning algorithm for Deep Belief Networks (DBN), a generative model with many layers of hidden causal variables. In the context of the above optimization problem, we study this algorithm empirically and explore variants to better understand its success and extend it to cases where the inputs are continuous or where the structure of the input distribution is not revealing enough about the variable to be predicted in a supervised task. Our experiments also confirm the hypothesis that the greedy layer-wise unsupervised training strategy mostly helps the optimization, by initializing weights in a region near a good local minimum, giving rise to internal distributed representations that are high-level abstractions of the input, bringing better generalization.",
"High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such “autoencoder” networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.",
"This work explores the use of spatial context as a source of free and plentiful supervisory signal for training a rich visual representation. Given only a large, unlabeled image collection, we extract random pairs of patches from each image and train a convolutional neural net to predict the position of the second patch relative to the first. We argue that doing well on this task requires the model to learn to recognize objects and their parts. We demonstrate that the feature representation learned using this within-image context indeed captures visual similarity across images. For example, this representation allows us to perform unsupervised visual discovery of objects like cats, people, and even birds from the Pascal VOC 2011 detection dataset. Furthermore, we show that the learned ConvNet can be used in the R-CNN framework [19] and provides a significant boost over a randomly-initialized ConvNet, resulting in state-of-the-art performance among algorithms which use only Pascal-provided training set annotations."
]
} |
1612.00500 | 2950714698 | Supervised (pre-)training currently yields state-of-the-art performance for representation learning for visual recognition, yet it comes at the cost of (1) intensive manual annotations and (2) an inherent restriction in the scope of data relevant for learning. In this work, we explore unsupervised feature learning from unlabeled video. We introduce a novel object-centric approach to temporal coherence that encourages similar representations to be learned for object-like regions segmented from nearby frames. Our framework relies on a Siamese-triplet network to train a deep convolutional neural network (CNN) representation. Compared to existing temporal coherence methods, our idea has the advantage of lightweight preprocessing of the unlabeled video (no tracking required) while still being able to extract object-level regions from which to learn invariances. Furthermore, as we show in results on several standard datasets, our method typically achieves substantial accuracy gains over competing unsupervised methods for image classification and retrieval tasks. | Most existing work for learning representations from unlabeled video exploits the concept of temporal coherence. The underlying idea can be traced to the concept of slow feature analysis (SFA) @cite_24 @cite_28 , which proposes to use temporal coherence in a sequential signal as "free" supervision as discussed above. Some methods attempt to learn feature representations of video frames as a whole @cite_19 @cite_16 @cite_22 @cite_31 , while others track local patches to learn a localized representation @cite_13 @cite_18 @cite_15 . Our approach builds on the concept of temporal coherence, with the new twist of learning from localized object-centric regions in video, and without requiring tracking. | {
"cite_N": [
"@cite_18",
"@cite_22",
"@cite_15",
"@cite_28",
"@cite_24",
"@cite_19",
"@cite_31",
"@cite_16",
"@cite_13"
],
"mid": [
"",
"1699156674",
"",
"",
"2146444479",
"2145038566",
"2285336231",
"2168345951",
"219040644"
],
"abstract": [
"",
"Current state-of-the-art classification and detection algorithms rely on supervised training. In this work we study unsupervised feature learning in the context of temporally coherent video data. We focus on feature learning from unlabeled video data, using the assumption that adjacent video frames contain semantically similar information. This assumption is exploited to train a convolutional pooling auto-encoder regularized by slowness and sparsity. We establish a connection between slow feature learning to metric learning and show that the trained encoder can be used to define a more temporally and semantically coherent metric.",
"",
"",
"Invariant features of temporally varying signals are useful for analysis and classification. Slow feature analysis (SFA) is a new method for learning invariant or slowly varying features from a vectorial input signal. It is based on a nonlinear expansion of the input signal and application of principal component analysis to this expanded signal and its time derivative. It is guaranteed to find the optimal solution within a family of functions directly and can learn to extract a large number of decorrelated features, which are ordered by their degree of invariance. SFA can be applied hierarchically to process high-dimensional input signals and extract complex features. SFA is applied first to complex cell tuning properties based on simple cell output, including disparity and motion. Then more complicated input-output functions are learned by repeated application of SFA. Finally, a hierarchical network of SFA modules is presented as a simple model of the visual system. The same unstructured network can learn translation, size, rotation, contrast, or, to a lesser degree, illumination invariance for one-dimensional objects, depending on only the training stimulus. Surprisingly, only a few training objects suffice to achieve good generalization to new objects. The generated representation is suitable for object recognition. Performance degrades if the network is trained to learn multiple invariances simultaneously.",
"This work proposes a learning method for deep architectures that takes advantage of sequential data, in particular from the temporal coherence that naturally exists in unlabeled video recordings. That is, two successive frames are likely to contain the same object or objects. This coherence is used as a supervisory signal over the unlabeled data, and is used to improve the performance on a supervised task of interest. We demonstrate the effectiveness of this method on some pose invariant object and face recognition tasks.",
"How can unlabeled video augment visual learning? Existing methods perform \"slow\" feature analysis, encouraging the representations of temporally close frames to exhibit only small differences. While this standard approach captures the fact that high-level visual signals change slowly over time, it fails to capture *how* the visual content changes. We propose to generalize slow feature analysis to \"steady\" feature analysis. The key idea is to impose a prior that higher order derivatives in the learned feature space must be small. To this end, we train a convolutional neural network with a regularizer on tuples of sequential frames from unlabeled video. It encourages feature changes over time to be smooth, i.e., similar to the most recent changes. Using five diverse datasets, including unlabeled YouTube and KITTI videos, we demonstrate our method's impact on object, scene, and action recognition tasks. We further show that our features learned from unlabeled video can even surpass a standard heavily supervised pretraining approach.",
"We introduce a new type of neural network activation function based on recent physiological rate models for complex cells in visual area V1. A single-hidden-layer neural network of this kind of model achieves 1.50 error on MNIST. We also introduce an existing criterion for learning slow, decorrelated features as a pretraining strategy for image models. This pretraining strategy results in orientation-selective features, similar to the receptive fields of complex cells. With this pretraining, the same single-hidden-layer model achieves 1.34 error, even though the pretraining sample distribution is very different from the fine-tuning distribution. To implement this pretraining strategy, we derive a fast algorithm for online learning of decorrelated features such that each iteration of the algorithm runs in linear time with respect to the number of features.",
"Is strong supervision necessary for learning a good visual representation? Do we really need millions of semantically-labeled images to train a Convolutional Neural Network (CNN)? In this paper, we present a simple yet surprisingly powerful approach for unsupervised learning of CNN. Specifically, we use hundreds of thousands of unlabeled videos from the web to learn visual representations. Our key idea is that visual tracking provides the supervision. That is, two patches connected by a track should have similar visual representation in deep feature space since they probably belong to same object or object part. We design a Siamese-triplet network with a ranking loss function to train this CNN representation. Without using a single image from ImageNet, just using 100K unlabeled videos and the VOC 2012 dataset, we train an ensemble of unsupervised networks that achieves 52 mAP (no bounding box regression). This performance comes tantalizingly close to its ImageNet-supervised counterpart, an ensemble which achieves a mAP of 54.4 . We also show that our unsupervised network can perform competitively in other tasks such as surface-normal estimation."
]
} |
1612.00500 | 2950714698 | Supervised (pre-)training currently yields state-of-the-art performance for representation learning for visual recognition, yet it comes at the cost of (1) intensive manual annotations and (2) an inherent restriction in the scope of data relevant for learning. In this work, we explore unsupervised feature learning from unlabeled video. We introduce a novel object-centric approach to temporal coherence that encourages similar representations to be learned for object-like regions segmented from nearby frames. Our framework relies on a Siamese-triplet network to train a deep convolutional neural network (CNN) representation. Compared to existing temporal coherence methods, our idea has the advantage of lightweight preprocessing of the unlabeled video (no tracking required) while still being able to extract object-level regions from which to learn invariances. Furthermore, as we show in results on several standard datasets, our method typically achieves substantial accuracy gains over competing unsupervised methods for image classification and retrieval tasks. | Another way to learn a feature embedding from video is by means of a "proxy" task, solving which entails learning a good feature embedding. For example, the reconstruction and prediction of a sequence of video frames can serve as the proxy task @cite_21 @cite_2 . The idea is that in order to reconstruct past video frames or predict future frames, good feature representations must be learnt along the way. Ego-motion @cite_9 @cite_0 is another interesting proxy that has recently been adopted to learn feature embeddings. Learning the type of ego-motion that corresponds to video frame transformations entails learning good visual features, and thus proprioceptive motor signals can also act as a supervisory signal for feature learning. We offer empirical comparisons to recent such methods, and show our method surpasses them on three challenging datasets. | {
"cite_N": [
"@cite_0",
"@cite_9",
"@cite_21",
"@cite_2"
],
"mid": [
"",
"2951590555",
"2952453038",
"2950091256"
],
"abstract": [
"",
"The dominant paradigm for feature learning in computer vision relies on training neural networks for the task of object recognition using millions of hand labelled images. Is it possible to learn useful features for a diverse set of visual tasks using any other form of supervision? In biology, living organisms developed the ability of visual perception for the purpose of moving and acting in the world. Drawing inspiration from this observation, in this work we investigate if the awareness of egomotion can be used as a supervisory signal for feature learning. As opposed to the knowledge of class labels, information about egomotion is freely available to mobile agents. We show that given the same number of training images, features learnt using egomotion as supervision compare favourably to features learnt using class-label as supervision on visual tasks of scene recognition, object recognition, visual odometry and keypoint matching.",
"We use multilayer Long Short Term Memory (LSTM) networks to learn representations of video sequences. Our model uses an encoder LSTM to map an input sequence into a fixed length representation. This representation is decoded using single or multiple decoder LSTMs to perform different tasks, such as reconstructing the input sequence, or predicting the future sequence. We experiment with two kinds of input sequences - patches of image pixels and high-level representations (\"percepts\") of video frames extracted using a pretrained convolutional net. We explore different design choices such as whether the decoder LSTMs should condition on the generated output. We analyze the outputs of the model qualitatively to see how well the model can extrapolate the learned video representation into the future and into the past. We try to visualize and interpret the learned features. We stress test the model by running it on longer time scales and on out-of-domain data. We further evaluate the representations by finetuning them for a supervised learning problem - human action recognition on the UCF-101 and HMDB-51 datasets. We show that the representations help improve classification accuracy, especially when there are only a few training examples. Even models pretrained on unrelated datasets (300 hours of YouTube videos) can help action recognition performance.",
"In this paper, we propose to learn temporal embeddings of video frames for complex video analysis. Large quantities of unlabeled video data can be easily obtained from the Internet. These videos possess the implicit weak label that they are sequences of temporally and semantically coherent images. We leverage this information to learn temporal embeddings for video frames by associating frames with the temporal context that they appear in. To do this, we propose a scheme for incorporating temporal context based on past and future frames in videos, and compare this to other contextual representations. In addition, we show how data augmentation using multi-resolution samples and hard negatives helps to significantly improve the quality of the learned embeddings. We evaluate various design decisions for learning temporal embeddings, and show that our embeddings can improve performance for multiple video tasks such as retrieval, classification, and temporal order recovery in unconstrained Internet video."
]
} |
1612.00542 | 2560617289 | Mammography is the most widely used method to screen breast cancer. Because of its mostly manual nature, variability in mass appearance, and low signal-to-noise ratio, a significant number of breast masses are missed or misdiagnosed. In this work, we present how Convolutional Neural Networks can be used to directly classify pre-segmented breast masses in mammograms as benign or malignant, using a combination of transfer learning, careful pre-processing and data augmentation to overcome limited training data. We achieve state-of-the-art results on the DDSM dataset, surpassing human performance, and show interpretability of our model. | While medical images differ significantly from natural images, traditional feature engineering techniques from computer vision such as scale-invariant feature transform (SIFT) and histogram of oriented gradients (HOG) have seen use and success when applied to medical images. More recently, deep learning-based approaches using CNNs have begun to achieve impressive performance on medical tasks such as chest pathology identification in X-Ray and CT @cite_13 @cite_20 , and thoraco-abdominal lymph node detection and interstitial lung disease classification @cite_12 . | {
"cite_N": [
"@cite_13",
"@cite_12",
"@cite_20"
],
"mid": [
"1570613334",
"2253429366",
""
],
"abstract": [
"In this work, we examine the strength of deep learning approaches for pathology detection in chest radiographs. Convolutional neural networks (CNN) deep architecture classification approaches have gained popularity due to their ability to learn mid and high level image representations. We explore the ability of CNN learned from a non-medical dataset to identify different types of pathologies in chest x-rays. We tested our algorithm on a 433 image dataset. The best performance was achieved using CNN and GIST features. We obtained an area under curve (AUC) of 0.87–0.94 for the different pathologies. The results demonstrate the feasibility of detecting pathology in chest x-rays using deep learning approaches based on non-medical learning. This is a first-of-its-kind experiment that shows that Deep learning with ImageNet, a large scale non-medical image database may be a good substitute to domain specific representations, which are yet to be available, for general medical image recognition tasks.",
"Remarkable progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets and deep convolutional neural networks (CNNs). CNNs enable learning data-driven, highly representative, hierarchical image features from sufficient training data. However, obtaining datasets as comprehensively annotated as ImageNet in the medical imaging domain remains a challenge. There are currently three major techniques that successfully employ CNNs to medical image classification: training the CNN from scratch, using off-the-shelf pre-trained CNN features, and conducting unsupervised CNN pre-training with supervised fine-tuning. Another effective method is transfer learning, i.e., fine-tuning CNN models pre-trained from natural image dataset to medical image tasks. In this paper, we exploit three important, but previously understudied factors of employing deep convolutional neural networks to computer-aided detection problems. We first explore and evaluate different CNN architectures. The studied models contain 5 thousand to 160 million parameters, and vary in numbers of layers. We then evaluate the influence of dataset scale and spatial image context on performance. Finally, we examine when and why transfer learning from pre-trained ImageNet (via fine-tuning) can be useful. We study two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve the state-of-the-art performance on the mediastinal LN detection, and report the first five-fold cross-validation classification results on predicting axial CT slices with ILD categories. Our extensive empirical evaluation, CNN model analysis and valuable insights can be extended to the design of high performance CAD systems for other medical imaging tasks.",
""
]
} |
1612.00475 | 2560154784 | Due to physiological variation, patients diagnosed with the same condition may exhibit divergent, but related, responses to the same treatments. Hidden Parameter Markov Decision Processes (HiP-MDPs) tackle this transfer-learning problem by embedding these tasks into a low-dimensional space. However, the original formulation of HiP-MDP had a critical flaw: the embedding uncertainty was modeled independently of the agent's state uncertainty, requiring an unnatural training procedure in which all tasks visited every part of the state space---possible for robots that can be moved to a particular location, impossible for human patients. We update the HiP-MDP framework and extend it to more robustly develop personalized medicine strategies for HIV treatment. | The use of RL (and machine learning, in general) in the development of optimal control policies and decision making strategies in healthcare @cite_17 is gaining significant momentum as methodologies have begun to adequately account for uncertainty and variations in the problem space. There have been notable efforts made in the administration of anesthesia @cite_23 , in personalizing cancer @cite_22 and HIV therapies @cite_7 and in understanding the causality of macro events in diabetes management @cite_1 . formalized a routine to accommodate multiple sources of uncertainty in batch RL methods to better evaluate the effectiveness of treatments across subpopulations of patients. We similarly attempt to address and identify the variations across subpopulations in the development of treatment policies. We instead attempt to account for these variations while developing effective treatment policies in an approximate online fashion. | {
"cite_N": [
"@cite_22",
"@cite_7",
"@cite_1",
"@cite_23",
"@cite_17"
],
"mid": [
"",
"2135721773",
"2564513914",
"2103535498",
"2151161180"
],
"abstract": [
"",
"This paper addresses the problem of computing optimal structured treatment interruption strategies for HIV infected patients. We show that reinforcement learning may be useful to extract such strategies directly from clinical data, without the need of an accurate mathematical model of HIV infection dynamics. To support our claims, we report simulation results obtained by running a recently proposed batch-mode reinforcement learning algorithm, known as fitted Q iteration, on numerically generated data.",
"One of the key uses of causes is to explain why things happen. Explanations of specific events, like an individual's heart attack on Monday afternoon or a particular car accident, help assign responsibility and inform our future decisions. Computational methods for causal inference make use of the vast amounts of data collected by individuals to better understand their behavior and improve their health. However, most methods for explanation of specific events have provided theoretical approaches with limited applicability. In contrast we make two main contributions: an algorithm for explanation that calculates the strength of token causes, and an evaluation based on simulated data that enables objective comparison against prior methods and ground truth. We show that the approach finds the correct relationships in classic test cases (causal chains, common cause, and backup causation) and in a realistic scenario (explaining hyperglycemic episodes in a simulation of type 1 diabetes).",
"Clinical research has demonstrated the efficacy of closed-loop control of anesthesia using the bispectral index of the electroencephalogram as the controlled variable. These controllers have evolved to yield patient-specific anesthesia, which is associated with improved patient outcomes. Despite progress, the problem of patient-specific anesthesia remains unsolved. A variety of factors confound good control, including variations in human physiology, imperfect measures of drug effect, and delayed, hysteretic response to drug delivery. Reinforcement learning (RL) appears to be uniquely equipped to overcome these challenges; however, the literature offers no precedent for RL in anesthesia. To begin exploring the role RL might play in improving anesthetic care, we investigated the method's application in the delivery of patient-specific, propofol-induced hypnosis in human volunteers. When compared to performance metrics reported in the anesthesia literature, RL demonstrated patient-specific control marked by improved accuracy and stability. Furthermore, these results suggest that RL may be considered a viable alternative for solving other difficult closed-loop control problems in medicine. More rigorous clinical study, beyond the confines of controlled human volunteer studies, is needed to substantiate these findings.",
"This paper highlights the role that reinforcement learning can play in the optimization of treatment policies for chronic illnesses. Before applying any off-the-shelf reinforcement learning methods in this setting, we must first tackle a number of challenges. We outline some of these challenges and present methods for overcoming them. First, we describe a multiple imputation approach to overcome the problem of missing data. Second, we discuss the use of function approximation in the context of a highly variable observation set. Finally, we discuss approaches to summarizing the evidence in the data for recommending a particular action and quantifying the uncertainty around the Q-function of the recommended policy. We present the results of applying these methods to real clinical trial data of patients with schizophrenia."
]
} |
1612.00475 | 2560154784 | Due to physiological variation, patients diagnosed with the same condition may exhibit divergent, but related, responses to the same treatments. Hidden Parameter Markov Decision Processes (HiP-MDPs) tackle this transfer-learning problem by embedding these tasks into a low-dimensional space. However, the original formulation of HiP-MDP had a critical flaw: the embedding uncertainty was modeled independently of the agent's state uncertainty, requiring an unnatural training procedure in which all tasks visited every part of the state space---possible for robots that can be moved to a particular location, impossible for human patients. We update the HiP-MDP framework and extend it to more robustly develop personalized medicine strategies for HIV treatment. | GPs have increasingly been used to facilitate methods of RL @cite_26 @cite_0 . Recent advances in modeling dynamical systems with GPs have led to more efficient and robust formulations @cite_19 @cite_24 , most particularly in the approximation and simulation of dynamical systems. The HiP-MDP approximates the underlying dynamical system of the task through the training of a Gaussian Process dynamical model @cite_18 @cite_12 where only a small portion of the true system dynamics may be observed as is common in partially observable Markov Decision Processes (POMDP) @cite_10 . In order to facilitate the transfer between task instances we embed a latent, low-dimensional parametrization of the system dynamics with the states. By virtue of the GP @cite_11 @cite_20 , this latent embedding allows the HiP-MDP to infer across similar task instances and provide a better prediction of the currently observed system. | {
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_0",
"@cite_19",
"@cite_24",
"@cite_20",
"@cite_10",
"@cite_12",
"@cite_11"
],
"mid": [
"2095897186",
"2151268438",
"",
"2140135625",
"",
"",
"2168359464",
"",
"2169779569"
],
"abstract": [
"Rich and complex time-series data, such as those generated from engineering systems, financial markets, videos, or neural recordings are now a common feature of modern data analysis. Explaining the phenomena underlying these diverse data sets requires flexible and accurate models. In this paper, we promote Gaussian process dynamical systems as a rich model class that is appropriate for such an analysis. We present a new approximate message-passing algorithm for Bayesian state estimation and inference in Gaussian process dynamical systems, a non-parametric probabilistic generalization of commonly used state-space models. We derive our message-passing algorithm using Expectation Propagation and provide a unifying perspective on message passing in general state-space models. We show that existing Gaussian filters and smoothers appear as special cases within our inference framework, and that these existing approaches can be improved upon using iterated message passing. Using both synthetic and real-world data, we demonstrate that iterated message passing can improve inference in a wide range of tasks in Bayesian state estimation, thus leading to improved predictions and more effective decision making.",
"We exploit some useful properties of Gaussian process (GP) regression models for reinforcement learning in continuous state spaces and discrete time. We demonstrate how the GP model allows evaluation of the value function in closed form. The resulting policy iteration algorithm is demonstrated on a simple problem with a two dimensional state space. Further, we speculate that the intrinsic ability of GP models to characterise distributions of functions would allow the method to capture entire distributions over future values instead of merely their expectation, which has traditionally been the focus of much of reinforcement learning.",
"",
"In this paper, we introduce PILCO, a practical, data-efficient model-based policy search method. PILCO reduces model bias, one of the key problems of model-based reinforcement learning, in a principled way. By learning a probabilistic dynamics model and explicitly incorporating model uncertainty into long-term planning, PILCO can cope with very little data and facilitates learning from scratch in only a few trials. Policy evaluation is performed in closed form using state-of-the-art approximate inference. Furthermore, policy gradients are computed analytically for policy improvement. We report unprecedented learning efficiency on challenging and high-dimensional control tasks.",
"",
"",
"In this paper, we bring techniques from operations research to bear on the problem of choosing optimal actions in partially observable stochastic domains. We begin by introducing the theory of Markov decision processes (MDPs) and partially observable MDPs (POMDPs). We then outline a novel algorithm for solving POMDPs off-line and show how, in some cases, a finite-memory controller can be extracted from the solution to a POMDP. We conclude with a discussion of how our approach relates to previous work, the complexity of finding exact solutions to POMDPs, and of some possibilities for finding approximate solutions.",
"",
"In this paper we introduce a new underlying probabilistic model for principal component analysis (PCA). Our formulation interprets PCA as a particular Gaussian process prior on a mapping from a latent space to the observed data-space. We show that if the prior's covariance function constrains the mappings to be linear the model is equivalent to PCA, we then extend the model by considering less restrictive covariance functions which allow non-linear mappings. This more general Gaussian process latent variable model (GPLVM) is then evaluated as an approach to the visualisation of high dimensional data for three different data-sets. Additionally our non-linear algorithm can be further kernelised leading to 'twin kernel PCA' in which a mapping between feature spaces occurs."
]
} |
1612.00496 | 2950382845 | We present a method for 3D object detection and pose estimation from a single image. In contrast to current techniques that only regress the 3D orientation of an object, our method first regresses relatively stable 3D object properties using a deep convolutional neural network and then combines these estimates with geometric constraints provided by a 2D object bounding box to produce a complete 3D bounding box. The first network output estimates the 3D object orientation using a novel hybrid discrete-continuous loss, which significantly outperforms the L2 loss. The second output regresses the 3D object dimensions, which have relatively little variance compared to alternatives and can often be predicted for many object types. These estimates, combined with the geometric constraints on translation imposed by the 2D bounding box, enable us to recover a stable and accurate 3D object pose. We evaluate our method on the challenging KITTI object detection benchmark both on the official metric of 3D orientation estimation and also on the accuracy of the obtained 3D bounding boxes. Although conceptually simple, our method outperforms more complex and computationally expensive approaches that leverage semantic segmentation, instance level segmentation and flat ground priors and sub-category detection. Our discrete-continuous loss also produces state of the art results for 3D viewpoint estimation on the Pascal 3D+ dataset. | The classical problem of 6 DoF pose estimation of an object instance from a single 2D image has been considered previously as a purely geometric problem known as the Perspective-n-Point (PnP) problem. Several closed-form and iterative solutions assuming correspondences between 2D keypoints in the image and a 3D model of the object can be found in @cite_29 and references therein. Other methods focus on constructing 3D models of the object instances and then finding the 3D pose in the image that best matches the model @cite_14 @cite_4 . | {
"cite_N": [
"@cite_29",
"@cite_14",
"@cite_4"
],
"mid": [
"1991544872",
"2059412355",
"1969552659"
],
"abstract": [
"We propose a non-iterative solution to the PnP problem--the estimation of the pose of a calibrated camera from n 3D-to-2D point correspondences--whose computational complexity grows linearly with n. This is in contrast to state-of-the-art methods that are O(n^5) or even O(n^8), without being more accurate. Our method is applicable for all n ≥ 4 and handles properly both planar and non-planar configurations. Our central idea is to express the n 3D points as a weighted sum of four virtual control points. The problem then reduces to estimating the coordinates of these control points in the camera referential, which can be done in O(n) time by expressing these coordinates as weighted sum of the eigenvectors of a 12×12 matrix and solving a small constant number of quadratic equations to pick the right weights. Furthermore, if maximal precision is required, the output of the closed-form solution can be used to initialize a Gauss-Newton scheme, which improves accuracy with negligible amount of additional time. The advantages of our method are demonstrated by thorough testing on both synthetic and real-data.",
"This article introduces a novel representation for three-dimensional (3D) objects in terms of local affine-invariant descriptors of their images and the spatial relationships between the corresponding surface patches. Geometric constraints associated with different views of the same patches under affine projection are combined with a normalized representation of their appearance to guide matching and reconstruction, allowing the acquisition of true 3D affine and Euclidean models from multiple unregistered images, as well as their recognition in photographs taken from arbitrary viewpoints. The proposed approach does not require a separate segmentation stage, and it is applicable to highly cluttered scenes. Modeling and recognition results are presented.",
"We present a novel Object Recognition approach based on affine invariant regions. It actively counters the problems related to the limited repeatability of the region detectors, and the difficulty of matching, in the presence of large amounts of background clutter and particularly challenging viewing conditions. After producing an initial set of matches, the method gradually explores the surrounding image areas, recursively constructing more and more matching regions, increasingly farther from the initial ones. This process covers the object with matches, and simultaneously separates the correct matches from the wrong ones. Hence, recognition and segmentation are achieved at the same time. The approach includes a mechanism for capturing the relationships between multiple model views and exploiting these for integrating the contributions of the views at recognition time. This is based on an efficient algorithm for partitioning a set of region matches into groups lying on smooth surfaces. Integration is achieved by measuring the consistency of configurations of groups arising from different model views. Experimental results demonstrate the stronger power of the approach in dealing with extensive clutter, dominant occlusion, and large scale and viewpoint changes. Non-rigid deformations are explicitly taken into account, and the approximative contours of the object are produced. All presented techniques can extend any view-point invariant feature extractor."
]
} |
1612.00496 | 2950382845 | We present a method for 3D object detection and pose estimation from a single image. In contrast to current techniques that only regress the 3D orientation of an object, our method first regresses relatively stable 3D object properties using a deep convolutional neural network and then combines these estimates with geometric constraints provided by a 2D object bounding box to produce a complete 3D bounding box. The first network output estimates the 3D object orientation using a novel hybrid discrete-continuous loss, which significantly outperforms the L2 loss. The second output regresses the 3D object dimensions, which have relatively little variance compared to alternatives and can often be predicted for many object types. These estimates, combined with the geometric constraints on translation imposed by the 2D bounding box, enable us to recover a stable and accurate 3D object pose. We evaluate our method on the challenging KITTI object detection benchmark both on the official metric of 3D orientation estimation and also on the accuracy of the obtained 3D bounding boxes. Although conceptually simple, our method outperforms more complex and computationally expensive approaches that leverage semantic segmentation, instance level segmentation and flat ground priors and sub-category detection. Our discrete-continuous loss also produces state of the art results for 3D viewpoint estimation on the Pascal 3D+ dataset. | Recently, deep convolutional neural networks (CNN) have dramatically improved the performance of 2D object detection and several extensions have been proposed to include 3D pose estimation. In @cite_2 , R-CNN @cite_22 is used to detect objects and the resulting detected regions are passed as input to a pose estimation network. The pose network is initialized with VGG @cite_23 and fine-tuned for pose estimation using ground truth annotations from Pascal 3D+. This approach is similar to @cite_26 , with the distinction of using separate pose weights for each category and a large number of synthetic images with pose annotation ground truth for training. In @cite_15 , Poirson et al. discretize the object viewpoint and train a deep convolutional network to jointly perform viewpoint estimation and 2D detection. The network shares the pose parameter weights across all classes. In @cite_2 , Tulsiani et al. explore the relationship between coarse viewpoint estimation and keypoint detection, localization and pose estimation. In @cite_16 , a CNN is used to localize the keypoints, which are combined with their 3D coordinates from meshes to recover the pose. However, their approach required training data with annotated keypoints. | {
"cite_N": [
"@cite_26",
"@cite_22",
"@cite_23",
"@cite_2",
"@cite_15",
"@cite_16"
],
"mid": [
"1591870335",
"2102605133",
"1686810756",
"2951900634",
"2523096747",
"2949911710"
],
"abstract": [
"Object viewpoint estimation from 2D images is an essential task in computer vision. However, two issues hinder its progress: scarcity of training data with viewpoint annotations, and a lack of powerful features. Inspired by the growing availability of 3D models, we propose a framework to address both issues by combining render-based image synthesis and CNNs (Convolutional Neural Networks). We believe that 3D models have the potential in generating a large number of images of high variation, which can be well exploited by deep CNN with a high learning capacity. Towards this goal, we propose a scalable and overfit-resistant image synthesis pipeline, together with a novel CNN specifically tailored for the viewpoint estimation task. Experimentally, we show that the viewpoint estimation from our pipeline can significantly outperform state-of-the-art methods on PASCAL 3D+ benchmark.",
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.",
"In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.",
"We characterize the problem of pose estimation for rigid objects in terms of determining viewpoint to explain coarse pose and keypoint prediction to capture the finer details. We address both these tasks in two different settings - the constrained setting with known bounding boxes and the more challenging detection setting where the aim is to simultaneously detect and correctly estimate pose of objects. We present Convolutional Neural Network based architectures for these and demonstrate that leveraging viewpoint estimates can substantially improve local appearance based keypoint predictions. In addition to achieving significant improvements over state-of-the-art in the above tasks, we analyze the error modes and effect of object characteristics on performance to guide future efforts towards this goal.",
"For applications in navigation and robotics, estimating the 3D pose of objects is as important as detection. Many approaches to pose estimation rely on detecting or tracking parts or keypoints [11, 21]. In this paper we build on a recent state-of-the-art convolutional network for sliding-window detection [10] to provide detection and rough pose estimation in a single shot, without intermediate stages of detecting parts or initial bounding boxes. While not the first system to treat pose estimation as a categorization problem, this is the first attempt to combine detection and pose estimation at the same level using a deep learning approach. The key to the architecture is a deep convolutional network where scores for the presence of an object category, the offset for its location, and the approximate pose are all estimated on a regular grid of locations in the image. The resulting system is as accurate as recent work on pose estimation (42.4% 8 View mAVP on Pascal 3D+ [21]) and significantly faster (46 frames per second (FPS) on a TITAN X GPU). This approach to detection and rough pose estimation is fast and accurate enough to be widely applied as a pre-processing step for tasks including high-accuracy pose estimation, object tracking and localization, and vSLAM.",
"This paper presents a novel approach to estimating the continuous six degree of freedom (6-DoF) pose (3D translation and rotation) of an object from a single RGB image. The approach combines semantic keypoints predicted by a convolutional network (convnet) with a deformable shape model. Unlike prior work, we are agnostic to whether the object is textured or textureless, as the convnet learns the optimal representation from the available training image data. Furthermore, the approach can be applied to instance- and class-based pose recovery. Empirically, we show that the proposed approach can accurately recover the 6-DoF object pose for both instance- and class-based scenarios with a cluttered background. For class-based object pose estimation, state-of-the-art accuracy is shown on the large-scale PASCAL3D+ dataset."
]
} |
1612.00496 | 2950382845 | We present a method for 3D object detection and pose estimation from a single image. In contrast to current techniques that only regress the 3D orientation of an object, our method first regresses relatively stable 3D object properties using a deep convolutional neural network and then combines these estimates with geometric constraints provided by a 2D object bounding box to produce a complete 3D bounding box. The first network output estimates the 3D object orientation using a novel hybrid discrete-continuous loss, which significantly outperforms the L2 loss. The second output regresses the 3D object dimensions, which have relatively little variance compared to alternatives and can often be predicted for many object types. These estimates, combined with the geometric constraints on translation imposed by the 2D bounding box, enable us to recover a stable and accurate 3D object pose. We evaluate our method on the challenging KITTI object detection benchmark both on the official metric of 3D orientation estimation and also on the accuracy of the obtained 3D bounding boxes. Although conceptually simple, our method outperforms more complex and computationally expensive approaches that leverage semantic segmentation, instance level segmentation and flat ground priors and sub-category detection. Our discrete-continuous loss also produces state of the art results for 3D viewpoint estimation on the Pascal 3D+ dataset. | Several recent methods have explored 3D bounding box detection for driving scenarios and are most closely related to our method. Xiang et al. @cite_17 @cite_0 cluster the set of possible object poses into viewpoint-dependent subcategories. These subcategories are obtained by clustering 3D voxel patterns introduced previously @cite_8 ; 3D CAD models are required to learn the pattern dictionaries. 
The subcategories capture shape, viewpoint and occlusion patterns and are subsequently classified discriminatively @cite_0 using deep CNNs. Another related approach by Chen et al. @cite_1 addresses the problem by sampling 3D boxes in the physical world assuming the flat ground plane constraint. The boxes are scored using high level contextual, shape and category specific features. All of the above approaches require complicated preprocessing including high level features such as segmentation or 3D shape repositories and may not be suitable for robots with limited computational resources. | {
"cite_N": [
"@cite_0",
"@cite_8",
"@cite_1",
"@cite_17"
],
"mid": [
"2950703487",
"",
"2468368736",
"1946609740"
],
"abstract": [
"In CNN-based object detection methods, region proposal becomes a bottleneck when objects exhibit significant scale variation, occlusion or truncation. In addition, these methods mainly focus on 2D object detection and cannot estimate detailed properties of objects. In this paper, we propose subcategory-aware CNNs for object detection. We introduce a novel region proposal network that uses subcategory information to guide the proposal generating process, and a new detection network for joint detection and subcategory classification. By using subcategories related to object pose, we achieve state-of-the-art performance on both detection and pose estimation on commonly used benchmarks.",
"",
"The goal of this paper is to perform 3D object detection from a single monocular image in the domain of autonomous driving. Our method first aims to generate a set of candidate class-specific object proposals, which are then run through a standard CNN pipeline to obtain highquality object detections. The focus of this paper is on proposal generation. In particular, we propose an energy minimization approach that places object candidates in 3D using the fact that objects should be on the ground-plane. We then score each candidate box projected to the image plane via several intuitive potentials encoding semantic segmentation, contextual information, size and location priors and typical object shape. Our experimental evaluation demonstrates that our object proposal generation approach significantly outperforms all monocular approaches, and achieves the best detection performance on the challenging KITTI benchmark, among published monocular competitors.",
"Despite the great progress achieved in recognizing objects as 2D bounding boxes in images, it is still very challenging to detect occluded objects and estimate the 3D properties of multiple objects from a single image. In this paper, we propose a novel object representation, 3D Voxel Pattern (3DVP), that jointly encodes the key properties of objects including appearance, 3D shape, viewpoint, occlusion and truncation. We discover 3DVPs in a data-driven way, and train a bank of specialized detectors for a dictionary of 3DVPs. The 3DVP detectors are capable of detecting objects with specific visibility patterns and transferring the meta-data from the 3DVPs to the detected objects, such as 2D segmentation mask, 3D pose as well as occlusion or truncation boundaries. The transferred meta-data allows us to infer the occlusion relationship among objects, which in turn provides improved object recognition results. Experiments are conducted on the KITTI detection benchmark [17] and the outdoor-scene dataset [41]. We improve state-of-the-art results on car detection and pose estimation with notable margins (6% in difficult data of KITTI). We also verify the ability of our method in accurately segmenting objects from the background and localizing them in 3D."
]
} |
1612.00729 | 2560739164 | Automatic essay scoring (AES) refers to the process of scoring free text responses to given prompts, considering human grader scores as the gold standard. Writing such essays is an essential component of many language and aptitude exams. Hence, AES became an active and established area of research, and there are many proprietary systems used in real life applications today. However, not much is known about which specific linguistic features are useful for prediction and how much of this is consistent across datasets. This article addresses that by exploring the role of various linguistic features in automatic essay scoring using two publicly available datasets of non-native English essays written in test taking scenarios. The linguistic properties are modeled by encoding lexical, syntactic, discourse and error types of learner language in the feature set. Predictive models are then developed using these features on both datasets and the most predictive features are compared. While the results show that the feature set used results in good predictive models with both datasets, the question ”what are the most predictive features?” has a different answer for each dataset. | Most of the research in AES has been related to English writing owing to its widespread use and the availability of more learner corpora and language processing software for the language. However, the past half-decade saw the emergence of AES research in non-English (primarily European) languages. @cite_16 developed an AES approach for detecting Swedish language proficiency using a corpus of high-school level exams conducted nationwide in Sweden. @cite_17 described a proficiency classification approach for a publicly accessible dataset of German learner essays, based on the CEFR scale used in Europe. @cite_11 @cite_12 developed an approach for automatically predicting Estonian learner proficiency on the CEFR scale, also based on a public dataset. 
In developing the features, all the above-mentioned approaches relied on the specific properties of the language (e.g., morphology) along with features generally used in English. However, to our knowledge, AES systems developed for non-English languages have not been put to widespread use in any real life application the way English AES systems are being used (yet). | {
"cite_N": [
"@cite_16",
"@cite_12",
"@cite_11",
"@cite_17"
],
"mid": [
"2165608467",
"2336140400",
"2250727337",
""
],
"abstract": [
"We present the first system developed for automated grading of high school essays written in Swedish. The system uses standard text quality indicators and is able to compare vocabulary and grammar to large reference corpora of blog posts and newspaper articles. The system is evaluated on a corpus of 1 702 essays, each graded independently by the student’s own teacher and also in a blind re-grading process by another teacher. We show that our system’s performance is fair, given the low agreement between the two human graders, and furthermore show how it could improve efficiency in a practical setting where one seeks to identify incorrectly graded essays.",
"This paper reports on approaches for automatically predicting a learner’s language proficiency in Estonian according to the European CEFR scale. We used the morphological and POS tag information extracted from the texts written by learners. We compared classification and regression modeling for this task. Our models achieve a classification accuracy of 79% and a correlation of 0.85 when modeled as regression. After a comparison between them, we concluded that classification is more effective than regression in terms of exact error and the direction of error. Apart from this, we investigated the most predictive features for both multiclass and binary classification between groups and also explored the nature of the correlations between highly predictive features. Our results show considerable improvement in classification accuracy over previously reported results and take us a step closer towards the automated assessment of Estonian learner text.",
"We developed an approach to predict the proficiency level of Estonian language learners based on the CEFR guidelines. We performed learner classification by studying morphosyntactic variation and lexical richness in texts produced by learners of Estonian as a second language. We show that our features which exploit the rich morphology of Estonian by focusing on the nominal case and verbal mood are useful predictors for this task. We also show that re-formulating the classification problem as a multi-stage cascaded classification improves the classification accuracy. Finally, we also studied the effect of training data size on classification accuracy and found that more training data is beneficial in only some of the cases.",
""
]
} |
1612.00729 | 2560739164 | Automatic essay scoring (AES) refers to the process of scoring free text responses to given prompts, considering human grader scores as the gold standard. Writing such essays is an essential component of many language and aptitude exams. Hence, AES became an active and established area of research, and there are many proprietary systems used in real life applications today. However, not much is known about which specific linguistic features are useful for prediction and how much of this is consistent across datasets. This article addresses that by exploring the role of various linguistic features in automatic essay scoring using two publicly available datasets of non-native English essays written in test taking scenarios. The linguistic properties are modeled by encoding lexical, syntactic, discourse and error types of learner language in the feature set. Predictive models are then developed using these features on both datasets and the most predictive features are compared. While the results show that the feature set used results in good predictive models with both datasets, the question ”what are the most predictive features?” has a different answer for each dataset. | In terms of research on publicly available corpora, the current work can compare closely to @cite_8 and @cite_14 , who worked on the First Certificate of English corpus, which is one of the corpora used in this paper. In contrast with the pairwise-ranking approach used in their work, our model uses regression. While we model similar aspects of text as both these papers in our approach, the feature set described in this paper contains fewer, but denser features. It will be shown that our models achieve a comparable performance with the reported results on this dataset. 
Thus, compared to existing work on AES, this paper reports experiments with a new corpus, uses some new features that were not used in this context before, and compares the feature performance with more than one corpus. To our knowledge, this is the first multi-corpus study of the automatic essay scoring task. | {
"cite_N": [
"@cite_14",
"@cite_8"
],
"mid": [
"36556894",
"2124725212"
],
"abstract": [
"To date, few attempts have been made to develop new methods and validate existing ones for automatic evaluation of discourse coherence in the noisy domain of learner texts. We present the first systematic analysis of several methods for assessing coherence under the framework of automated assessment (AA) of learner free-text responses. We examine the predictive power of different coherence models by measuring the effect on performance when combined with an AA system that achieves competitive results, but does not use discourse coherence features, which are also strong indicators of a learner's level of attainment. Additionally, we identify new techniques that outperform previously developed ones and improve on the best published result for AA on a publically-available dataset of English learner free-text examination scripts.",
"We demonstrate how supervised discriminative machine learning techniques can be used to automate the assessment of 'English as a Second or Other Language' (ESOL) examination scripts. In particular, we use rank preference learning to explicitly model the grade relationships between scripts. A number of different features are extracted and ablation tests are used to investigate their contribution to overall performance. A comparison between regression and rank preference models further supports our method. Experimental results on the first publically available dataset show that our system can achieve levels of performance close to the upper bound for the task, as defined by the agreement between human examiners on the same corpus. Finally, using a set of 'outlier' texts, we test the validity of our model and identify cases where the model's scores diverge from that of a human examiner."
]
} |
1612.00215 | 2559991484 | Automatic image synthesis research has been rapidly growing with deep networks getting more and more expressive. In the last couple of years, we have observed images of digits, indoor scenes, birds, chairs, etc. being automatically generated. The expressive power of image generators have also been enhanced by introducing several forms of conditioning variables such as object names, sentences, bounding box and key-point locations. In this work, we propose a novel deep conditional generative adversarial network architecture that takes its strength from the semantic layout and scene attributes integrated as conditioning variables. We show that our architecture is able to generate realistic outdoor scene images under different conditions, e.g. day-night, sunny-foggy, with clear object boundaries. | As an alternative to GANs, variational autoencoders (VAEs) @cite_20 generate an image using a feed-forward convolutional decoder network and during inference the input is passed through the encoder that produces an approximate posterior distribution over the latent variables. The goal is to minimize the Euclidean distance between generated image and the posterior distribution. DRAW @cite_7 architecture combines a pair of recurrent neural networks with the VAE model for reading and writing portions of the image canvas at each time step. Given a single input image, DC-IGN @cite_31 generates new images of the same object with variations in pose and lighting and disCVAE @cite_25 conditions the image generation process with facial attributes. Finally, Pixel CNN @cite_23 and Pixel RNN @cite_19 propose to generate image pixels sequentially. | {
"cite_N": [
"@cite_7",
"@cite_19",
"@cite_23",
"@cite_31",
"@cite_25",
"@cite_20"
],
"mid": [
"1850742715",
"2953318193",
"2423557781",
"2953255770",
"2963567641",
""
],
"abstract": [
"This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye.",
"Modeling the distribution of natural images is a landmark problem in unsupervised learning. This task requires an image model that is at once expressive, tractable and scalable. We present a deep neural network that sequentially predicts the pixels in an image along the two spatial dimensions. Our method models the discrete probability of the raw pixel values and encodes the complete set of dependencies in the image. Architectural novelties include fast two-dimensional recurrent layers and an effective use of residual connections in deep recurrent networks. We achieve log-likelihood scores on natural images that are considerably better than the previous state of the art. Our main results also provide benchmarks on the diverse ImageNet dataset. Samples generated from the model appear crisp, varied and globally coherent.",
"This work explores conditional image generation with a new image density model based on the PixelCNN architecture. The model can be conditioned on any vector, including descriptive labels or tags, or latent embeddings created by other networks. When conditioned on class labels from the ImageNet database, the model is able to generate diverse, realistic scenes representing distinct animals, objects, landscapes and structures. When conditioned on an embedding produced by a convolutional network given a single image of an unseen face, it generates a variety of new portraits of the same person with different facial expressions, poses and lighting conditions. We also show that conditional PixelCNN can serve as a powerful decoder in an image autoencoder. Additionally, the gated convolutional layers in the proposed model improve the log-likelihood of PixelCNN to match the state-of-the-art performance of PixelRNN on ImageNet, with greatly reduced computational cost.",
"This paper presents the Deep Convolution Inverse Graphics Network (DC-IGN), a model that learns an interpretable representation of images. This representation is disentangled with respect to transformations such as out-of-plane rotations and lighting variations. The DC-IGN model is composed of multiple layers of convolution and de-convolution operators and is trained using the Stochastic Gradient Variational Bayes (SGVB) algorithm. We propose a training procedure to encourage neurons in the graphics code layer to represent a specific transformation (e.g. pose or light). Given a single input image, our model can generate new images of the same object with variations in pose and lighting. We present qualitative and quantitative results of the model's efficacy at learning a 3D rendering engine.",
"This paper investigates a novel problem of generating images from visual attributes. We model the image as a composite of foreground and background and develop a layered generative model with disentangled latent variables that can be learned end-to-end using a variational auto-encoder. We experiment with natural images of faces and birds and demonstrate that the proposed models are capable of generating realistic and diverse samples with disentangled latent representations. We use a general energy minimization algorithm for posterior inference of latent variables given novel images. Therefore, the learned generative models show excellent quantitative and visual results in the tasks of attribute-conditioned image reconstruction and completion.",
""
]
} |
1612.00215 | 2559991484 | Automatic image synthesis research has been rapidly growing with deep networks getting more and more expressive. In the last couple of years, we have observed images of digits, indoor scenes, birds, chairs, etc. being automatically generated. The expressive power of image generators have also been enhanced by introducing several forms of conditioning variables such as object names, sentences, bounding box and key-point locations. In this work, we propose a novel deep conditional generative adversarial network architecture that takes its strength from the semantic layout and scene attributes integrated as conditioning variables. We show that our architecture is able to generate realistic outdoor scene images under different conditions, e.g. day-night, sunny-foggy, with clear object boundaries. | Outdoor Scene Editing. As a high level image editing tool, in @cite_11 , the authors propose to train regressors that can predict the presence of attributes in novel images and develop a method that allows users to manually increase and decrease the effect of transient attributes of several outdoor scenes. They also introduce the Transient Attributes dataset, which includes images collected from the webcams viewing 101 scenes. As an alternative to regressors, a deep convolutional neural network is used in @cite_1 to predict the transient attributes of an outdoor scene. In @cite_12 , a framework is presented for inserting new objects such as pedestrians into existing photographs of street scenes. Several outdoor scene datasets such as @cite_32 @cite_2 may provide data to facilitate the outdoor scene editing task. The Cityscapes dataset @cite_26 is limited to street scenes, collected mostly to facilitate research on self-driving cars. In our study, we decided to use the ADE20K dataset @cite_32 , which provides dense segmentation of objects from indoor and outdoor scenes. | {
"cite_N": [
"@cite_26",
"@cite_1",
"@cite_32",
"@cite_2",
"@cite_12",
"@cite_11"
],
"mid": [
"2953139137",
"2404731393",
"2949341804",
"",
"2134921974",
"2083366168"
],
"abstract": [
"Visual understanding of complex urban street scenes is an enabling factor for a wide range of applications. Object detection has benefited enormously from large-scale datasets, especially in the context of deep learning. For semantic urban scene understanding, however, no current dataset adequately captures the complexity of real-world urban scenes. To address this, we introduce Cityscapes, a benchmark suite and large-scale dataset to train and test approaches for pixel-level and instance-level semantic labeling. Cityscapes is comprised of a large, diverse set of stereo video sequences recorded in streets from 50 different cities. 5000 of these images have high quality pixel-level annotations; 20000 additional images have coarse annotations to enable methods that leverage large volumes of weakly-labeled data. Crucially, our effort exceeds previous attempts in terms of dataset size, annotation richness, scene variability, and complexity. Our accompanying empirical study provides an in-depth analysis of the dataset characteristics, as well as a performance evaluation of several state-of-the-art approaches based on our benchmark.",
"We propose the use of deep convolutional neural networks to estimate the transient attributes of a scene from a single image. Transient scene attributes describe both the objective conditions, such as the weather, time of day, and the season, and subjective properties of a scene, such as whether or not the scene seems busy. Recently, convolutional neural networks have been used to achieve state-of-the-art results for many vision problems, from object detection to scene classification, but have not previously been used for estimating transient attributes. We compare several methods for adapting an existing network architecture and present state-of-the-art results on two benchmark datasets. Our method is more accurate and significantly faster than previous methods, enabling real-world applications.",
"Scene parsing, or recognizing and segmenting objects and stuff in an image, is one of the key problems in computer vision. Despite the community's efforts in data collection, there are still few image datasets covering a wide range of scenes and object categories with dense and detailed annotations for scene parsing. In this paper, we introduce and analyze the ADE20K dataset, spanning diverse annotations of scenes, objects, parts of objects, and in some cases even parts of parts. A generic network design called Cascade Segmentation Module is then proposed to enable the segmentation networks to parse a scene into stuff, objects, and object parts in a cascade. We evaluate the proposed module integrated within two existing semantic segmentation networks, yielding significant improvements for scene parsing. We further show that the scene parsing networks trained on ADE20K can be applied to a wide variety of scenes and objects.",
"",
"We present a system for inserting new objects into existing photographs by querying a vast image-based object library, pre-computed using a publicly available Internet object database. The central goal is to shield the user from all of the arduous tasks typically involved in image compositing. The user is only asked to do two simple things: 1) pick a 3D location in the scene to place a new object; 2) select an object to insert using a hierarchical menu. We pose the problem of object insertion as a data-driven, 3D-based, context-sensitive object retrieval task. Instead of trying to manipulate the object to change its orientation, color distribution, etc. to fit the new image, we simply retrieve an object of a specified class that has all the required properties (camera pose, lighting, resolution, etc) from our large object library. We present new automatic algorithms for improving object segmentation and blending, estimating true 3D object size and orientation, and estimating scene lighting conditions. We also present an intuitive user interface that makes object insertion fast and simple even for the artistically challenged.",
"We live in a dynamic visual world where the appearance of scenes changes dramatically from hour to hour or season to season. In this work we study \"transient scene attributes\" -- high level properties which affect scene appearance, such as \"snow\", \"autumn\", \"dusk\", \"fog\". We define 40 transient attributes and use crowdsourcing to annotate thousands of images from 101 webcams. We use this \"transient attribute database\" to train regressors that can predict the presence of attributes in novel images. We demonstrate a photo organization method based on predicted attributes. Finally we propose a high-level image editing method which allows a user to adjust the attributes of a scene, e.g. change a scene to be \"snowy\" or \"sunset\". To support attribute manipulation we introduce a novel appearance transfer technique which is simple and fast yet competitive with the state-of-the-art. We show that we can convincingly modify many transient attributes in outdoor scenes."
]
} |
1612.00330 | 2095062479 | Context: Today's safety critical systems are increasingly reliant on software. Software becomes responsible for most of the critical functions of systems. Many different safety analysis techniques have been developed to identify hazards of systems. FTA and FMEA are most commonly used by safety analysts. Recently, STPA has been proposed with the goal to better cope with complex systems including software. Objective: This research aimed at comparing quantitatively these three safety analysis techniques with regard to their effectiveness, applicability, understandability, ease of use and efficiency in identifying software safety requirements at the system level. Method: We conducted a controlled experiment with 21 master and bachelor students applying these three techniques to three safety-critical systems: train door control, anti-lock braking and traffic collision and avoidance. Results: The results showed that there is no statistically significant difference between these techniques in terms of applicability, understandability and ease of use, but a significant difference in terms of effectiveness and efficiency is obtained. Conclusion: We conclude that STPA seems to be an effective method to identify software safety requirements at the system level. In particular, STPA addresses more different software safety requirements than the traditional techniques FTA and FMEA, but STPA needs more time to carry out by safety analysts with little or no prior experience. | @cite_6 conducted a controlled experiment and its replication to compare two safety analysis methods: Component Integrated Fault Trees (CFT) and Fault Tree (FT) with regard to the capabilities of the safety analysis methods (such as quality of the results) and to the participants' rating of the consistency, clarity and maintainability of these methods. 
The experiment was carried out with seven academic staff members working towards their PhD and then replicated with eleven domain experts from industry. The result showed that the CFT has potential of being beneficial for employees with little or no experience in fault tree analysis. CFT can be beneficial for companies looking for a safety analysis approach for a project using model-based development. | {
"cite_N": [
"@cite_6"
],
"mid": [
"1979917213"
],
"abstract": [
"(Background) Empirical Software Engineering (SE) strives to provide empirical evidence about the pros and cons of SE approaches. This kind of knowledge becomes relevant when the issue is whether to change from a currently employed approach to a new one or not. An informed decision is required and is particularly important in the development of safety-critical systems. For example, for the safety analysis of safety-critical embedded systems, methods such as Failure Mode and Effect Analysis (FMEA) and Fault Tree Analysis (FTA) are used. With the advent of model-based systems and software development, the question arises whether safety engineering methods should also be adopted. New technologies such as Component Integrated Fault Trees (CFT) come into play. Industry demands to know the benefits of these new methods over established ones such as Fault Trees (FT). (Methods) For the purpose of comparing CFT and FT with regard to the capabilities of the safety analysis methods (such as quality of the results) and to the participants' rating of the consistency, clarity, and maintainability of the methods, we designed a comparative study as a controlled experiment using a within-subject design. The experiment was run with seven academic staff members working towards their PhD. The study was replicated with eleven domain experts from industry. (Results) Although the analysis of the tasks' solutions showed that the use of CFT did not yield a significantly different number of correct or incorrect solutions, the participants rated the modeling capacities of CFT higher in terms of model consistency, clarity, and maintainability. (Conclusion) From this first evidence, we conclude that CFT have the potential of being beneficial for companies looking for a safety analysis approach for projects using model-based development."
]
} |
1612.00330 | 2095062479 | Context: Today's safety critical systems are increasingly reliant on software. Software becomes responsible for most of the critical functions of systems. Many different safety analysis techniques have been developed to identify hazards of systems. FTA and FMEA are most commonly used by safety analysts. Recently, STPA has been proposed with the goal to better cope with complex systems including software. Objective: This research aimed at comparing quantitatively these three safety analysis techniques with regard to their effectiveness, applicability, understandability, ease of use and efficiency in identifying software safety requirements at the system level. Method: We conducted a controlled experiment with 21 master and bachelor students applying these three techniques to three safety-critical systems: train door control, anti-lock braking and traffic collision and avoidance. Results: The results showed that there is no statistically significant difference between these techniques in terms of applicability, understandability and ease of use, but a significant difference in terms of effectiveness and efficiency is obtained. Conclusion: We conclude that STPA seems to be an effective method to identify software safety requirements at the system level. In particular, STPA addresses more different software safety requirements than the traditional techniques FTA and FMEA, but STPA needs more time to be carried out by safety analysts with little or no prior experience. | @cite_13 conducted two controlled experiments to compare fault-tree based safety analysis techniques: State Event Fault Tree Analysis (SEFT) @math Dynamic Fault Tree (DFT) and SEFT @math Fault Tree combined with Markov Chains Analysis (MC). The two controlled experiments were conducted as part of two lectures with students and researchers as participants. The first experiment (SEFT @math DFT) was run with eight students and six researchers (14 subjects).
The second experiment (SEFT @math FT @math MC) was conducted with twenty-seven students. The results showed that the subjects found DFT more applicable than SEFT and SEFT more applicable than FT+MC. Also, the subjects needed less time to perform DFT or FT+MC than to perform SEFT. | {
"cite_N": [
"@cite_13"
],
"mid": [
"2041968447"
],
"abstract": [
"The capability to model dynamic aspects of safety-critical systems, such as sequence or stochastic dependence of events, is one important requirement for safety analysis techniques. State Event Fault Tree Analysis, Dynamic Fault Tree Analysis, and Fault Tree Analysis combined with Markov Chains Analysis have been developed to fulfill these requirements, but they are still not widely accepted and used in practice. In order to investigate the reasons behind this low usage, we conducted two controlled experiments. The goal of the experiments was to analyze and compare applicability and efficiency in State Event Fault Tree Analysis versus Dynamic Fault Tree Analysis and Fault Tree Analysis combined with Markov Chains Analysis. The results of both experiments show that, notwithstanding the power of State Event Fault Tree Analysis, Dynamic Fault Tree Analysis is rated by participants as more applicable and is more efficient compared to State Event Fault Tree Analysis, which, in turn, is rated as more applicable but is less efficient than Fault Tree Analysis combined with Markov Chains Analysis. Two of the reasons investigated are the complexity of the notations used and the lack of tool support. Based on these results, we suggest strategies for enhancing State Event Fault Tree Analysis to overcome its weaknesses and increase its applicability and efficiency in modeling dynamic aspects of safety-critical systems."
]
} |
1612.00212 | 2560722793 | Fully convolutional neural networks give accurate, per-pixel prediction for input images and have applications like semantic segmentation. However, a typical FCN usually requires lots of floating point computation and large run-time memory, which effectively limits its usability. We propose a method to train Bit Fully Convolution Network (BFCN), a fully convolutional neural network that has low bit-width weights and activations. Because most of its computation-intensive convolutions are accomplished between low bit-width numbers, a BFCN can be accelerated by an efficient bit-convolution implementation. On CPU, the dot product operation between two bit vectors can be reduced to bitwise operations and popcounts, which can offer much higher throughput than 32-bit multiplications and additions. To validate the effectiveness of BFCN, we conduct experiments on the PASCAL VOC 2012 semantic segmentation task and Cityscapes. Our BFCN with 1-bit weights and 2-bit activations, which runs 7.8x faster on CPU or requires less than 1% resources on FPGA, can achieve comparable performance as the 32-bit counterpart. | Semantic segmentation helps computers understand the structure of images and usually serves as a basis for other computer vision applications. Recent state-of-the-art networks for semantic segmentation are mostly fully convolutional networks @cite_16 and adopt an encoder-decoder architecture with multi-stage refinement @cite_7 . To achieve the best performance, powerful classification models are often embedded as part of the FCNs, which, together with large decoders, pushes up computational complexity. | {
"cite_N": [
"@cite_16",
"@cite_7"
],
"mid": [
"2953139137",
"360623563"
],
"abstract": [
"Visual understanding of complex urban street scenes is an enabling factor for a wide range of applications. Object detection has benefited enormously from large-scale datasets, especially in the context of deep learning. For semantic urban scene understanding, however, no current dataset adequately captures the complexity of real-world urban scenes. To address this, we introduce Cityscapes, a benchmark suite and large-scale dataset to train and test approaches for pixel-level and instance-level semantic labeling. Cityscapes is comprised of a large, diverse set of stereo video sequences recorded in streets from 50 different cities. 5000 of these images have high quality pixel-level annotations; 20000 additional images have coarse annotations to enable methods that leverage large volumes of weakly-labeled data. Crucially, our effort exceeds previous attempts in terms of dataset size, annotation richness, scene variability, and complexity. Our accompanying empirical study provides an in-depth analysis of the dataset characteristics, as well as a performance evaluation of several state-of-the-art approaches based on our benchmark.",
"We propose a novel deep architecture, SegNet, for semantic pixel wise image labelling. SegNet has several attractive properties; (i) it only requires forward evaluation of a fully learnt function to obtain smooth label predictions, (ii) with increasing depth, a larger context is considered for pixel labelling which improves accuracy, and (iii) it is easy to visualise the effect of feature activation(s) in the pixel label space at any depth. SegNet is composed of a stack of encoders followed by a corresponding decoder stack which feeds into a soft-max classification layer. The decoders help map low resolution feature maps at the output of the encoder stack to full input image size feature maps. This addresses an important drawback of recent deep learning approaches which have adopted networks designed for object categorization for pixel wise labelling. These methods lack a mechanism to map deep layer feature maps to input dimensions. They resort to ad hoc methods to upsample features, e.g. by replication. This results in noisy predictions and also restricts the number of pooling layers in order to avoid too much upsampling and thus reduces spatial context. SegNet overcomes these problems by learning to map encoder outputs to image pixel labels. We test the performance of SegNet on outdoor RGB scenes from CamVid, KITTI and indoor scenes from the NYU dataset. Our results show that SegNet achieves state-of-the-art performance even without use of additional cues such as depth, video frames or post-processing with CRF models."
]
} |
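The XNOR/popcount trick described in the BFCN abstract above can be sketched in a few lines. This is an illustrative sketch only, not the paper's implementation: it assumes length-n vectors over {-1, +1} packed into Python integers, with a set bit encoding +1 (the function and helper names are hypothetical).

```python
def bit_dot(a_bits: int, b_bits: int, n: int) -> int:
    """Dot product of two length-n vectors over {-1, +1}, each packed
    into an int (bit set = +1, bit clear = -1).

    Equal bits contribute +1 and differing bits contribute -1, so the
    dot product equals n - 2 * popcount(a XOR b): one bitwise op and
    one popcount replace n multiply-adds.
    """
    mask = (1 << n) - 1  # keep only the n packed lanes
    return n - 2 * bin((a_bits ^ b_bits) & mask).count("1")


def unpack(bits: int, n: int) -> list:
    """Unpack the bit encoding back into an explicit {-1, +1} list,
    useful for cross-checking against a plain multiply-add loop."""
    return [1 if (bits >> i) & 1 else -1 for i in range(n)]
```

Real implementations vectorize this over 64-bit machine words (XOR plus a hardware popcount instruction), which is where the claimed CPU speedup over 32-bit floating-point multiply-adds comes from.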
1612.00212 | 2560722793 | Fully convolutional neural networks give accurate, per-pixel prediction for input images and have applications like semantic segmentation. However, a typical FCN usually requires lots of floating point computation and large run-time memory, which effectively limits its usability. We propose a method to train Bit Fully Convolution Network (BFCN), a fully convolutional neural network that has low bit-width weights and activations. Because most of its computation-intensive convolutions are accomplished between low bit-width numbers, a BFCN can be accelerated by an efficient bit-convolution implementation. On CPU, the dot product operation between two bit vectors can be reduced to bitwise operations and popcounts, which can offer much higher throughput than 32-bit multiplications and additions. To validate the effectiveness of BFCN, we conduct experiments on the PASCAL VOC 2012 semantic segmentation task and Cityscapes. Our BFCN with 1-bit weights and 2-bit activations, which runs 7.8x faster on CPU or requires less than 1% resources on FPGA, can achieve comparable performance as the 32-bit counterpart. | To further refine the results from neural networks, CRFs are widely used in post-processing to improve local predictions @cite_4 by reconstructing boundaries more accurately. Since CRFs can be integrated with most methods as a post-processing step and contribute little to our main topic, they will not be discussed in this paper. | {
"cite_N": [
"@cite_4"
],
"mid": [
"1923697677"
],
"abstract": [
"Deep Convolutional Neural Networks (DCNNs) have recently shown state of the art performance in high level vision tasks, such as image classification and object detection. This work brings together methods from DCNNs and probabilistic graphical models for addressing the task of pixel-level classification (also called \"semantic image segmentation\"). We show that responses at the final layer of DCNNs are not sufficiently localized for accurate object segmentation. This is due to the very invariance properties that make DCNNs good for high level tasks. We overcome this poor localization property of deep networks by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF). Qualitatively, our \"DeepLab\" system is able to localize segment boundaries at a level of accuracy which is beyond previous methods. Quantitatively, our method sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 71.6% IOU accuracy in the test set. We show how these results can be obtained efficiently: Careful network re-purposing and a novel application of the 'hole' algorithm from the wavelet community allow dense computation of neural net responses at 8 frames per second on a modern GPU."
]
} |
1612.00212 | 2560722793 | Fully convolutional neural networks give accurate, per-pixel prediction for input images and have applications like semantic segmentation. However, a typical FCN usually requires lots of floating point computation and large run-time memory, which effectively limits its usability. We propose a method to train Bit Fully Convolution Network (BFCN), a fully convolutional neural network that has low bit-width weights and activations. Because most of its computation-intensive convolutions are accomplished between low bit-width numbers, a BFCN can be accelerated by an efficient bit-convolution implementation. On CPU, the dot product operation between two bit vectors can be reduced to bitwise operations and popcounts, which can offer much higher throughput than 32-bit multiplications and additions. To validate the effectiveness of BFCN, we conduct experiments on the PASCAL VOC 2012 semantic segmentation task and Cityscapes. Our BFCN with 1-bit weights and 2-bit activations, which runs 7.8x faster on CPU or requires less than 1% resources on FPGA, can achieve comparable performance as the 32-bit counterpart. | The recent success of residual networks has shown that very deep networks can be trained efficiently and perform better than previous networks. There have also been successful attempts @cite_9 to combine FCN and ResNet, achieving considerable improvement in semantic segmentation. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2340017589"
],
"abstract": [
"We propose a method for high-performance semantic image segmentation (or semantic pixel labelling) based on very deep residual networks, which achieves the state-of-the-art performance. A few design factors are carefully considered to this end. We make the following contributions. (i) First, we evaluate different variations of a fully convolutional residual network so as to find the best configuration, including the number of layers, the resolution of feature maps, and the size of field-of-view. Our experiments show that further enlarging the field-of-view and increasing the resolution of feature maps are typically beneficial, which however inevitably leads to a higher demand for GPU memories. To walk around the limitation, we propose a new method to simulate a high resolution network with a low resolution network, which can be applied during training and/or testing. (ii) Second, we propose an online bootstrapping method for training. We demonstrate that online bootstrapping is critically important for achieving good accuracy. (iii) Third, we apply the traditional dropout to some of the residual blocks, which further improves the performance. (iv) Finally, our method achieves the currently best mean intersection-over-union 78.3% on the PASCAL VOC 2012 dataset, as well as on the recent dataset Cityscapes."
]
} |
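The identity-shortcut idea behind the residual networks mentioned in the row above can be sketched compactly. This is a toy illustration, not the cited architecture: dense matrix products stand in for convolutions, and the function and weight names are hypothetical.

```python
import numpy as np


def relu(x):
    return np.maximum(x, 0.0)


def residual_block(x, w1, w2):
    """y = relu(x + F(x)) with residual branch F(x) = W2 . relu(W1 . x).

    The identity shortcut means the block only has to learn a residual
    correction on top of x, which keeps gradients flowing and makes
    very deep stacks trainable."""
    return relu(x + w2 @ relu(w1 @ x))
```

When the residual branch outputs zero (e.g. zero weights), the block reduces to the identity for non-negative inputs, which is the property that lets depth be added without degrading an already-trained solution.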