aid stringlengths 9 15 | mid stringlengths 7 10 | abstract stringlengths 78 2.56k | related_work stringlengths 92 1.77k | ref_abstract dict |
|---|---|---|---|---|
1709.08571 | 2758197809 | The Hessian-vector product has been utilized to find a second-order stationary solution with a strong complexity guarantee (e.g., almost linear time complexity in the problem's dimensionality). In this paper, we propose to further reduce the number of Hessian-vector products for faster non-convex optimization. Previous algorithms need to approximate the smallest eigenvalue with sufficient precision (e.g., @math ) in order to achieve a sufficiently accurate second-order stationary solution (i.e., @math ). In contrast, the proposed algorithms only need to compute the smallest eigenvector, approximating the corresponding eigenvalue up to a small power of the current gradient's norm. As a result, they can dramatically reduce the number of Hessian-vector products during the course of optimization before reaching first-order stationary points (e.g., saddle points). The key building block of the proposed algorithms is a novel updating step named the NCG step, which lets a noisy negative curvature descent compete with the gradient descent. We show that the worst-case time complexity of the proposed algorithms with their favorable prescribed accuracy requirements can match the best in the literature for achieving a second-order stationary point, but with an arguably smaller per-iteration cost. We also show that the proposed algorithms can benefit from an inexact Hessian by developing variants that accept an inexact Hessian under a mild condition while achieving the same goal. Moreover, we develop a stochastic algorithm for a finite or infinite sum non-convex optimization problem. To the best of our knowledge, the proposed stochastic algorithm is the first that converges to a second-order stationary point in high probability with a time complexity independent of the sample size and almost linear in dimensionality. | There is less work on first-order methods for non-convex optimization with second-order convergence guarantees. 
Noisy stochastic gradient methods can provide such a guarantee by adding noise to the stochastic gradient . However, their time complexity has a polynomial factor in the problem's dimensionality. Recently, @cite_5 proposed a noisy gradient method that adds noise to the iterate before evaluating the gradient in order to escape saddle points. To achieve an @math -second-order stationary point, the noisy gradient method requires @math gradient evaluations. | {
"cite_N": [
"@cite_5"
],
"mid": [
"2592651140"
],
"abstract": [
"This paper shows that a perturbed form of gradient descent converges to a second-order stationary point in a number of iterations which depends only poly-logarithmically on dimension (i.e., it is almost \"dimension-free\"). The convergence rate of this procedure matches the well-known convergence rate of gradient descent to first-order stationary points, up to log factors. When all saddle points are non-degenerate, all second-order stationary points are local minima, and our result thus shows that perturbed gradient descent can escape saddle points almost for free. Our results can be directly applied to many machine learning applications, including deep learning. As a particular concrete example of such an application, we show that our results can be used directly to establish sharp global convergence rates for matrix factorization. Our results rely on a novel characterization of the geometry around saddle points, which may be of independent interest to the non-convex optimization community."
]
} |
1709.08421 | 2759888401 | Automatically generating a summary of sports video poses the challenge of detecting interesting moments, or highlights, of a game. Traditional sports video summarization methods leverage editing conventions of broadcast sports video that facilitate the extraction of high-level semantics. However, user-generated videos are not edited, and thus traditional methods are not suitable to generate a summary. In order to solve this problem, this work proposes a novel video summarization method that uses players' actions as a cue to determine the highlights of the original video. A deep neural network-based approach is used to extract two types of action-related features and to classify video segments into interesting or uninteresting parts. The proposed method can be applied to any sports in which games consist of a succession of actions. Especially, this work considers the case of Kendo (Japanese fencing) as an example of a sport to evaluate the proposed method. The method is trained using Kendo videos with ground truth labels that indicate the video highlights. The labels are provided by annotators possessing different experience with respect to Kendo to demonstrate how the proposed method adapts to different needs. The performance of the proposed method is compared with several combinations of different features, and the results show that it outperforms previous summarization methods. | Body-joint features are widely used for human action recognition because they richly represent human motion and are robust to variability in human appearance @cite_10 . However, they miss potential cues contained in the appearance of the scene. Holistic features, which focus more on the global appearance of the scene, have also been hand-crafted for action recognition @cite_1 , from motion-energy images to silhouette-based images @cite_31 @cite_5 . 
As shown in recent works @cite_17 @cite_40 @cite_58 @cite_18 , convolutional neural networks (CNNs) have outperformed traditional methods, as they are able to extract holistic action recognition features that are more reliable and generalizable than hand-crafted features. One example is three-dimensional convolutional neural networks (3D CNNs), an extension of CNNs applied to images (2D CNNs). While 2D CNNs perform only spatial operations on a single image, 3D CNNs also perform temporal operations, preserving temporal dependencies among the input video frames @cite_17 . @cite_40 used a 3D CNN with independent subspace analysis (CNN-ISA) and a support vector machine (SVM) to recognize human actions from video. Additionally, @cite_17 designed a CNN called C3D to extract video features that were subsequently fed to an SVM for action recognition. | {
"cite_N": [
"@cite_18",
"@cite_1",
"@cite_40",
"@cite_5",
"@cite_31",
"@cite_58",
"@cite_10",
"@cite_17"
],
"mid": [
"2952005526",
"2140223865",
"1999192586",
"2139357795",
"2103290524",
"2952186347",
"2072028093",
"2952633803"
],
"abstract": [
"Two-stream Convolutional Networks (ConvNets) have shown strong performance for human action recognition in videos. Recently, Residual Networks (ResNets) have arisen as a new technique to train extremely deep architectures. In this paper, we introduce spatiotemporal ResNets as a combination of these two approaches. Our novel architecture generalizes ResNets for the spatiotemporal domain by introducing residual connections in two ways. First, we inject residual connections between the appearance and motion pathways of a two-stream architecture to allow spatiotemporal interaction between the two streams. Second, we transform pretrained image ConvNets into spatiotemporal networks by equipping these with learnable convolutional filters that are initialized as temporal residual connections and operate on adjacent feature maps in time. This approach slowly increases the spatiotemporal receptive field as the depth of the model increases and naturally integrates image ConvNet design principles. The whole model is trained end-to-end to allow hierarchical learning of complex spatiotemporal features. We evaluate our novel spatiotemporal ResNet using two widely used action recognition benchmarks where it exceeds the previous state-of-the-art.",
"In this paper we propose a unified action recognition framework fusing local descriptors and holistic features. The motivation is that the local descriptors and holistic features emphasize different aspects of actions and are suitable for the different types of action databases. The proposed unified framework is based on frame differencing, bag-of-words and feature fusion. We extract two kinds of local descriptors, i.e. 2D and 3D SIFT feature descriptors, both based on 2D SIFT interest points. We apply Zernike moments to extract two kinds of holistic features, one is based on single frames and the other is based on motion energy image. We perform action recognition experiments on the KTH and Weizmann databases, using Support Vector Machines. We apply the leave-one-out and pseudo leave-N-out setups, and compare our proposed approach with state-of-the-art results. Experiments show that our proposed approach is effective. Compared with other approaches our approach is more robust, more versatile, easier to compute and simpler to understand.",
"Previous work on action recognition has focused on adapting hand-designed local features, such as SIFT or HOG, from static images to the video domain. In this paper, we propose using unsupervised feature learning as a way to learn features directly from video data. More specifically, we present an extension of the Independent Subspace Analysis algorithm to learn invariant spatio-temporal features from unlabeled video data. We discovered that, despite its simplicity, this method performs surprisingly well when combined with deep learning techniques such as stacking and convolution to learn hierarchical representations. By replacing hand-designed features with our learned features, we achieve classification results superior to all previous published results on the Hollywood2, UCF, KTH and YouTube action recognition datasets. On the challenging Hollywood2 and YouTube action datasets we obtain 53.3% and 75.8% respectively, which are approximately 5% better than the current best published results. Further benefits of this method, such as the ease of training and the efficiency of training and prediction, will also be discussed. You can download our code and learned spatio-temporal features here: http://ai.stanford.edu/~wzou",
"Recognizing different actions with a unique approach can be a difficult task. This paper proposes a novel holistic representation of actions that we called \"action signature\". This 1D trajectory is obtained by parsing the 2D image containing the orientations of the gradient calculated on the motion feature map called motion-history image. In this way, the trajectory is a sketch representation of how the object motion varies in time. A robust statistical framework based on mixtures of von Mises distributions and dynamic programming for sequence alignment are used to compare and classify actions trajectories. The experimental results show a rather high accuracy in distinguishing quite complicated actions, such as drinking, jumping, or abandoning an object.",
"A framework for human action modeling and recognition in continuous action sequences is proposed. A star figure enclosed by a bounding convex polygon is used to effectively represent the extremities of the silhouette of a human body. Thus, human actions are recorded as a sequence of the star figure's parameters, which is then used for action modeling. To model human actions in a compact manner while characterizing their spatio-temporal patterns, star figure parameters are represented by a 2-D feature map, which is used and regarded as a spatio-temporal template. Experiments to evaluate the performance of the proposed framework show that it can recognize human actions in an efficient and effective manner.",
"We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. Our contribution is three-fold. First, we propose a two-stream ConvNet architecture which incorporates spatial and temporal networks. Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data. Finally, we show that multi-task learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification.",
"Gesture recognition is essential for human-machine interaction. In this paper we propose a method to recognize human gestures using a Kinect® depth camera. The camera views the subject in the front plane and generates a depth image of the subject in the plane towards the camera. This depth image is then used for background removal, followed by generation of the depth profile of the subject. In addition to this, the difference between subsequent frames gives the motion profile of the subject and is used for recognition of gestures. These allow the efficient use of depth camera to successfully recognize multiple human gestures. The result of a case study involving 8 gestures is shown. The system was trained using a multi class Support Vector Machine.",
"We propose a simple, yet effective approach for spatiotemporal feature learning using deep 3-dimensional convolutional networks (3D ConvNets) trained on a large scale supervised video dataset. Our findings are three-fold: 1) 3D ConvNets are more suitable for spatiotemporal feature learning compared to 2D ConvNets; 2) A homogeneous architecture with small 3x3x3 convolution kernels in all layers is among the best performing architectures for 3D ConvNets; and 3) Our learned features, namely C3D (Convolutional 3D), with a simple linear classifier outperform state-of-the-art methods on 4 different benchmarks and are comparable with current best methods on the other 2 benchmarks. In addition, the features are compact: achieving 52.8% accuracy on the UCF101 dataset with only 10 dimensions and also very efficient to compute due to the fast inference of ConvNets. Finally, they are conceptually very simple and easy to train and use."
]
} |
1709.08421 | 2759888401 | Automatically generating a summary of sports video poses the challenge of detecting interesting moments, or highlights, of a game. Traditional sports video summarization methods leverage editing conventions of broadcast sports video that facilitate the extraction of high-level semantics. However, user-generated videos are not edited, and thus traditional methods are not suitable to generate a summary. In order to solve this problem, this work proposes a novel video summarization method that uses players' actions as a cue to determine the highlights of the original video. A deep neural network-based approach is used to extract two types of action-related features and to classify video segments into interesting or uninteresting parts. The proposed method can be applied to any sports in which games consist of a succession of actions. Especially, this work considers the case of Kendo (Japanese fencing) as an example of a sport to evaluate the proposed method. The method is trained using Kendo videos with ground truth labels that indicate the video highlights. The labels are provided by annotators possessing different experience with respect to Kendo to demonstrate how the proposed method adapts to different needs. The performance of the proposed method is compared with several combinations of different features, and the results show that it outperforms previous summarization methods. | Another state-of-the-art CNN-based action recognition method employs two types of streams, namely a spatial stream and a temporal stream @cite_58 @cite_18 . Videos are decomposed into spatial and temporal components, i.e., into RGB and optical-flow representations of their frames, which are fed into two separate 3D CNNs. Each stream separately provides a score for each possible action, and the scores from the two streams are later combined to obtain a final decision. 
This architecture is supported by the two-stream hypothesis from neuroscience, according to which the human visual system comprises two distinct streams in the brain: the dorsal stream (spatial awareness and guidance of actions) and the ventral stream (object recognition and form representation) @cite_22 . | {
"cite_N": [
"@cite_18",
"@cite_58",
"@cite_22"
],
"mid": [
"2952005526",
"2952186347",
"2082627290"
],
"abstract": [
"Two-stream Convolutional Networks (ConvNets) have shown strong performance for human action recognition in videos. Recently, Residual Networks (ResNets) have arisen as a new technique to train extremely deep architectures. In this paper, we introduce spatiotemporal ResNets as a combination of these two approaches. Our novel architecture generalizes ResNets for the spatiotemporal domain by introducing residual connections in two ways. First, we inject residual connections between the appearance and motion pathways of a two-stream architecture to allow spatiotemporal interaction between the two streams. Second, we transform pretrained image ConvNets into spatiotemporal networks by equipping these with learnable convolutional filters that are initialized as temporal residual connections and operate on adjacent feature maps in time. This approach slowly increases the spatiotemporal receptive field as the depth of the model increases and naturally integrates image ConvNet design principles. The whole model is trained end-to-end to allow hierarchical learning of complex spatiotemporal features. We evaluate our novel spatiotemporal ResNet using two widely used action recognition benchmarks where it exceeds the previous state-of-the-art.",
"We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. Our contribution is three-fold. First, we propose a two-stream ConvNet architecture which incorporates spatial and temporal networks. Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data. Finally, we show that multi-task learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification.",
"Accumulating neuropsychological, electrophysiological and behavioural evidence suggests that the neural substrates of visual perception may be quite distinct from those underlying the visual control of actions. In other words, the set of object descriptions that permit identification and recognition may be computed independently of the set of descriptions that allow an observer to shape the hand appropriately to pick up an object. We propose that the ventral stream of projections from the striate cortex to the inferotemporal cortex plays the major role in the perceptual identification of objects, while the dorsal stream projecting from the striate cortex to the posterior parietal region mediates the required sensorimotor transformations for visually guided actions directed at such objects."
]
} |
1709.08421 | 2759888401 | Automatically generating a summary of sports video poses the challenge of detecting interesting moments, or highlights, of a game. Traditional sports video summarization methods leverage editing conventions of broadcast sports video that facilitate the extraction of high-level semantics. However, user-generated videos are not edited, and thus traditional methods are not suitable to generate a summary. In order to solve this problem, this work proposes a novel video summarization method that uses players' actions as a cue to determine the highlights of the original video. A deep neural network-based approach is used to extract two types of action-related features and to classify video segments into interesting or uninteresting parts. The proposed method can be applied to any sports in which games consist of a succession of actions. Especially, this work considers the case of Kendo (Japanese fencing) as an example of a sport to evaluate the proposed method. The method is trained using Kendo videos with ground truth labels that indicate the video highlights. The labels are provided by annotators possessing different experience with respect to Kendo to demonstrate how the proposed method adapts to different needs. The performance of the proposed method is compared with several combinations of different features, and the results show that it outperforms previous summarization methods. | In addition to RGB videos, other methods leverage depth maps obtained from commodity depth sensors (e.g. Microsoft Kinect) to estimate the human 3D pose for action recognition @cite_57 @cite_28 @cite_15 . The third dimension provides robustness to occlusions and variations from the camera viewpoint. | {
"cite_N": [
"@cite_57",
"@cite_15",
"@cite_28"
],
"mid": [
"2145546283",
"2295474072",
""
],
"abstract": [
"In this paper, we present a novel approach for human action recognition with histograms of 3D joint locations (HOJ3D) as a compact representation of postures. We extract the 3D skeletal joint locations from Kinect depth maps using 's method [6]. The HOJ3D computed from the action depth sequences are reprojected using LDA and then clustered into k posture visual words, which represent the prototypical poses of actions. The temporal evolutions of those visual words are modeled by discrete hidden Markov models (HMMs). In addition, due to the design of our spherical coordinate system and the robust 3D skeleton estimation from Kinect, our method demonstrates significant view invariance on our 3D action dataset. Our dataset is composed of 200 3D sequences of 10 indoor activities performed by 10 individuals in varied views. Our method is real-time and achieves superior results on the challenging 3D action dataset. We also tested our algorithm on the MSR Action 3D dataset and our algorithm outperforms [25] on most of the cases.",
"With the development of depth sensors, low latency 3D human action recognition has become increasingly important in various interaction systems, where response with minimal latency is a critical process. High latency not only significantly degrades the interaction experience of users, but also makes certain interaction systems, e.g., gesture control or electronic gaming, unattractive. In this paper, we propose a novel active skeleton representation towards low latency human action recognition . First, we encode each limb of the human skeleton into a state through a Markov random field. The active skeleton is then represented by aggregating the encoded features of individual limbs. Finally, we propose a multi-channel multiple instance learning with maximum-pattern-margin to further boost the performance of the existing model. Our method is robust in calculating features related to joint positions, and effective in handling the unsegmented sequences. Experiments on the MSR Action3D, the MSR DailyActivity3D, and the Huawei 3DLife-2013 dataset demonstrate the effectiveness of the model with the proposed novel representation, and its superiority over the state-of-the-art low latency recognition approaches.",
""
]
} |
1709.08421 | 2759888401 | Automatically generating a summary of sports video poses the challenge of detecting interesting moments, or highlights, of a game. Traditional sports video summarization methods leverage editing conventions of broadcast sports video that facilitate the extraction of high-level semantics. However, user-generated videos are not edited, and thus traditional methods are not suitable to generate a summary. In order to solve this problem, this work proposes a novel video summarization method that uses players' actions as a cue to determine the highlights of the original video. A deep neural network-based approach is used to extract two types of action-related features and to classify video segments into interesting or uninteresting parts. The proposed method can be applied to any sports in which games consist of a succession of actions. Especially, this work considers the case of Kendo (Japanese fencing) as an example of a sport to evaluate the proposed method. The method is trained using Kendo videos with ground truth labels that indicate the video highlights. The labels are provided by annotators possessing different experience with respect to Kendo to demonstrate how the proposed method adapts to different needs. The performance of the proposed method is compared with several combinations of different features, and the results show that it outperforms previous summarization methods. | Summarization of sports video focuses on extracting interesting moments (i.e., highlights) of a game. A major approach leverages editing conventions such as those present in broadcast TV programs. Editing conventions are common to almost all videos of a specific sport and allow automatic methods to extract high-level semantics @cite_33 @cite_25 . @cite_13 summarized broadcast soccer games by leveraging predefined camera angles in edited video to detect soccer field elements (e.g., goal posts). 
Similar work used slow-motion replays to determine key events in a game @cite_14 and predefined camera motion patterns to find scenes in which players scored in basketball and soccer games @cite_56 . | {
"cite_N": [
"@cite_14",
"@cite_33",
"@cite_56",
"@cite_13",
"@cite_25"
],
"mid": [
"1954922063",
"2040381230",
"2166946181",
"2141858776",
"1964660284"
],
"abstract": [
"We present a novel method for generating sports video summary highlights. Specifically, our method localizes semantically important events in sport programs by detecting slow motion replays of these events, and then generates highlights of these events at multiple levels. In our method, a hidden Markov model (HMM) is used to model slow motion replays, and an inference algorithm is introduced which computes the probability of a slow motion replay segment, and localizes the boundaries of the segment as well. An effective new feature is used in our HMM, based on a moving measure of the number of zero-crossings and the amplitudes of variations over time of video field differences. Furthermore, the method is capable of filtering out slow motion play segments in commercials. As compared with existing methods for video event detection, our method is more generic (i.e., domain independent), and has the ability to capture inherently important events.",
"In this paper, we address the problem of querying video shots based on content-based matching. Our proposed system automatically partitions a video stream into video shots that maintain continuous movements of objects. Finding video shots of the same category is not an easy task because objects in a video shot change their locations over time. Our spatio-temporal pyramid matching (STPM) is the modified spatial pyramid matching (SPM), which considers temporal information in conjunction with spatial locations to match objects in video shots. In addition, we model the mathematical condition in which temporal information contributes to match video shots. In order to improve the matching performance, dynamic features including movements of objects are considered in addition to static features such as edges of objects. In our experiments, several methods based on different feature sets and matching methods are compared, and our spatio-temporal pyramid matching performed better than existing methods in video matching for sports videos.",
"Video semantic analysis is essential in video indexing and structuring. However, due to the lack of robust and generic algorithms, most of the existing works on semantic analysis are limited to specific domains. In this paper, we present a novel hidden Markov model (HMM)-based framework as a general solution to video semantic analysis. In the proposed framework, semantics in different granularities are mapped to a hierarchical model space, which is composed of detectors and connectors. In this manner, our model decomposes a complex analysis problem into simpler subproblems during the training process and automatically integrates those subproblems for recognition. The proposed framework is not only suitable for a broad range of applications, but also capable of modeling semantics in different semantic granularities. Additionally, we also present a new motion representation scheme, which is robust to different motion vector sources. The applications of the proposed framework in basketball event detection, soccer shot classification, and volleyball sequence analysis have demonstrated the effectiveness of the proposed framework on video semantic analysis.",
"We propose a fully automatic and computationally efficient framework for analysis and summarization of soccer videos using cinematic and object-based features. The proposed framework includes some novel low-level processing algorithms, such as dominant color region detection, robust shot boundary detection, and shot classification, as well as some higher-level algorithms for goal detection, referee detection, and penalty-box detection. The system can output three types of summaries: i) all slow-motion segments in a game; ii) all goals in a game; iii) slow-motion segments classified according to object-based features. The first two types of summaries are based on cinematic features only for speedy processing, while the summaries of the last type contain higher-level semantics. The proposed framework is efficient, effective, and robust. It is efficient in the sense that there is no need to compute object-based features when cinematic features are sufficient for the detection of certain events, e.g., goals in soccer. It is effective in the sense that the framework can also employ object-based features when needed to increase accuracy (at the expense of more computation). The efficiency, effectiveness, and robustness of the proposed framework are demonstrated over a large data set, consisting of more than 13 hours of soccer video, captured in different countries and under different conditions.",
"This paper presents a novel framework for video event detection. The core of the framework is an advanced temporal analysis and multimodal data mining method that consists of three major components: low-level feature extraction, temporal pattern analysis, and multimodal data mining. One of the unique characteristics of this framework is that it offers strong generality and extensibility with the capability of exploring representative event patterns with little human interference. The framework is presented with its application to the detection of the soccer goal events over a large collection of soccer video data with various production styles"
]
} |
1709.08421 | 2759888401 | Automatically generating a summary of sports video poses the challenge of detecting interesting moments, or highlights, of a game. Traditional sports video summarization methods leverage editing conventions of broadcast sports video that facilitate the extraction of high-level semantics. However, user-generated videos are not edited, and thus traditional methods are not suitable to generate a summary. In order to solve this problem, this work proposes a novel video summarization method that uses players' actions as a cue to determine the highlights of the original video. A deep neural network-based approach is used to extract two types of action-related features and to classify video segments into interesting or uninteresting parts. The proposed method can be applied to any sports in which games consist of a succession of actions. Especially, this work considers the case of Kendo (Japanese fencing) as an example of a sport to evaluate the proposed method. The method is trained using Kendo videos with ground truth labels that indicate the video highlights. The labels are provided by annotators possessing different experience with respect to Kendo to demonstrate how the proposed method adapts to different needs. The performance of the proposed method is compared with several combinations of different features, and the results show that it outperforms previous summarization methods. | In addition to editing conventions, the structure of the sport also provides high-level semantics for summarization. Certain sports are structured in ``plays'' that are defined based on the rules of the sport and are often easily recognized in broadcast videos @cite_29 @cite_6 @cite_59 . For example, @cite_39 summarized American football games by leveraging their turn-based structure and recognizing ``down'' scenes from the video. 
Other methods used metadata in sports videos @cite_43 @cite_27 since it contains high-level descriptions (e.g., "hits" may be annotated in the metadata with their timestamps for a baseball game). A downside of these methods is that they cannot be applied to sports video without any editing conventions, structures, and metadata. Furthermore, they are based on heuristics, and thus it is difficult to generalize them to different sports. | {
"cite_N": [
"@cite_29",
"@cite_6",
"@cite_39",
"@cite_43",
"@cite_27",
"@cite_59"
],
"mid": [
"2144380653",
"2010405190",
"2155853791",
"2129175997",
"",
"1677064610"
],
"abstract": [
"This paper presents a method to recognize human actions from sequences of depth maps. Specifically, we employ an action graph to model explicitly the dynamics of the actions and a bag of 3D points to characterize a set of salient postures that correspond to the nodes in the action graph. In addition, we propose a simple, but effective projection based sampling scheme to sample the bag of 3D points from the depth maps. Experimental results have shown that over 90 recognition accuracy were achieved by sampling only about 1 3D points from the depth maps. Compared to the 2D silhouette based recognition, the recognition errors were halved. In addition, we demonstrate the potential of the bag of points posture model to deal with occlusions through simulation.",
"Summarization is an essential requirement for achieving a more compact and interesting representation of sports video contents. We propose a framework that integrates highlights into play segments and reveal why we should still retain breaks. Experimental results show that fast detections of whistle sounds, crowd excitement, and text boxes can complement existing techniques for play-breaks and highlights localization.",
"We propose a general framework for event detection and summary generation in broadcast sports video. Under this framework, important events in a class of sports are modeled by \"plays\", defined according to the semantics of the particular sport and the conventional broadcasting patterns. We propose both deterministic and probabilistic approaches for the detection of the plays. The detected plays are concatenated to generate a compact, time compressed summary of the original video. Such a summary is complete in the sense that it contains every meaningful action of the underlying game, and it also servers as a much better starting point for higher-level summarization and or analysis than the original video does. We provide experimental results on American football, baseball, and sumo wrestling.",
"Video abstraction is defined as creating a video abstract which includes only important information in the original video streams. There are two general types of video abstracts, namely the dynamic and static ones. The dynamic video abstract is a 3-dimensional representation created by temporally arranging important scenes while the static video abstract is a 2-dimensional representation created by spatially arranging only keyframes of important scenes. In this paper, we propose a unified method of automatically creating these two types of video abstracts considering the semantic content targeting especially on broadcasted sports videos. For both types of video abstracts, the proposed method firstly determines the significance of scenes. A play scene, which corresponds to a play, is considered as a scene unit of sports videos, and the significance of every play scene is determined based on the play ranks, the time the play occurred, and the number of replays. This information is extracted from the metadata, which describes the semantic content of videos and enables us to consider not only the types of plays but also their influence on the game. In addition, user's preferences are considered to personalize the video abstracts. For dynamic video abstracts, we propose three approaches for selecting the play scenes of the highest significance: the basic criterion, the greedy criterion, and the play-cut criterion. For static video abstracts, we also propose an effective display style where a user can easily access target scenes from a list of keyframes by tracing the tree structures of sports games. We experimentally verified the effectiveness of our method by comparing our results with man-made video abstracts as well as by conducting questionnaires.",
"",
"A framework for analyzing baseball videos and generation of game summary is proposed. Due to the well-defined rules of baseball games, the system efficiently detects semantic units by the domain-related knowledge, and therefore, automatically discovers the structure of a baseball game. After extracting the information changes that are caused by some semantic events on the superimposed caption, a rule-based decision tree is applied to detect meaningful events. Only three types of information, including number-of-outs, score, and base occupation status, are taken in the detection process, and thus the framework detects events and produces summarization in an efficient and effective manner. The experimental results show the effectiveness of this framework and some research opportunities about generating semantic-level summary for sports videos."
]
} |
1709.08421 | 2759888401 | Automatically generating a summary of sports video poses the challenge of detecting interesting moments, or highlights, of a game. Traditional sports video summarization methods leverage editing conventions of broadcast sports video that facilitate the extraction of high-level semantics. However, user-generated videos are not edited, and thus traditional methods are not suitable to generate a summary. In order to solve this problem, this work proposes a novel video summarization method that uses players' actions as a cue to determine the highlights of the original video. A deep neural network-based approach is used to extract two types of action-related features and to classify video segments into interesting or uninteresting parts. The proposed method can be applied to any sports in which games consist of a succession of actions. Especially, this work considers the case of Kendo (Japanese fencing) as an example of a sport to evaluate the proposed method. The method is trained using Kendo videos with ground truth labels that indicate the video highlights. The labels are provided by annotators possessing different experience with respect to Kendo to demonstrate how the proposed method adapts to different needs. The performance of the proposed method is compared with several combinations of different features, and the results show that it outperforms previous summarization methods. | Existing work also proposed several methods that are not based on heuristics. These methods leverage variations between scenes that are found in broadcast video (e.g., the close-up in a goal celebration in soccer). @cite_42 detected intensity variations in color frames to segment relevant events to summarize broadcast videos of soccer, basketball, and tennis. @cite_9 detected the extrema in the optical flow of a video to extract the frames with the highest action content and construct a summary for broadcast rugby video. 
These methods can be more generally applied to broadcast videos, but they lack high-level semantics, and thus the extracted scenes do not always correspond to the highlights of the game. | {
"cite_N": [
"@cite_9",
"@cite_42"
],
"mid": [
"2044073059",
"2076165727"
],
"abstract": [
"Non-annotated video is more common than ever and this fact leads to an emerging field called video summarization. Key frame selection using motion analysis can greatly increase the understanding of the video content by presenting a series of frames summarizing the intended video. In this paper, we present an automatic video summarization technique based on motion analysis. The proposed technique defines motion metrics estimated from two optical flow algorithms, each using two different key frame selection criteria. We conducted a subjective user study to evaluate the performance of the motion metrics. The summarization process is threshold free and experimental results have verified the effectiveness of the method.",
"An entropy-based criterion is proposed to characterize the pattern and intensity of object motion in a video sequence as a function of time. By applying a homoscedastic error model-based time series change point detection algorithm to this motion entropy curve, one is able to segment the corresponding video sequence into individual sections, each consisting of a semantically relevant event. The proposed method is tested on six hours of sports videos including basketball, soccer, and tennis. Excellent experimental results are observed."
]
} |
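The entropy-based motion criterion summarized in the second abstract above can be sketched in a few lines. This is an illustrative toy, not the papers' code: synthetic per-frame motion-magnitude samples stand in for real optical flow, and the change-point detection of the original method is replaced by simple peak picking over the entropy curve.

```python
import numpy as np

def motion_entropy(flow_mag, bins=16):
    """Shannon entropy (bits) of one frame's motion-magnitude distribution."""
    hist, _ = np.histogram(flow_mag, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Synthetic sequence: frames 10-14 contain a burst of diverse motion,
# the rest have small, near-uniform motion magnitudes.
rng = np.random.default_rng(1)
frames = [rng.uniform(0.0, 0.1, 500) for _ in range(30)]
for i in range(10, 15):
    frames[i] = rng.uniform(0.0, 1.0, 500)

curve = np.array([motion_entropy(f) for f in frames])
key_frame = int(curve.argmax())
print(key_frame)  # lands inside the high-action segment 10-14
```

As the second abstract notes, a segmentation of this entropy curve (rather than a single argmax) yields semantically relevant event boundaries; the argmax here only illustrates why high-entropy frames correspond to high action content.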
1709.07973 | 2759784938 | Despite impressive advances in simultaneous localization and mapping, dense robotic mapping remains challenging due to its inherent nature of being a high-dimensional inference problem. In this paper, we propose a dense semantic robotic mapping technique that exploits sparse Bayesian models, in particular, the relevance vector machine, for high-dimensional sequential inference. The technique is based on the principle of automatic relevance determination and produces sparse models that use a small subset of the original dense training set as the dominant basis. The resulting map posterior is continuous, and queries can be made efficiently at any resolution. Moreover, the technique has probabilistic outputs per semantic class through Bayesian inference. We evaluate the proposed relevance vector semantic map using publicly available benchmark datasets, NYU Depth V2 and KITTI; and the results show promising improvements over the state-of-the-art techniques. | Early works in dense semantic mapping back-project labels from a segmented image to the reconstructed 3D points, and assign each voxel or mesh face to the most frequent label according to a label histogram @cite_7 @cite_24 . Bayesian frameworks are also utilized to fuse labels from multiple views into a voxel-based 3D map. In @cite_1 , probabilistic segmentation outputs of multiple images obtained by random forests (RFs) are transferred into 3D and updated using a Bayesian framework. In @cite_23 , DA-RNN is proposed, which integrates a deep network for RGB-D video labeling into a dense 3D reconstruction framework built by KinectFusion. DA-RNN yields consistent semantic labeling of indoor 3D scenes; however, it is assumed that semantic labels and geometric information are independent, and therefore, the consistency largely depends on the performance of data association computed by KinectFusion. The 3D label fusion is done by updating a probability vector of semantic classes and choosing the label with the maximum probability. | {
"cite_N": [
"@cite_24",
"@cite_23",
"@cite_1",
"@cite_7"
],
"mid": [
"1971618559",
"2604365427",
"2000063120",
"1682148205"
],
"abstract": [
"In this paper we propose a robust algorithm that generates an efficient and accurate dense 3D reconstruction with associated semantic labellings. Intelligent autonomous systems require accurate 3D reconstructions for applications such as navigation and localisation. Such systems also need to recognise their surroundings in order to identify and interact with objects of interest. Considerable emphasis has been given to generating a good reconstruction but less effort has gone into generating a 3D semantic model. The inputs to our algorithm are street level stereo image pairs acquired from a camera mounted on a moving vehicle. The depth-maps, generated from the stereo pairs across time, are fused into a global 3D volume online in order to accommodate arbitrary long image sequences. The street level images are automatically labelled using a Conditional Random Field (CRF) framework exploiting stereo images, and label estimates are aggregated to annotate the 3D volume. We evaluate our approach on the KITTI odometry dataset and have manually generated ground truth for object class segmentation. Our qualitative evaluation is performed on various sequences of the dataset and we also quantify our results on a representative subset.",
"3D scene understanding is important for robots to interact with the 3D world in a meaningful way. Most previous works on 3D scene understanding focus on recognizing geometrical or semantic properties of the scene independently. In this work, we introduce Data Associated Recurrent Neural Networks (DA-RNNs), a novel framework for joint 3D scene mapping and semantic labeling. DA-RNNs use a new recurrent neural network architecture for semantic labeling on RGB-D videos. The output of the network is integrated with mapping techniques such as KinectFusion in order to inject semantic information into the reconstructed 3D scene. Experiments conducted on a real world dataset and a synthetic dataset with RGB-D videos demonstrate the ability of our method in semantic 3D scene mapping.",
"For task planning and execution in unstructured environments, a robot needs the ability to recognize and localize relevant objects. When this information is made persistent in a semantic map, it can be used, e. g., to communicate with humans. In this paper, we propose a novel approach to learning such maps. Our approach registers measurements of RGB-D cameras by means of simultaneous localization and mapping. We employ random decision forests to segment object classes in images and exploit dense depth measurements to obtain scale-invariance. Our object recognition method integrates shape and texture seamlessly. The probabilistic segmentation from multiple views is filtered in a voxel-based 3D map using a Bayesian framework. We report on the quality of our object-class segmentation method and demonstrate the benefits in accuracy when fusing multiple views in a semantic map.",
"In this paper we propose a method to generate a large scale and accurate dense 3D semantic map of street scenes. A dense 3D semantic model of the environment can significantly improve a number of robotic applications such as autonomous driving, navigation or localisation. Instead of using offline trained classifiers for semantic segmentation, our approach employs a data-driven, nonparametric method to parse scenes which easily scale to a large environment and generalise to different scenes. We use stereo image pairs collected from cameras mounted on a moving car to produce dense depth maps which are combined into a global 3D reconstruction using camera poses from stereo visual odometry. Simultaneously, 2D automatic semantic segmentation using a nonparametric scene parsing method is fused into the 3D model. Furthermore, the resultant 3D semantic model is improved with the consideration of moving objects in the scene. We demonstrate our method on the publicly available KITTI dataset and evaluate the performance against manually generated ground truth."
]
} |
1709.07973 | 2759784938 | Despite impressive advances in simultaneous localization and mapping, dense robotic mapping remains challenging due to its inherent nature of being a high-dimensional inference problem. In this paper, we propose a dense semantic robotic mapping technique that exploits sparse Bayesian models, in particular, the relevance vector machine, for high-dimensional sequential inference. The technique is based on the principle of automatic relevance determination and produces sparse models that use a small subset of the original dense training set as the dominant basis. The resulting map posterior is continuous, and queries can be made efficiently at any resolution. Moreover, the technique has probabilistic outputs per semantic class through Bayesian inference. We evaluate the proposed relevance vector semantic map using publicly available benchmark datasets, NYU Depth V2 and KITTI; and the results show promising improvements over the state-of-the-art techniques. | In @cite_11 , a Voxel-CRF model is proposed to capture the geometric and semantic relationships by constructing a CRF over the 3D volume. A CRF-over-mesh model is also proposed for semantic modeling of both indoor and outdoor scenes @cite_27 . In @cite_31 , a Kalman filter is used to transfer 2D class probabilities obtained by RFs to the 3D model, and 3D labels are further refined through a dense pairwise CRF over the point cloud. In @cite_3 , a similar RFs-CRFs framework is used together with an efficient mean-field CRF inference method to speed up the mapping process. In @cite_30 , a higher-order CRF model is used to enforce temporal label consistency by generating higher-order cliques from correspondences in an RGB-D video, which improves the precision of semantic maps. In SemanticFusion @cite_8 , a fully-connected CRF with Gaussian edge potentials is applied, incrementally updating class probability distributions. | {
"cite_N": [
"@cite_30",
"@cite_8",
"@cite_3",
"@cite_27",
"@cite_31",
"@cite_11"
],
"mid": [
"2470820246",
"2523049145",
"1577931413",
"2045587041",
"",
""
],
"abstract": [
"The wide availability of affordable RGB-D sensors changes the landscape of indoor scene analysis. Years of research on simultaneous localization and mapping (SLAM) have made it possible to merge multiple RGB-D images into a single point cloud and provide a 3D model for a complete indoor scene. However, these reconstructed models only have geometry information, not including semantic knowledge. The advancements in robot autonomy and capabilities for carrying out more complex tasks in unstructured environments can be greatly enhanced by endowing environment models with semantic knowledge. Towards this goal, we propose a novel approach to generate 3D semantic maps for an indoor scene. Our approach creates a 3D reconstructed map from a RGB-D image sequence firstly, then we jointly infer the semantic object category and structural class for each point of the global map. 12 object categories (e.g. walls, tables, chairs) and 4 structural classes (ground, structure, furniture and props) are labeled in the global map. In this way, we can totally understand both the object and structure information. In order to get semantic information, we compute semantic segmentation for each RGB-D image and merge the labeling results by a Dense Conditional Random Field. Different from previous techniques, we use temporal information and higher-order cliques to enforce the label consistency for each image labeling result. Our experiments demonstrate that temporal information and higher-order cliques are significant for the semantic mapping procedure and can improve the precision of the semantic mapping results.",
"Ever more robust, accurate and detailed mapping using visual sensing has proven to be an enabling factor for mobile robots across a wide variety of applications. For the next level of robot intelligence and intuitive user interaction, maps need to extend beyond geometry and appearance — they need to contain semantics. We address this challenge by combining Convolutional Neural Networks (CNNs) and a state-of-the-art dense Simultaneous Localization and Mapping (SLAM) system, ElasticFusion, which provides long-term dense correspondences between frames of indoor RGB-D video even during loopy scanning trajectories. These correspondences allow the CNN's semantic predictions from multiple view points to be probabilistically fused into a map. This not only produces a useful semantic 3D map, but we also show on the NYUv2 dataset that fusing multiple predictions leads to an improvement even in the 2D semantic labelling over baseline single frame predictions. We also show that for a smaller reconstruction dataset with larger variation in prediction viewpoint, the improvement over single frame segmentation increases. Our system is efficient enough to allow real-time interactive use at frame-rates of ≈25Hz.",
"In this paper, we present an efficient semantic segmentation framework for indoor scenes operating on 3D point clouds. We use the results of a Random Forest Classifier to initialize the unary potentials of a densely interconnected Conditional Random Field, for which we learn the parameters for the pairwise potentials from training data. These potentials capture and model common spatial relations between class labels, which can often be observed in indoor scenes. We evaluate our approach on the popular NYU Depth datasets, for which it achieves superior results compared to the current state of the art. Exploiting parallelization and applying an efficient CRF inference method based on mean field approximation, our framework is able to process full resolution Kinect point clouds in half a second on a regular laptop, more than twice as fast as comparable methods.",
"Semantic reconstruction of a scene is important for a variety of applications such as 3D modelling, object recognition and autonomous robotic navigation. However, most object labelling methods work in the image domain and fail to capture the information present in 3D space. In this work we propose a principled way to generate object labelling in 3D. Our method builds a triangulated meshed representation of the scene from multiple depth estimates. We then define a CRF over this mesh, which is able to capture the consistency of geometric properties of the objects present in the scene. In this framework, we are able to generate object hypotheses by combining information from multiple sources: geometric properties (from the 3D mesh), and appearance properties (from images). We demonstrate the robustness of our framework in both indoor and outdoor scenes. For indoor scenes we created an augmented version of the NYU indoor scene dataset (RGBD images) with object labelled meshes for training and evaluation. For outdoor scenes, we created ground truth object labellings for the KITTY odometry dataset (stereo image sequence). We observe a significant speed-up in the inference stage by performing labelling on the mesh, and additionally achieve higher accuracies.",
"",
""
]
} |
1709.07973 | 2759784938 | Despite impressive advances in simultaneous localization and mapping, dense robotic mapping remains challenging due to its inherent nature of being a high-dimensional inference problem. In this paper, we propose a dense semantic robotic mapping technique that exploits sparse Bayesian models, in particular, the relevance vector machine, for high-dimensional sequential inference. The technique is based on the principle of automatic relevance determination and produces sparse models that use a small subset of the original dense training set as the dominant basis. The resulting map posterior is continuous, and queries can be made efficiently at any resolution. Moreover, the technique has probabilistic outputs per semantic class through Bayesian inference. We evaluate the proposed relevance vector semantic map using publicly available benchmark datasets, NYU Depth V2 and KITTI; and the results show promising improvements over the state-of-the-art techniques. | Semantic mapping for outdoor scenes is important for robot applications such as autonomous driving. In @cite_20 , a 3D semantic reconstruction approach for outdoor scenes using 3D CRFs is proposed. However, to achieve large-scale dense 3D maps, memory and computational efficiency can cause a bottleneck. In @cite_26 , a memory-friendly hash-based 3D volumetric representation and a CRF are used for incremental dense semantic mapping of large scenes with near real-time processing time. Semantic Octree @cite_15 constructs a higher-order CRF model over voxels in OctoMap @cite_33 to obtain a multi-resolution 3D semantic map representation; higher-order cliques are naturally defined as internal nodes in the hierarchical octree data structure. In @cite_28 , a similar CRF model together with 3D scrolling occupancy grids is proposed, and the reported results on the KITTI dataset @cite_19 are promising. | {
"cite_N": [
"@cite_26",
"@cite_33",
"@cite_28",
"@cite_19",
"@cite_15",
"@cite_20"
],
"mid": [
"",
"2133844819",
"2736616937",
"2150066425",
"1595654559",
""
],
"abstract": [
"",
"Three-dimensional models provide a volumetric representation of space which is important for a variety of robotic applications including flying robots and robots that are equipped with manipulators. In this paper, we present an open-source framework to generate volumetric 3D environment models. Our mapping approach is based on octrees and uses probabilistic occupancy estimation. It explicitly represents not only occupied space, but also free and unknown areas. Furthermore, we propose an octree map compression method that keeps the 3D models compact. Our framework is available as an open-source C++ library and has already been successfully applied in several robotics projects. We present a series of experimental results carried out with real robots and on publicly available real-world datasets. The results demonstrate that our approach is able to update the representation efficiently and models the data consistently while keeping the memory requirement at a minimum.",
"Semantic 3D mapping can be used for many applications such as robot navigation and virtual interaction. In recent years, there has been great progress in semantic segmentation and geometric 3D mapping. However, it is still challenging to combine these two tasks for accurate and large-scale semantic mapping from images. In the paper, we propose an incremental and (near) real-time semantic mapping system. A 3D scrolling occupancy grid map is built to represent the world, which is memory and computationally efficient and bounded for large scale environments. We utilize the CNN segmentation as prior prediction and further optimize 3D grid labels through a novel CRF model. Superpixels are utilized to enforce smoothness and form robust P N high order potential. An efficient mean field inference is developed for the graph optimization. We evaluate our system on the KITTI dataset and improve the segmentation accuracy by 10 over existing systems.",
"Today, visual recognition systems are still rarely employed in robotics applications. Perhaps one of the main reasons for this is the lack of demanding benchmarks that mimic such scenarios. In this paper, we take advantage of our autonomous driving platform to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry SLAM and 3D object detection. Our recording platform is equipped with four high resolution video cameras, a Velodyne laser scanner and a state-of-the-art localization system. Our benchmarks comprise 389 stereo and optical flow image pairs, stereo visual odometry sequences of 39.2 km length, and more than 200k 3D object annotations captured in cluttered scenarios (up to 15 cars and 30 pedestrians are visible per image). Results from state-of-the-art algorithms reveal that methods ranking high on established datasets such as Middlebury perform below average when being moved outside the laboratory to the real world. Our goal is to reduce this bias by providing challenging benchmarks with novel difficulties to the computer vision community. Our benchmarks are available online at: www.cvlibs.net datasets kitti",
"On the one hand, mainly within the computer vision community, multi-resolution image labelling problems with pixel, super-pixel and object levels, have made great progress towards the modelling of holistic scene understanding. On the other hand, mainly within the robotics and graphics communities, multi-resolution 3D representations of the world have matured to be efficient and accurate. In this paper we bring together the two hands and move towards the new direction of unified recognition, reconstruction and representation. We tackle the problem by embedding an octree into a hierarchical robust PN Markov Random Field. This allows us to jointly infer the multi-resolution 3D volume along with the object-class labels, all within the constraints of an octree data-structure. The octree representation is chosen as this data-structure is efficient for further processing such as dynamic updates, data compression, and surface reconstruction. We perform experiments in inferring our semantic octree on the The kitti Vision Benchmark Suite in order to demonstrate its efficacy.",
""
]
} |
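The probabilistic occupancy estimation that OctoMap (@cite_33) builds on can be illustrated with a minimal per-voxel log-odds update. The sensor-model probabilities and clamping bounds below are illustrative assumptions, not values taken from the abstracts above.

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

# Illustrative inverse sensor model and clamping bounds (assumed values)
L_HIT, L_MISS = logit(0.7), logit(0.4)
L_MIN, L_MAX = logit(0.12), logit(0.97)

def update(l, hit):
    """Fuse one beam measurement into a voxel's log-odds occupancy, with clamping."""
    l += L_HIT if hit else L_MISS
    return max(L_MIN, min(L_MAX, l))

def prob(l):
    """Convert log-odds back to occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))

l = 0.0  # uniform prior, p = 0.5
for hit in [True, True, False, True]:
    l = update(l, hit)
print(round(prob(l), 3))  # occupancy after three hits and one miss
```

Clamping keeps the log-odds bounded, which is what lets OctoMap compress stable voxels and stay responsive to changes; the octree itself then stores these per-node values at multiple resolutions.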
1709.07973 | 2759784938 | Despite impressive advances in simultaneous localization and mapping, dense robotic mapping remains challenging due to its inherent nature of being a high-dimensional inference problem. In this paper, we propose a dense semantic robotic mapping technique that exploits sparse Bayesian models, in particular, the relevance vector machine, for high-dimensional sequential inference. The technique is based on the principle of automatic relevance determination and produces sparse models that use a small subset of the original dense training set as the dominant basis. The resulting map posterior is continuous, and queries can be made efficiently at any resolution. Moreover, the technique has probabilistic outputs per semantic class through Bayesian inference. We evaluate the proposed relevance vector semantic map using publicly available benchmark datasets, NYU Depth V2 and KITTI; and the results show promising improvements over the state-of-the-art techniques. | A common feature of the works mentioned earlier is the discretization of the space prior to map inference, which means that, once the map is inferred, predictions cannot be computed at arbitrary points. In this paper, we propose a novel and alternative solution for the problem of dense 3D map building that is continuous and at the same time sparse. RVSM incrementally learns relevance vectors, which form the dominant basis in the data, and builds a sparse Bayesian model of the 3D map. As a result, predictions can be made efficiently at any desired location. The training process, which is often the more expensive step, is accelerated by utilizing the sequential sparse Bayesian learning algorithm in @cite_22 . We evaluate RVSM using the NYU Depth V2 (NYUDv2) @cite_6 and KITTI datasets and compare the achieved results with the recent works mentioned here. | {
"cite_N": [
"@cite_22",
"@cite_6"
],
"mid": [
"1633751774",
"125693051"
],
"abstract": [
"The ‘sparse Bayesian’ modelling approach, as exemplified by the ‘relevance vector machine’, enables sparse classification and regression functions to be obtained by linearly-weighting a small number of fixed basis functions from a large dictionary of potential candidates. Such a model conveys a number of advantages over the related and very popular ‘support vector machine’, but the necessary ‘training’ procedure — optimisation of the marginal likelihood function — is typically much slower. We describe a new and highly accelerated algorithm which exploits recently-elucidated properties of the marginal likelihood function to enable maximisation via a principled and efficient sequential addition and deletion of candidate basis functions.",
"We present an approach to interpret the major surfaces, objects, and support relations of an indoor scene from an RGBD image. Most existing work ignores physical interactions or is applied only to tidy rooms and hallways. Our goal is to parse typical, often messy, indoor scenes into floor, walls, supporting surfaces, and object regions, and to recover support relationships. One of our main interests is to better understand how 3D cues can best inform a structured 3D interpretation. We also contribute a novel integer programming formulation to infer physical support relations. We offer a new dataset of 1449 RGBD images, capturing 464 diverse indoor scenes, with detailed annotations. Our experiments demonstrate our ability to infer support relations in complex scenes and verify that our 3D scene cues and inferred support lead to better object segmentation."
]
} |
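The automatic-relevance-determination principle behind the relevance vector map above can be sketched with scikit-learn's `ARDRegression`, used here as a stand-in for the paper's own sequential sparse Bayesian learner (@cite_22): per-weight precision hyperparameters drive irrelevant basis weights to zero, leaving a sparse set of "relevant" features.

```python
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
# Only two of the ten features generate the target; ARD should prune the rest.
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.01 * rng.normal(size=200)

model = ARDRegression()
model.fit(X, y)

# Surviving basis: coefficients whose posterior mean was not driven to ~0.
relevant = np.flatnonzero(np.abs(model.coef_) > 0.1)
print(relevant)
```

In the mapping setting, the retained basis functions play the role of the relevance vectors: a small subset of the dense training set that supports continuous, resolution-free queries.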
1709.07862 | 2757690415 | We apply sequence-to-sequence model to mitigate the impact of speech recognition errors on open domain end-to-end dialog generation. We cast the task as a domain adaptation problem where ASR transcriptions and original text are in two different domains. In this paper, our proposed model includes two individual encoders for each domain data and make their hidden states similar to ensure the decoder predict the same dialog text. The method shows that the sequence-to-sequence model can learn the ASR transcriptions and original text pair having the same meaning and eliminate the speech recognition errors. Experimental results on Cornell movie dialog dataset demonstrate that the domain adaption system help the spoken dialog system generate more similar responses with the original text answers. | Different approaches have been explored to address this problem in distinct applications. @cite_7 harnessed the speech translation task by jointly learning ASR and machine translation to optimize bilingual evaluation understudy (BLEU) scores @cite_11 directly. This method alleviates the issue that ASR parameters optimal for minimizing the traditional word error rate lead only to sub-optimal end-to-end performance. The same idea works for spoken content retrieval tasks: by modifying ASR training @cite_3 , the performance of retrieval-based models can be improved. However, all of these methods require modifying ASR modules. In this paper, we focus on mitigating ASR errors given a fixed ASR system. Without modifying ASR modules, interactive error recovery has been applied to deal with errors in a speech-to-speech translation system. @cite_9 employed conditional random field (CRF) models to detect ASR errors and attempted to resolve them by eliciting user feedback. In the abstractive headline generation task for spoken content, @cite_18 proposed treating ASR errors as a probability distribution.
The work applied an attentive RNN to incorporate ASR error parameters into the attention mechanism. | {
"cite_N": [
"@cite_18",
"@cite_7",
"@cite_9",
"@cite_3",
"@cite_11"
],
"mid": [
"2963585309",
"2172130388",
"",
"1984076147",
"2101105183"
],
"abstract": [
"Headline generation for spoken content is important since spoken content is difficult to be shown on the screen and browsed by the user. It is a special type of abstractive summarization, for which the summaries are generated word by word from scratch without using any part of the original content. Many deep learning approaches for headline generation from text document have been proposed recently, all requiring huge quantities of training data, which is difficult for spoken document summarization. In this paper, we propose an ASR error modeling approach to learn the underlying structure of ASR error patterns and incorporate this model in an Attentive Recurrent Neural Network (ARNN) architecture. In this way, the model for abstractive headline generation for spoken content can be learned from abundant text data and the ASR data for some recognizers. Experiments showed very encouraging results and verified that the proposed ASR error model works well even when the input spoken content is recognized by a recognizer very different from the one the model learned from.",
"Automatic speech recognition (ASR) is an enabling technology for a wide range of information processing applications including speech translation, voice search (i.e., information retrieval with speech input), and conversational understanding. In these speech-centric applications, the output of ASR as “noisy” text is fed into down-stream processing systems to accomplish the designated tasks of translation, information retrieval, or natural language understanding, etc. In conventional applications, the ASR model as a sub-system is usually trained without considering the down-stream systems. This often leads to sub-optimal end-to-end performance. In this paper, we propose a unifying end-to-end optimization framework in which the model parameters in all sub-systems including ASR are learned by Extended Baum-Welch (EBW) algorithms via optimizing the criteria directly tied to the end-to-end performance measure. We demonstrate the effectiveness of the proposed approach on a speech translation task using the spoken language translation benchmark test of IWSLT. Our experimental results show that the proposed method leads to significant improvement of translation quality over the conventional techniques based on separate modular sub-system design. We also analyze the EBW-based optimization algorithms employed in our work and discuss its relationship with other popular optimization techniques.",
"",
"Spoken content retrieval refers to directly indexing and retrieving spoken content based on the audio rather than text descriptions. This potentially eliminates the requirement of producing text descriptions for multimedia content for indexing and retrieval purposes, and is able to precisely locate the exact time the desired information appears in the multimedia. Spoken content retrieval has been very successfully achieved with the basic approach of cascading automatic speech recognition (ASR) with text information retrieval: after the spoken content is transcribed into text or lattice format, a text retrieval engine searches over the ASR output to find desired information. This framework works well when the ASR accuracy is relatively high, but becomes less adequate when more challenging real-world scenarios are considered, since retrieval performance depends heavily on ASR accuracy. This challenge leads to the emergence of another approach to spoken content retrieval: to go beyond the basic framework of cascading ASR with text retrieval in order to have retrieval performances that are less dependent on ASR accuracy. This overview article is intended to provide a thorough overview of the concepts, principles, approaches, and achievements of major technical contributions along this line of investigation. 
This includes five major directions: 1) Modified ASR for Retrieval Purposes: cascading ASR with text retrieval, but the ASR is modified or optimized for spoken content retrieval purposes; 2) Exploiting the Information not present in ASR outputs: to try to utilize the information in speech signals inevitably lost when transcribed into phonemes and words; 3) Directly Matching at the Acoustic Level without ASR: for spoken queries, the signals can be directly matched at the acoustic level, rather than at the phoneme or word levels, bypassing all ASR issues; 4) Semantic Retrieval of Spoken Content: trying to retrieve spoken content that is semantically related to the query, but not necessarily including the query terms themselves; 5) Interactive Retrieval and Efficient Presentation of the Retrieved Objects: with efficient presentation of the retrieved objects, an interactive retrieval process incorporating user actions may produce better retrieval results and user experiences.",
"Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations."
]
} |
1709.07862 | 2757690415 | We apply a sequence-to-sequence model to mitigate the impact of speech recognition errors on open-domain end-to-end dialog generation. We cast the task as a domain adaptation problem in which ASR transcriptions and original text lie in two different domains. In this paper, our proposed model includes an individual encoder for each domain's data and makes their hidden states similar to ensure that the decoder predicts the same dialog text. The method shows that the sequence-to-sequence model can learn that an ASR transcription and the original text have the same meaning and can eliminate the speech recognition errors. Experimental results on the Cornell movie dialog dataset demonstrate that the domain adaptation system helps the spoken dialog system generate responses more similar to the original-text answers. | There has been extensive prior work on domain transfer learning. Most of it focuses on transferring deep neural network representations from a labeled source dataset to a target-domain dataset. For example, @cite_10 proposed an adversarial domain adaptation method that minimizes the distance between the source and target domain feature mappings. The main concept of these works is to guide feature learning by minimizing the difference between the source and target feature distributions @cite_16 @cite_15 @cite_1 . A common method uses the Maximum Mean Discrepancy (MMD) @cite_13 as a loss for this purpose; MMD computes the norm of the difference between the two domain means. Choosing an adversarial loss to minimize the domain shift is another common approach. One example adds a domain classifier that predicts the binary domain label of the inputs, followed by a domain confusion loss that encourages the classifier's predictions to be as close as possible to a uniform distribution over the binary labels @cite_0 .
Both methods provide measures for estimating the difference between two feature-domain distributions and can serve as references for future work. | {
"cite_N": [
"@cite_13",
"@cite_1",
"@cite_0",
"@cite_15",
"@cite_16",
"@cite_10"
],
"mid": [
"",
"2951670162",
"2953226914",
"1565327149",
"1882958252",
"2949987290"
],
"abstract": [
"",
"Recent studies reveal that a deep neural network can learn transferable features which generalize well to novel tasks for domain adaptation. However, as deep features eventually transition from general to specific along the network, the feature transferability drops significantly in higher layers with increasing domain discrepancy. Hence, it is important to formally reduce the dataset bias and enhance the transferability in task-specific layers. In this paper, we propose a new Deep Adaptation Network (DAN) architecture, which generalizes deep convolutional neural network to the domain adaptation scenario. In DAN, hidden representations of all task-specific layers are embedded in a reproducing kernel Hilbert space where the mean embeddings of different domain distributions can be explicitly matched. The domain discrepancy is further reduced using an optimal multi-kernel selection method for mean embedding matching. DAN can learn transferable features with statistical guarantees, and can scale linearly by unbiased estimate of kernel embedding. Extensive empirical evidence shows that the proposed architecture yields state-of-the-art image classification error rates on standard domain adaptation benchmarks.",
"Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias. Fine-tuning deep models in a new domain can require a significant amount of labeled data, which for many applications is simply not available. We propose a new CNN architecture to exploit unlabeled and sparsely labeled target domain data. Our approach simultaneously optimizes for domain invariance to facilitate domain transfer and uses a soft label distribution matching loss to transfer information between tasks. Our proposed adaptation method offers empirical performance which exceeds previously published results on two standard benchmark visual domain adaptation tasks, evaluated across supervised and semi-supervised adaptation settings.",
"Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias on a standard benchmark. Fine-tuning deep models in a new domain can require a significant amount of data, which for many applications is simply not available. We propose a new CNN architecture which introduces an adaptation layer and an additional domain confusion loss, to learn a representation that is both semantically meaningful and domain invariant. We additionally show that a domain confusion metric can be used for model selection to determine the dimension of an adaptation layer and the best position for the layer in the CNN architecture. Our proposed adaptation method offers empirical performance which exceeds previously published results on a standard benchmark visual domain adaptation task.",
"Top-performing deep architectures are trained on massive amounts of labeled data. In the absence of labeled data for a certain task, domain adaptation often provides an attractive option given that labeled data of similar nature but from a different domain (e.g. synthetic images) are available. Here, we propose a new approach to domain adaptation in deep architectures that can be trained on large amount of labeled data from the source domain and large amount of unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of \"deep\" features that are (i) discriminative for the main learning task on the source domain and (ii) invariant with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with few standard layers and a simple new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation. Overall, the approach can be implemented with little effort using any of the deep-learning packages. The method performs very well in a series of image classification experiments, achieving adaptation effect in the presence of big domain shifts and outperforming previous state-of-the-art on Office datasets.",
"Adversarial learning methods are a promising approach to training robust deep networks, and can generate complex samples across diverse domains. They also can improve recognition despite the presence of domain shift or dataset bias: several adversarial approaches to unsupervised domain adaptation have recently been introduced, which reduce the difference between the training and test domain distributions and thus improve generalization performance. Prior generative approaches show compelling visualizations, but are not optimal on discriminative tasks and can be limited to smaller shifts. Prior discriminative approaches could handle larger domain shifts, but imposed tied weights on the model and did not exploit a GAN-based loss. We first outline a novel generalized framework for adversarial adaptation, which subsumes recent state-of-the-art approaches as special cases, and we use this generalized view to better relate the prior approaches. We propose a previously unexplored instance of our general framework which combines discriminative modeling, untied weight sharing, and a GAN loss, which we call Adversarial Discriminative Domain Adaptation (ADDA). We show that ADDA is more effective yet considerably simpler than competing domain-adversarial methods, and demonstrate the promise of our approach by exceeding state-of-the-art unsupervised adaptation results on standard cross-domain digit classification tasks and a new more difficult cross-modality object classification task."
]
} |
1709.08172 | 2759512615 | We propose a novel image retrieval framework for visual saliency detection using information about salient objects contained within bounding box annotations for similar images. For each test image, we train a customized SVM from similar example images to predict the saliency values of its object proposals and generate an external saliency map (ES) by aggregating the regional scores. To overcome limitations caused by the size of the training dataset, we also propose an internal optimization module which computes an internal saliency map (IS) by measuring the low-level contrast information of the test image. The two maps, ES and IS, have complementary properties so we take a weighted combination to further improve the detection performance. Experimental results on several challenging datasets demonstrate that the proposed algorithm performs favorably against the state-of-the-art methods. | Significant improvement in saliency detection has been witnessed in the past decade. Numerous unsupervised and supervised saliency detection methods have been proposed under different theoretical models @cite_13 @cite_2 @cite_6 @cite_35 @cite_19 . However, few works address this problem from the perspective of image retrieval. | {
"cite_N": [
"@cite_35",
"@cite_6",
"@cite_19",
"@cite_2",
"@cite_13"
],
"mid": [
"2037954058",
"2047670868",
"1985984012",
"1588168368",
"2128272608"
],
"abstract": [
"Automatic estimation of salient object regions across images, without any prior assumption or knowledge of the contents of the corresponding scenes, enhances many computer vision and computer graphics applications. We introduce a regional contrast based salient object detection algorithm, which simultaneously evaluates global contrast differences and spatial weighted coherence scores. The proposed algorithm is simple, efficient, naturally multi-scale, and produces full-resolution, high-quality saliency maps. These saliency maps are further used to initialize a novel iterative version of GrabCut, namely SaliencyCut, for high quality unsupervised salient object segmentation. We extensively evaluated our algorithm using traditional salient object detection datasets, as well as a more challenging Internet image dataset. Our experimental results demonstrate that our algorithm consistently outperforms 15 existing salient object detection and segmentation methods, yielding higher precision and better recall rates. We also show that our algorithm can be used to efficiently extract salient object masks from Internet images, enabling effective sketch-based image retrieval (SBIR) via simple shape comparisons. Despite such noisy internet images, where the saliency regions are ambiguous, our saliency guided image retrieval achieves a superior retrieval rate compared with state-of-the-art SBIR methods, and additionally provides important target object region information.",
"Recent progresses in salient object detection have exploited the boundary prior, or background information, to assist other saliency cues such as contrast, achieving state-of-the-art results. However, their usage of boundary prior is very simple, fragile, and the integration with other cues is mostly heuristic. In this work, we present new methods to address these issues. First, we propose a robust background measure, called boundary connectivity. It characterizes the spatial layout of image regions with respect to image boundaries and is much more robust. It has an intuitive geometrical interpretation and presents unique benefits that are absent in previous saliency measures. Second, we propose a principled optimization framework to integrate multiple low level cues, including our background measure, to obtain clean and uniform saliency maps. Our formulation is intuitive, efficient and achieves state-of-the-art results on several benchmark datasets.",
"In diffusion-based saliency detection, an image is partitioned into superpixels and mapped to a graph, with superpixels as nodes and edge strengths proportional to superpixel similarity. Saliency information is then propagated over the graph using a diffusion process, whose equilibrium state yields the object saliency map. The optimal solution is the product of a propagation matrix and a saliency seed vector that contains a prior saliency assessment. This is obtained from either a bottom-up saliency detector or some heuristics. In this work, we propose a method to learn optimal seeds for object saliency. Two types of features are computed per superpixel: the bottom-up saliency of the superpixel region and a set of mid-level vision features informative of how likely the superpixel is to belong to an object. The combination of features that best discriminates between object and background saliency is then learned, using a large-margin formulation of the discriminant saliency principle. The propagation of the resulting saliency seeds, using a diffusion process, is finally shown to outperform the state of the art on a number of salient object detection datasets.",
"In this paper, we propose a novel adaptive metric learning algorithm (AML) for visual saliency detection. A key observation is that the saliency of a superpixel can be estimated by the distance from the most certain foreground and background seeds. Instead of measuring distance on the Euclidean space, we present a learning method based on two complementary Mahalanobis distance metrics: 1) generic metric learning (GML) and 2) specific metric learning (SML). GML aims at the global distribution of the whole training set, while SML considers the specific structure of a single image. Considering that multiple similarity measures from different views may enhance the relevant information and alleviate the irrelevant one, we try to fuse the GML and SML together and experimentally find the combining result does work well. Different from the most existing methods which are directly based on low-level features, we devise a superpixelwise Fisher vector coding approach to better distinguish salient objects from the background. We also propose an accurate seeds selection mechanism and exploit contextual and multiscale information when constructing the final saliency map. Experimental results on various image sets show that the proposed AML performs favorably against the state-of-the-arts.",
"A visual attention system, inspired by the behavior and the neuronal architecture of the early primate visual system, is presented. Multiscale image features are combined into a single topographical saliency map. A dynamical neural network then selects attended locations in order of decreasing saliency. The system breaks down the complex problem of scene understanding by rapidly selecting, in a computationally efficient manner, conspicuous locations to be analyzed in detail."
]
} |
1709.08172 | 2759512615 | We propose a novel image retrieval framework for visual saliency detection using information about salient objects contained within bounding box annotations for similar images. For each test image, we train a customized SVM from similar example images to predict the saliency values of its object proposals and generate an external saliency map (ES) by aggregating the regional scores. To overcome limitations caused by the size of the training dataset, we also propose an internal optimization module which computes an internal saliency map (IS) by measuring the low-level contrast information of the test image. The two maps, ES and IS, have complementary properties so we take a weighted combination to further improve the detection performance. Experimental results on several challenging datasets demonstrate that the proposed algorithm performs favorably against the state-of-the-art methods. | Most unsupervised algorithms are based on low-level features and perform saliency detection directly on the individual image. @cite_13 propose a saliency model that linearly combines image features, including color, intensity, and orientation over different scales, to detect local conspicuity. However, this method tends to highlight only salient pixels and loses object information. @cite_6 propose a background measure, boundary connectivity, to characterize the spatial layout of image regions. The authors of @cite_35 address saliency detection based on global region contrast, which simultaneously considers the spatial coherence across regions and the global contrast over the entire image. However, unsupervised algorithms lose object information and are easily affected by complex backgrounds.
"cite_N": [
"@cite_35",
"@cite_13",
"@cite_6"
],
"mid": [
"2037954058",
"2128272608",
"2047670868"
],
"abstract": [
"Automatic estimation of salient object regions across images, without any prior assumption or knowledge of the contents of the corresponding scenes, enhances many computer vision and computer graphics applications. We introduce a regional contrast based salient object detection algorithm, which simultaneously evaluates global contrast differences and spatial weighted coherence scores. The proposed algorithm is simple, efficient, naturally multi-scale, and produces full-resolution, high-quality saliency maps. These saliency maps are further used to initialize a novel iterative version of GrabCut, namely SaliencyCut, for high quality unsupervised salient object segmentation. We extensively evaluated our algorithm using traditional salient object detection datasets, as well as a more challenging Internet image dataset. Our experimental results demonstrate that our algorithm consistently outperforms 15 existing salient object detection and segmentation methods, yielding higher precision and better recall rates. We also show that our algorithm can be used to efficiently extract salient object masks from Internet images, enabling effective sketch-based image retrieval (SBIR) via simple shape comparisons. Despite such noisy internet images, where the saliency regions are ambiguous, our saliency guided image retrieval achieves a superior retrieval rate compared with state-of-the-art SBIR methods, and additionally provides important target object region information.",
"A visual attention system, inspired by the behavior and the neuronal architecture of the early primate visual system, is presented. Multiscale image features are combined into a single topographical saliency map. A dynamical neural network then selects attended locations in order of decreasing saliency. The system breaks down the complex problem of scene understanding by rapidly selecting, in a computationally efficient manner, conspicuous locations to be analyzed in detail.",
"Recent progresses in salient object detection have exploited the boundary prior, or background information, to assist other saliency cues such as contrast, achieving state-of-the-art results. However, their usage of boundary prior is very simple, fragile, and the integration with other cues is mostly heuristic. In this work, we present new methods to address these issues. First, we propose a robust background measure, called boundary connectivity. It characterizes the spatial layout of image regions with respect to image boundaries and is much more robust. It has an intuitive geometrical interpretation and presents unique benefits that are absent in previous saliency measures. Second, we propose a principled optimization framework to integrate multiple low level cues, including our background measure, to obtain clean and uniform saliency maps. Our formulation is intuitive, efficient and achieves state-of-the-art results on several benchmark datasets."
]
} |
1709.08172 | 2759512615 | We propose a novel image retrieval framework for visual saliency detection using information about salient objects contained within bounding box annotations for similar images. For each test image, we train a customized SVM from similar example images to predict the saliency values of its object proposals and generate an external saliency map (ES) by aggregating the regional scores. To overcome limitations caused by the size of the training dataset, we also propose an internal optimization module which computes an internal saliency map (IS) by measuring the low-level contrast information of the test image. The two maps, ES and IS, have complementary properties so we take a weighted combination to further improve the detection performance. Experimental results on several challenging datasets demonstrate that the proposed algorithm performs favorably against the state-of-the-art methods. | Supervised methods typically rely on a large dataset of training samples and incorporate high-level object information when computing saliency maps. @cite_31 regard saliency detection as a binary labeling task and combine multiple features with a conditional random field (CRF) to generate the saliency maps. @cite_19 search for optimal seeds by combining bottom-up saliency maps and mid-level vision cues. However, training on a large dataset does not guarantee a good classifier, since it is hard to balance a large number of images with various appearances and categories. If the training set is not large enough, the classifier becomes less robust. Different from most supervised saliency detection methods, we train an optimal classifier for each test image by selecting training samples only from similar images instead of the whole training set. Our image retrieval framework considers the specificity of each individual image and better designs the training set, thus generating more accurate saliency maps.
"cite_N": [
"@cite_19",
"@cite_31"
],
"mid": [
"1985984012",
"1996326832"
],
"abstract": [
"In diffusion-based saliency detection, an image is partitioned into superpixels and mapped to a graph, with superpixels as nodes and edge strengths proportional to superpixel similarity. Saliency information is then propagated over the graph using a diffusion process, whose equilibrium state yields the object saliency map. The optimal solution is the product of a propagation matrix and a saliency seed vector that contains a prior saliency assessment. This is obtained from either a bottom-up saliency detector or some heuristics. In this work, we propose a method to learn optimal seeds for object saliency. Two types of features are computed per superpixel: the bottom-up saliency of the superpixel region and a set of mid-level vision features informative of how likely the superpixel is to belong to an object. The combination of features that best discriminates between object and background saliency is then learned, using a large-margin formulation of the discriminant saliency principle. The propagation of the resulting saliency seeds, using a diffusion process, is finally shown to outperform the state of the art on a number of salient object detection datasets.",
"In this paper, we study the salient object detection problem for images. We formulate this problem as a binary labeling task where we separate the salient object from the background. We propose a set of novel features, including multiscale contrast, center-surround histogram, and color spatial distribution, to describe a salient object locally, regionally, and globally. A conditional random field is learned to effectively combine these features for salient object detection. Further, we extend the proposed approach to detect a salient object from sequential images by introducing the dynamic salient features. We collected a large image database containing tens of thousands of carefully labeled images by multiple users and a video segment database, and conducted a set of experiments over them to demonstrate the effectiveness of the proposed approach."
]
} |
1709.08172 | 2759512615 | We propose a novel image retrieval framework for visual saliency detection using information about salient objects contained within bounding box annotations for similar images. For each test image, we train a customized SVM from similar example images to predict the saliency values of its object proposals and generate an external saliency map (ES) by aggregating the regional scores. To overcome limitations caused by the size of the training dataset, we also propose an internal optimization module which computes an internal saliency map (IS) by measuring the low-level contrast information of the test image. The two maps, ES and IS, have complementary properties so we take a weighted combination to further improve the detection performance. Experimental results on several challenging datasets demonstrate that the proposed algorithm performs favorably against the state-of-the-art methods. | The authors of @cite_14 also proposed to retrieve similar images for saliency detection. However, our approach differs from theirs in three aspects. First, we address saliency detection based on region proposals, which contain a large amount of shape and boundary information about salient regions and preserve the consistency of the whole object or a part of it. Second, our approach uses a more discriminative SVM, instead of distance-based classification, to better predict the saliency values of object proposals; our annotation database consists of 50,000 images, which is large enough to contain similar examples for most test images. Third, unlike @cite_14 , which relies purely on a retrieved list and thus potentially suffers from retrieval errors for uncommon objects, we combine internal saliency cues with external high-level retrieved information to leverage the best of both schemes.
Our method combines supervised and unsupervised algorithms, considering high-level object concepts and low-level contrast simultaneously; it can thus uniformly highlight the whole salient region with explicit object boundaries and achieves better performance on the PR curves. | {
"cite_N": [
"@cite_14"
],
"mid": [
"2107363596"
],
"abstract": [
"We propose a novel framework for visual saliency detection based on a simple principle: images sharing their global visual appearances are likely to share similar salience. Assuming that an annotated image database is available, we first retrieve the most similar images to the target image; secondly, we build a simple classifier and we use it to generate saliency maps. Finally, we refine the maps and we extract thumbnails. We show that in spite of its simplicity, our framework outperforms state-of-the-art approaches. Another advantage is its ability to deal with visual pop-up and application task-driven saliency, if appropriately annotated images are available."
]
} |
1709.07963 | 2757338032 | Cloud RAN (C-RAN) is a promising enabler for distributed massive MIMO systems, yet is vulnerable to its fronthaul congestion. To cope with the limited fronthaul capacity, this paper proposes a hybrid analog-digital precoding design that adaptively adjusts fronthaul compression levels and the number of active radio-frequency (RF) chains out of the entire RF chains in a downlink distributed massive MIMO system based on C-RAN architecture. Following this structure, we propose an analog beamformer design in pursuit of maximizing multi-user sum average data rate (sum-rate). Each element of the analog beamformer is constructed based on a weighted sum of spatial channel covariance matrices, while the size of the analog beamformer, i.e. the number of active RF chains, is optimized so as to maximize the large-scale approximated sum-rate. With these analog beamformer and RF chain activation, a regularized zero-forcing (RZF) digital beamformer is jointly optimized based on the instantaneous effective channel information observed through the given analog beamformer. The effectiveness of the proposed hybrid precoding algorithm is validated by simulation, and its design criterion is clarified by analysis. | Hybrid precoder design has been investigated in @cite_7 @cite_25 @cite_29 @cite_18 @cite_32 @cite_30 . Its key structure is well summarized in @cite_18 , where digital beamforming is connected to RF-domain analog beamformers that retain a smaller number of RF chains than a fully digital beamformer. Increasing the number of RF chains therefore improves the performance of a hybrid precoder until it reaches the upper-bound performance of a fully digital beamformer @cite_32 . In massive MIMO systems with a general digital beamformer, the minimum number of RF chains needed to achieve this upper-bound performance has been specified in @cite_30 as twice the number of data streams between the digital and analog beamformers.
For an RZF digital beamformer, it has been shown by @cite_29 that allowing more RF chains is still beneficial. In massive MIMO C-RAN networks, however, the performance gains from allowing more RF chains may diminish due to the capacity-limited fronthaul links connecting the digital and analog beamformers. | {
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_7",
"@cite_29",
"@cite_32",
"@cite_25"
],
"mid": [
"1764924767",
"2962785465",
"2085098436",
"2611749634",
"2270735382",
"1956027752"
],
"abstract": [
"Large-scale multiple-input multiple-output (MIMO) systems enable high spectral efficiency by employing large antenna arrays at both the transmitter and the receiver of a wireless communication link. In traditional MIMO systems, full digital beamforming is done at the baseband; one distinct radio-frequency (RF) chain is required for each antenna, which for large-scale MIMO systems can be prohibitive from either cost or power consumption point of view. This paper considers a two-stage hybrid beamforming structure to reduce the number of RF chains for large-scale MIMO systems. The overall beamforming matrix consists of analog RF beamforming implemented using phase shifters and baseband digital beamforming of much smaller dimension. This paper considers precoder and receiver design for maximizing the spectral efficiency when the hybrid structure is used at both the transmitter and the receiver. On the theoretical front, bounds on the minimum number of transmit and receive RF chains that are required to realize the theoretical capacity of the large-scale MIMO system are presented. It is shown that the hybrid structure can achieve the same performance as the fully-digital beamforming scheme if the number of RF chains at each end is greater than or equal to twice the number of data streams. On the practical design front, this paper proposes a heuristic hybrid beamforming design strategy for the critical case where the number of RF chains is equal to the number of data streams, and shows that the performance of the proposed hybrid beamforming design can achieve spectral efficiency close to that of the fully-digital solution.",
"Hybrid multiple-antenna transceivers, which combine large-dimensional analog pre postprocessing with lower-dimensional digital processing, are the most promising approach for reducing the hardware cost and training overhead in massive MIMO systems. This article provides a comprehensive survey of the various incarnations of such structures that have been proposed in the literature. We provide a taxonomy in terms of the required channel state information, that is, whether the processing adapts to the instantaneous or average (second-order) channel state information; while the former provides somewhat better signal- to-noise and interference ratio, the latter has much lower overhead for CSI acquisition. We furthermore distinguish hardware structures of different complexities. Finally, we point out the special design aspects for operation at millimeter-wave frequencies.",
"Next-generation cellular standards may leverage the large bandwidth available at millimeter wave (mmWave) frequencies to provide gigabit-per-second data rates in outdoor wireless systems. A main challenge in realizing mmWave cellular is achieving sufficient operating link margin, which is enabled via directional beamforming with large antenna arrays. Due to the high cost and power consumption of high-bandwidth mixed-signal devices, mmWave beamforming will likely include a combination of analog and digital processing. In this paper, we develop an iterative hybrid beamforming algorithm for the single user mmWave channel. The proposed algorithm accounts for the limitations of analog beamforming circuitry and assumes only partial channel knowledge at both the base and mobile stations. The precoding strategy exploits the sparse nature of the mmWave channel and uses a variant of matching pursuit to provide simple solutions to the hybrid beamforming problem. Simulation results show that the proposed algorithm can approach the rates achieved by unconstrained digital beamforming solutions.",
"We propose a new hybrid precoding technique for massive multi-input multi-output (MIMO) systems using spatial channel covariance matrices in the analog precoder design. Applying a regularized zero-forcing precoder for the baseband precoding matrix, we find an unconstrained analog precoder that maximizes signal-to-leakage-plus-noise ratio (SLNR) while ignoring analog phase shifter constraints. Subsequently, we develop a technique to design a constrained analog precoder that mimics the obtained unconstrained analog precoder under phase shifter constraints. The main idea is to adopt an additional baseband precoding matrix, which we call a compensation matrix. We analyze the SLNR loss due to the proposed hybrid precoding compared to fully digital precoding, and determine which factors have a significant impact on this loss. In the simulations, we show that if the channel is spatially correlated and the number of users is smaller than the number of RF chains, the SLNR loss becomes negligible compared to fully digital precoding. The main benefit of our method stems from the use of spatial channel matrices in such a way that not only is each user's desired signal considered, but also the inter-user interference is incorporated in the analog precoder design.",
"This paper considers hybrid beamforming (HB) for downlink multiuser massive multiple-input multiple-output (MIMO) systems with frequency selective channels. The proposed HB design employs sets of digitally controlled phase (fixed phase) paired phase shifters (PSs) and switches. For this system, first we determine the required number of radio frequency (RF) chains and PSs such that the proposed HB achieves the same performance as that of the digital beamforming (DB) which utilizes @math (number of transmitter antennas) RF chains. We show that the performance of the DB can be achieved with our HB just by utilizing @math RF chains and @math PSs, where @math is the rank of the combined digital precoder matrices of all subcarriers. Second, we provide a simple and novel approach to reduce the number of PSs with only a negligible performance degradation. Numerical results reveal that only @math PSs per RF chain are sufficient for practically relevant parameter settings. Finally, for the scenario where the deployed number of RF chains @math is less than @math , we propose a simple user scheduling algorithm to select the best set of users in each subcarrier. Simulation results validate theoretical expressions, and demonstrate the superiority of the proposed HB design over the existing HB designs in both flat fading and frequency selective channels.",
"Antenna arrays will be an important ingredient in millimeter-wave (mmWave) cellular systems. A natural application of antenna arrays is simultaneous transmission to multiple users. Unfortunately, the hardware constraints in mmWave systems make it difficult to apply conventional lower frequency multiuser MIMO precoding techniques at mmWave. This paper develops low-complexity hybrid analog digital precoding for downlink multiuser mmWave systems. Hybrid precoding involves a combination of analog and digital processing that is inspired by the power consumption of complete radio frequency and mixed signal hardware. The proposed algorithm configures hybrid precoders at the transmitter and analog combiners at multiple receivers with a small training and feedback overhead. The performance of the proposed algorithm is analyzed in the large dimensional regime and in single-path channels. When the analog and digital precoding vectors are selected from quantized codebooks, the rate loss due to the joint quantization is characterized, and insights are given into the performance of hybrid precoding compared with analog-only beamforming solutions. Analytical and simulation results show that the proposed techniques offer higher sum rates compared with analog-only beamforming solutions, and approach the performance of the unconstrained digital beamforming with relatively small codebooks."
]
} |
1709.07963 | 2757338032 | Cloud RAN (C-RAN) is a promising enabler for distributed massive MIMO systems, yet is vulnerable to its fronthaul congestion. To cope with the limited fronthaul capacity, this paper proposes a hybrid analog-digital precoding design that adaptively adjusts fronthaul compression levels and the number of active radio-frequency (RF) chains out of the entire RF chains in a downlink distributed massive MIMO system based on C-RAN architecture. Following this structure, we propose an analog beamformer design in pursuit of maximizing multi-user sum average data rate (sum-rate). Each element of the analog beamformer is constructed based on a weighted sum of spatial channel covariance matrices, while the size of the analog beamformer, i.e. the number of active RF chains, is optimized so as to maximize the large-scale approximated sum-rate. With these analog beamformer and RF chain activation, a regularized zero-forcing (RZF) digital beamformer is jointly optimized based on the instantaneous effective channel information observed through the given analog beamformer. The effectiveness of the proposed hybrid precoding algorithm is validated by simulation, and its design criterion is clarified by analysis. | In the C-RAN architecture, different configurations of precoding function splits between the RRHs and the BBU have been proposed and summarized in @cite_2 @cite_31 , including the design of interest here, in which the RRHs are capable only of analog beamforming. For a given precoding design, fronthaul compression schemes that comply with the limited fronthaul capacity have been investigated via an information-theoretic approach @cite_23 @cite_10 and with a scalar quantizer @cite_34 @cite_12 . In these works, the fronthaul forwarding information is compressed, and the level of compression is adjusted to meet the fronthaul capacity. More compression, i.e.
coarser quantization levels, induces larger quantization noise, which degrades the useful received signal. The precoder must therefore be optimized accordingly, which poses another challenge: the limited fronthaul capacity may not allow frequent CSI exchange between the RRHs and the BBU and/or may result in outdated CSI. | {
"cite_N": [
"@cite_23",
"@cite_2",
"@cite_31",
"@cite_34",
"@cite_10",
"@cite_12"
],
"mid": [
"",
"1978842364",
"2019905317",
"1698719573",
"2180674075",
"2963757186"
],
"abstract": [
"",
"Cloud radio access networks (C-RANs) provide a novel architecture for next-generation wireless cellular systems whereby the baseband processing is migrated from the base stations (BSs) to a control unit (CU) in the “cloud.” The BSs, which operate as radio units (RUs), are connected via fronthaul links to the managing CU. The fronthaul links carry information about the baseband signals, in the uplink from the RUs to the CU and vice versa in the downlink, in the form of quantized in-phase and quadrature (IQ) samples. Due to the large bit rate produced by the quantized IQ signals, compression prior to transmission on the fronthaul links is deemed to be of critical importance and is receiving considerable attention. This article provides a survey of the work in this area with emphasis on advanced signal processing solutions based on network information theoretic concepts. Analysis and numerical results illustrate the considerable performance gains to be expected for standard cellular models.",
"As a promising paradigm for fifth generation wireless communication systems, cloud radio access networks (C-RANs) have been shown to reduce both capital and operating expenditures, as well as to provide high spectral efficiency (SE) and energy efficiency (EE). The fronthaul in such networks, defined as the transmission link between the baseband unit and the remote radio head, requires a high capacity, but is often constrained. This article comprehensively surveys recent advances in fronthaul-constrained CRANs, including system architectures and key techniques. Particularly, major issues relating to the impact of the constrained fronthaul on SE EE and quality of service for users, including compression and quantization, large-scale coordinated processing and clustering, and resource allocation optimization, are discussed together with corresponding potential solutions. Open issues in terms of software-defined networking, network function virtualization, and partial centralization are also identified.",
"MIMO and cloud radio access network (C-RAN) are promising techniques for implementing future wireless communication systems, where a large number of antennas are deployed either being co-located at the base station or totally distributed at separate sites called remote radio heads (RRHs), both to achieve enormous spectrum efficiency and energy efficiency gains. Here, we consider a general antenna deployment method for wireless networks, termed multi-antenna C-RAN, where a flexible number of antennas can be equipped at each RRH to more effectively balance the performance and fronthaul complexity tradeoff beyond the conventional massive MIMO and single-antenna C-RAN. To coordinate and control the fronthaul traffic over multi-antenna RRHs, under the uplink communication setup, we propose a new “spatial-compression-and-forward (SCF)” scheme, where each RRH first performs a linear spatial filtering to denoise and maximally compress its received signals from multiple users to a reduced number of dimensions, then conducts uniform scalar quantization over each of the resulting dimensions in parallel, and finally sends the total quantized bits via a finite-rate fronthaul link to the baseband unit (BBU) for joint information decoding. Under this scheme, we maximize the minimum SINR of all users at the BBU by a joint resource allocation over the wireless transmission and fronthaul links. Specifically, each RRH determines its own spatial filtering solution in a distributed manner to reduce the signaling overhead with the BBU, while the BBU jointly optimizes the users’ transmit power, the RRHs’ fronthaul bits allocation, and the BBU’s receive beamforming with fixed spatial filters at individual RRHs. Numerical results show that, given a total number of antennas to be deployed, multi-antenna C-RAN with the proposed SCF and joint optimization significantly outperforms both massive MIMO and single-antenna C-RAN under practical fronthaul capacity constraints.",
"The implementation of a cloud radio access network (C-RAN) with full dimensional (FD) multiple-input multiple-output (MIMO) is faced with the challenge of controlling the fronthaul overhead for the transmission of baseband signals as the number of horizontal and vertical antennas grows larger. This paper proposes to leverage the special low-rank structure of the FD-MIMO channel, which is characterized by a time-invariant elevation component and a time-varying azimuth component, by means of a layered precoding approach, to reduce the fronthaul overhead. According to this scheme, separate precoding matrices are applied for the azimuth and elevation channel components, with different rates of adaptation to the channel variations and correspondingly different impacts on the fronthaul capacity. Moreover, we consider two different central unit (CU)-radio unit (RU) functional splits at the physical layer, namely, the conventional C-RAN implementation and an alternative one in which coding and precoding are performed at the RUs. Via numerical results, it is shown that the layered schemes significantly outperform conventional nonlayered schemes, particularly in the regime of low fronthaul capacity and a large number of vertical antennas.",
"The performance of cloud radio access network (C-RAN) is constrained by the limited fronthaul link capacity under future heavy data traffic. To tackle this problem, extensive efforts have been devoted to design efficient signal quantization compression techniques in the fronthaul to maximize the network throughput. However, most of the previous results are based on information-theoretical quantization methods, which are hard to implement practically due to the high complexity. In this paper, we propose using practical uniform scalar quantization in the uplink communication of an orthogonal frequency division multiple access (OFDMA) based C-RAN system, where the mobile users are assigned with orthogonal sub-carriers for transmission. In particular, we study the joint wireless power control and fronthaul quantization design over the sub-carriers to maximize the system throughput. Efficient algorithms are proposed to solve the joint optimization problem when either information-theoretical or practical fronthaul quantization method is applied. We show that the fronthaul capacity constraints have significant impact to the optimal wireless power control policy. As a result, the joint optimization shows significant performance gain compared with optimizing only wireless power control or fronthaul quantization. Besides, we also show that the proposed simple uniform quantization scheme performs very close to the throughput performance upper bound, and in fact overlaps with the upper bound when the fronthaul capacity is sufficiently large. Overall, our results reveal practically achievable throughput performance of C-RAN for its efficient deployment in the next-generation wireless communication systems."
]
} |
1709.07963 | 2757338032 | Cloud RAN (C-RAN) is a promising enabler for distributed massive MIMO systems, yet is vulnerable to its fronthaul congestion. To cope with the limited fronthaul capacity, this paper proposes a hybrid analog-digital precoding design that adaptively adjusts fronthaul compression levels and the number of active radio-frequency (RF) chains out of the entire RF chains in a downlink distributed massive MIMO system based on C-RAN architecture. Following this structure, we propose an analog beamformer design in pursuit of maximizing multi-user sum average data rate (sum-rate). Each element of the analog beamformer is constructed based on a weighted sum of spatial channel covariance matrices, while the size of the analog beamformer, i.e. the number of active RF chains, is optimized so as to maximize the large-scale approximated sum-rate. With these analog beamformer and RF chain activation, a regularized zero-forcing (RZF) digital beamformer is jointly optimized based on the instantaneous effective channel information observed through the given analog beamformer. The effectiveness of the proposed hybrid precoding algorithm is validated by simulation, and its design criterion is clarified by analysis. | One promising approach to this precoding design problem under limited fronthaul capacity is to utilize the spatial covariance of the channel instead of instantaneous CSI, which has been investigated in @cite_15 @cite_29 . The spatial covariance matrix changes less frequently than instantaneous CSI and can thus be estimated more easily, as shown by @cite_22 @cite_7 @cite_17 @cite_5 @cite_6 . Covariance-based precoding design becomes more effective in massive MIMO systems in the C-RAN architecture, where the large number of antennas leads to a huge amount of CSI to be estimated.
As the number of antennas increases, it has been shown by @cite_8 @cite_9 that the instantaneous signal-to-interference-plus-noise ratio ( @math ) asymptotically converges to a deterministic value that depends only on the spatial covariance matrices. Such a deterministic equivalent can be regarded as a large-scale approximation and exploited for precoding design that no longer depends on instantaneous CSI @cite_21 @cite_1 @cite_3 . | {
"cite_N": [
"@cite_22",
"@cite_7",
"@cite_8",
"@cite_29",
"@cite_9",
"@cite_21",
"@cite_1",
"@cite_6",
"@cite_3",
"@cite_5",
"@cite_15",
"@cite_17"
],
"mid": [
"2145013959",
"2085098436",
"",
"2611749634",
"2084304524",
"2952850327",
"2536912378",
"1632803383",
"2725094597",
"2275567740",
"2015301486",
"2092397817"
],
"abstract": [
"This paper addresses the problem of channel estimation in multi-cell interference-limited cellular networks. We consider systems employing multiple antennas and are interested in both the finite and large-scale antenna number regimes (so-called \"massive MIMO\"). Such systems deal with the multi-cell interference by way of per-cell beamforming applied at each base station. Channel estimation in such networks, which is known to be hampered by the pilot contamination effect, constitutes a major bottleneck for overall performance. We present a novel approach which tackles this problem by enabling a low-rate coordination between cells during the channel estimation phase itself. The coordination makes use of the additional second-order statistical information about the user channels, which are shown to offer a powerful way of discriminating across interfering users with even strongly correlated pilot sequences. Importantly, we demonstrate analytically that in the large-number-of-antennas regime, the pilot contamination effect is made to vanish completely under certain conditions on the channel covariance. Gains over the conventional channel estimation framework are confirmed by our simulations for even small antenna array sizes.",
"Next-generation cellular standards may leverage the large bandwidth available at millimeter wave (mmWave) frequencies to provide gigabit-per-second data rates in outdoor wireless systems. A main challenge in realizing mmWave cellular is achieving sufficient operating link margin, which is enabled via directional beamforming with large antenna arrays. Due to the high cost and power consumption of high-bandwidth mixed-signal devices, mmWave beamforming will likely include a combination of analog and digital processing. In this paper, we develop an iterative hybrid beamforming algorithm for the single user mmWave channel. The proposed algorithm accounts for the limitations of analog beamforming circuitry and assumes only partial channel knowledge at both the base and mobile stations. The precoding strategy exploits the sparse nature of the mmWave channel and uses a variant of matching pursuit to provide simple solutions to the hybrid beamforming problem. Simulation results show that the proposed algorithm can approach the rates achieved by unconstrained digital beamforming solutions.",
"",
"We propose a new hybrid precoding technique for massive multi-input multi-output (MIMO) systems using spatial channel covariance matrices in the analog precoder design. Applying a regularized zero-forcing precoder for the baseband precoding matrix, we find an unconstrained analog precoder that maximizes signal-to-leakage-plus-noise ratio (SLNR) while ignoring analog phase shifter constraints. Subsequently, we develop a technique to design a constrained analog precoder that mimics the obtained unconstrained analog precoder under phase shifter constraints. The main idea is to adopt an additional baseband precoding matrix, which we call a compensation matrix. We analyze the SLNR loss due to the proposed hybrid precoding compared to fully digital precoding, and determine which factors have a significant impact on this loss. In the simulations, we show that if the channel is spatially correlated and the number of users is smaller than the number of RF chains, the SLNR loss becomes negligible compared to fully digital precoding. The main benefit of our method stems from the use of spatial channel matrices in such a way that not only is each user's desired signal considered, but also the inter-user interference is incorporated in the analog precoder design.",
"We consider the uplink (UL) and downlink (DL) of non-cooperative multi-cellular time-division duplexing (TDD) systems, assuming that the number N of antennas per base station (BS) and the number K of user terminals (UTs) per cell are large. Our system model accounts for channel estimation, pilot contamination, and an arbitrary path loss and antenna correlation for each link. We derive approximations of achievable rates with several linear precoders and detectors which are proven to be asymptotically tight, but accurate for realistic system dimensions, as shown by simulations. It is known from previous work assuming uncorrelated channels, that as N→∞ while K is fixed, the system performance is limited by pilot contamination, the simplest precoders detectors, i.e., eigenbeamforming (BF) and matched filter (MF), are optimal, and the transmit power can be made arbitrarily small. We analyze to which extent these conclusions hold in the more realistic setting where N is not extremely large compared to K. In particular, we derive how many antennas per UT are needed to achieve η of the ultimate performance limit with infinitely many antennas and how many more antennas are needed with MF and BF to achieve the performance of minimum mean-square error (MMSE) detection and regularized zero-forcing (RZF), respectively.",
"Obtaining accurate Channel State Information (CSI) at the transmitters (TX) is critical to many cooperation schemes such as Network MIMO, Interference Alignment etc. Practical CSI feedback and limited backhaul-based sharing inevitably creates degradations of CSI which are specific to each TX, giving rise to a distributed form of CSI. In the Distributed CSI (D-CSI) broadcast channel setting, the various TXs design elements of the precoder based on their individual estimates of the global multiuser channel matrix, which intuitively degrades performance when compared with the commonly used centralized CSI assumption. This paper tackles this challenging scenario and presents a first analysis of the rate performance for the distributed CSI multi-TX broadcast channel setting, in the large number of antenna regime. Using Random Matrix Theory (RMT) tools, we derive deterministic equivalents of the Signal to Interference plus Noise Ratio (SINR) for the popular regularized Zero-Forcing (ZF) precoder, allowing to unveil the price of distributedness for such cooperation methods.",
"",
"The tremendous bandwidth available in the millimeter wave frequencies above 10 GHz have made these bands an attractive candidate for next-generation cellular systems. However, reliable communication at these frequencies depends critically on beamforming with very high-dimensional antenna arrays. Estimating the channel sufficiently accurately to perform beamforming can be challenging due to both low coherence time and a large number of antennas. Also, the measurements used for channel estimation may need to be made with analog beamforming, where the receiver can “look” in only one direction at a time. This paper presents a novel method for estimation of the receive-side spatial covariance matrix of a channel from a sequence of power measurements made in different angular directions. It is shown that maximum likelihood estimation of the covariance matrix reduces to a non-negative matrix completion problem. We show that the non-negative nature of the covariance matrix reduces the number of measurements required when the matrix is low-rank. The fast iterative methods are presented to solve the problem. Simulations are presented for both single-path and multi-path channels using models derived from real measurements in New York City at 28 GHz.",
"The Interfering Broadcast Channel (IBC) applies to the downlink of (cellular and or heterogeneous) multi-cell networks, which are limited by multi-user (MU) interference. The interference alignment (IA) concept has shown that interference does not need to be inevitable. In particular spatial IA in the MIMO IBC allows for low latency transmission. However, IA requires perfect and typically global Channel State Information at the Transmitter(s) (CSIT), whose acquisition does not scale well with network size. Also, the design of transmitters (Txs) and receivers (Rxs) is coupled and hence needs to be centralized (cloud) or duplicated (distributed approach). CSIT, which is crucial in MU systems, is always imperfect in practice. We consider the joint optimal exploitation of mean (channel estimates) and covariance Gaussian partial CSIT. Indeed, in a Massive MIMO (MaMIMO) setting (esp. when combined with mmWave) the channel covariances may exhibit low rank and zero-forcing might be possible by just exploiting the covariance subspaces. But the question is the optimization of beamformers for the expected weighted sum rate (EWSR) at finite SNR. We propose explicit beamforming solutions and indicate that existing large system analysis can be extended to handle optimized beamformers with the more general partial CSIT considered here.",
"Channel covariance is emerging as a critical ingredient of the acquisition of instantaneous channel state information (CSI) in multi-user Massive MIMO systems operating in frequency division duplex (FDD) mode. In this context, channel reciprocity does not hold, and it is generally expected that covariance information about the downlink channel must be estimated and fed back by the user equipment (UE). As an alternative CSI acquisition technique, we propose to infer the downlink covariance based on the observed uplink covariance. This inference process relies on a dictionary of uplink downlink covariance matrices, and on interpolation in the corresponding Riemannian space; once the dictionary is known, the estimation does not rely on any form of feedback from the UE. In this article, we present several variants of the interpolation method, and benchmark them through simulations.",
"We propose joint spatial division and multiplexing (JSDM), an approach to multiuser MIMO downlink that exploits the structure of the correlation of the channel vectors in order to allow for a large number of antennas at the base station while requiring reduced-dimensional channel state information at the transmitter (CSIT). JSDM achieves significant savings both in the downlink training and in the CSIT uplink feedback, thus making the use of large antenna arrays at the base station potentially suitable also for frequency division duplexing (FDD) systems, for which uplink downlink channel reciprocity cannot be exploited. In the proposed scheme, the multiuser MIMO downlink precoder is obtained by concatenating a prebeamforming matrix, which depends only on the channel second-order statistics, with a classical multiuser precoder, based on the instantaneous knowledge of the resulting reduced dimensional “effective” channel matrix. We prove a simple condition under which JSDM incurs no loss of optimality with respect to the full CSIT case. For linear uniformly spaced arrays, we show that such condition is approached in the large number of antennas limit. For this case, we use Szego's asymptotic theory of Toeplitz matrices to show that a DFT-based prebeamforming matrix is near-optimal, requiring only coarse information about the users angles of arrival and angular spread. Finally, we extend these ideas to the case of a 2-D base station antenna array, with 3-D beamforming, including multiple beams in the elevation angle direction. We provide guidelines for the prebeamforming optimization and calculate the system spectral efficiency under proportional fairness and max-min fairness criteria, showing extremely attractive performance. Our numerical results are obtained via asymptotic random matrix theory, avoiding lengthy Monte Carlo simulations and providing accurate results for realistic (finite) number of antennas and users.",
"Hybrid multiple input multiple output (MIMO) systems consist of an analog beamformer with large antenna arrays followed by a digital MIMO processor. Channel estimation for hybrid MIMO systems in millimeter wave (mm-wave) communications is challenging because of the large antenna array and the low signal-to-noise ratio (SNR) before beamforming. In this paper, we propose an open-loop channel estimator for mm-wave hybrid MIMO systems exploiting the sparse nature of mm-wave channels. A sparse signal recovery problem is formulated for channel estimation and solved by the orthogonal matching pursuit (OMP) based methods. A modification of the OMP algorithm, called the multi-grid (MG) OMP, is proposed. It is shown that the MG-OMP can significantly reduce the computational load of the OMP method. A process for designing the training beams is also developed. Specifically, given the analog training beams the baseband processor for beam training is designed. Simulation results demonstrate the advantage of the OMP based methods over the conventional least squares (LS) method and the efficiency of the MG-OMP over the original OMP."
]
} |
1709.08201 | 2760355785 | Transfer learning significantly accelerates the reinforcement learning process by exploiting relevant knowledge from previous experiences. The problem of optimally selecting source policies during the learning process is of great importance yet challenging. There has been little theoretical analysis of this problem. In this paper, we develop an optimal online method to select source policies for reinforcement learning. This method formulates online source policy selection as a multi-armed bandit problem and augments Q-learning with policy reuse. We provide theoretical guarantees of the optimal selection process and convergence to the optimal policy. In addition, we conduct experiments on a grid-based robot navigation domain to demonstrate its efficiency and robustness by comparing to the state-of-the-art transfer learning method. | Some related works focus on multi-task learning (MTL), which is very similar to transfer learning. MTL assumes that all MDPs are drawn from the same distribution and that learning proceeds in parallel on several tasks @cite_18 . In contrast, we make no assumption regarding the distribution over MDPs and concentrate on the transfer learning problem. In one previous MTL work, @cite_22 represented the distribution of MDPs with a hierarchical Bayesian model. The continuously updated distribution served as a prior for rapid learning in new environments. But as mentioned in their work, their algorithm is not computationally efficient. In more recent MTL works, @cite_16 proposed a technique that involves two phases of learning to reduce the sample complexity of RL. @cite_28 determined the most similar source tasks based on compliance, which can be interpreted as a sort of distance metric between tasks. | {
"cite_N": [
"@cite_28",
"@cite_18",
"@cite_16",
"@cite_22"
],
"mid": [
"2169911641",
"2287257424",
"2952448454",
"2169743339"
],
"abstract": [
"When transferring knowledge between reinforcement learning agents with different state representations or actions, past knowledge must be efficiently mapped to novel tasks so that it aids learning. The majority of the existing approaches use pre-defined mappings provided by a domain expert. To overcome this limitation and enable autonomous transfer learning, this paper introduces a method for weighting and using multiple inter-task mappings based on a probabilistic framework. Experimental results show that the use of multiple inter-task mappings, accompanied with a probabilistic selection mechanism, can significantly boost the performance of transfer learning relative to 1) learning without transfer and 2) using a single hand-picked mapping. We especially introduce novel tasks for transfer learning in a realistic simulation of the iCub robot, demonstrating the ability of the method to select mappings in complex tasks where human intuition could not be applied to select them. The results verified the efficacy of the proposed approach in a real world and complex environment.",
"In this work, we design and evaluate a computational learning model that enables a human-robot team to co-develop joint strategies for performing novel tasks that require coordination. The joint strategies are learned through \"perturbation training,\" a human team-training strategy that requires team members to practice variations of a given task to help their team generalize to new variants of that task. We formally define the problem of human-robot perturbation training and develop and evaluate the first end-to-end framework for such training, which incorporates a multi-agent transfer learning algorithm, human-robot co-learning framework and communication protocol. Our transfer learning algorithm, Adaptive Perturbation Training (AdaPT), is a hybrid of transfer and reinforcement learning techniques that learns quickly and robustly for new task variants. We empirically validate the benefits of AdaPT through comparison to other hybrid reinforcement and transfer learning techniques aimed at transferring knowledge from multiple source tasks to a single target task. @PARASPLIT We also demonstrate that AdaPT's rapid learning supports live interaction between a person and a robot, during which the human-robot team trains to achieve a high level of performance for new task variants. We augment AdaPT with a co-learning framework and a computational bi-directional communication protocol so that the robot can co-train with a person during live interaction. Results from large-scale human subject experiments (n=48) indicate that AdaPT enables an agent to learn in a manner compatible with a human's own learning process, and that a robot undergoing perturbation training with a human results in a high level of team performance. Finally, we demonstrate that human-robot training using AdaPT in a simulation environment produces effective performance for a team incorporating an embodied robot partner.",
"Transferring knowledge across a sequence of reinforcement-learning tasks is challenging, and has a number of important applications. Though there is encouraging empirical evidence that transfer can improve performance in subsequent reinforcement-learning tasks, there has been very little theoretical analysis. In this paper, we introduce a new multi-task algorithm for a sequence of reinforcement-learning tasks when each task is sampled independently from (an unknown) distribution over a finite set of Markov decision processes whose parameters are initially unknown. For this setting, we prove under certain assumptions that the per-task sample complexity of exploration is reduced significantly due to transfer compared to standard single-task algorithms. Our multi-task algorithm also has the desired characteristic that it is guaranteed not to exhibit negative transfer: in the worst case its per-task sample complexity is comparable to the corresponding single-task algorithm.",
"We consider the problem of multi-task reinforcement learning, where the agent needs to solve a sequence of Markov Decision Processes (MDPs) chosen randomly from a fixed but unknown distribution. We model the distribution over MDPs using a hierarchical Bayesian infinite mixture model. For each novel MDP, we use the previously learned distribution as an informed prior for modelbased Bayesian reinforcement learning. The hierarchical Bayesian framework provides a strong prior that allows us to rapidly infer the characteristics of new environments based on previous environments, while the use of a nonparametric model allows us to quickly adapt to environments we have not encountered before. In addition, the use of infinite mixtures allows for the model to automatically learn the number of underlying MDP components. We evaluate our approach and show that it leads to significant speedups in convergence to an optimal policy after observing only a small number of tasks."
]
} |
1709.07911 | 2758277898 | Imitation learning holds the promise to address challenging robotic tasks such as autonomous navigation. It however requires a human supervisor to oversee the training process and send correct control commands to robots without feedback, which is always prone to error and expensive. To minimize human involvement and avoid manual labeling of data in the robotic autonomous navigation with imitation learning, this paper proposes a novel semi-supervised imitation learning solution based on a multi-sensory design. This solution includes a suboptimal sensor policy based on sensor fusion to automatically label states encountered by a robot to avoid human supervision during training. In addition, a recording policy is developed to throttle the adverse effect of learning too much from the suboptimal sensor policy. This solution allows the robot to learn a navigation policy in a self-supervised manner. With extensive experiments in indoor environments, this solution can achieve near-human performance in most of the tasks and even surpasses human performance in case of unexpected events such as hardware failures or human operation errors. To the best of our knowledge, this is the first work that synthesizes sensor fusion and imitation learning to enable robotic autonomous navigation in the real world without human supervision. | Another approach to robotic navigation using imitation learning is to set up multiple cameras to capture training samples in different directions @cite_13 @cite_21 . Multiple cameras are installed on a drone or car to collect training samples. Each sample is labeled according to camera positions. This method tackles the data mismatch problem by training the policy with samples from different view directions. However, this data collection strategy only works properly in the domain of lane following. In addition, installing multiple cameras on small robots is challenging or impractical. | {
"cite_N": [
"@cite_21",
"@cite_13"
],
"mid": [
"2342840547",
"2296673577"
],
"abstract": [
"We trained a convolutional neural network (CNN) to map raw pixels from a single front-facing camera directly to steering commands. This end-to-end approach proved surprisingly powerful. With minimum training data from humans the system learns to drive in traffic on local roads with or without lane markings and on highways. It also operates in areas with unclear visual guidance such as in parking lots and on unpaved roads. The system automatically learns internal representations of the necessary processing steps such as detecting useful road features with only the human steering angle as the training signal. We never explicitly trained it to detect, for example, the outline of roads. Compared to explicit decomposition of the problem, such as lane marking detection, path planning, and control, our end-to-end system optimizes all processing steps simultaneously. We argue that this will eventually lead to better performance and smaller systems. Better performance will result because the internal components self-optimize to maximize overall system performance, instead of optimizing human-selected intermediate criteria, e.g., lane detection. Such criteria understandably are selected for ease of human interpretation which doesn't automatically guarantee maximum system performance. Smaller networks are possible because the system learns to solve the problem with the minimal number of processing steps. We used an NVIDIA DevBox and Torch 7 for training and an NVIDIA DRIVE(TM) PX self-driving car computer also running Torch 7 for determining where to drive. The system operates at 30 frames per second (FPS).",
"We study the problem of perceiving forest or mountain trails from a single monocular image acquired from the viewpoint of a robot traveling on the trail itself. Previous literature focused on trail segmentation, and used low-level features such as image saliency or appearance contrast; we propose a different approach based on a deep neural network used as a supervised image classifier. By operating on the whole image at once, our system outputs the main direction of the trail compared to the viewing direction. Qualitative and quantitative results computed on a large real-world dataset (which we provide for download) show that our approach outperforms alternatives, and yields an accuracy comparable to the accuracy of humans that are tested on the same image classification task. Preliminary results on using this information for quadrotor control in unseen trails are reported. To the best of our knowledge, this is the first letter that describes an approach to perceive forest trials, which is demonstrated on a quadrotor micro aerial vehicle."
]
} |
1709.07911 | 2758277898 | Imitation learning holds the promise to address challenging robotic tasks such as autonomous navigation. It however requires a human supervisor to oversee the training process and send correct control commands to robots without feedback, which is always prone to error and expensive. To minimize human involvement and avoid manual labeling of data in the robotic autonomous navigation with imitation learning, this paper proposes a novel semi-supervised imitation learning solution based on a multi-sensory design. This solution includes a suboptimal sensor policy based on sensor fusion to automatically label states encountered by a robot to avoid human supervision during training. In addition, a recording policy is developed to throttle the adverse effect of learning too much from the suboptimal sensor policy. This solution allows the robot to learn a navigation policy in a self-supervised manner. With extensive experiments in indoor environments, this solution can achieve near-human performance in most of the tasks and even surpasses human performance in case of unexpected events such as hardware failures or human operation errors. To the best of our knowledge, this is the first work that synthesizes sensor fusion and imitation learning to enable robotic autonomous navigation in the real world without human supervision. | Recently, deep reinforcement learning (DRL) @cite_4 has attracted a lot of attention in the robotic control field. Many works based on DRL address robotic navigation tasks. In @cite_12 , the authors train a DQN (deep Q-learning) agent @cite_4 to cross an intersection in simulation. Another work proposes a simulated environment to train a DRL agent to reach a target position in indoor environments @cite_11 . @cite_8 proposes deep deterministic policy gradient (DDPG) to train an agent to avoid dynamic obstacles in simulation. 
These methods, however, are all constrained to simulated settings where damage to the agents is not a concern. Applying DRL to real environments is still challenging. Many researchers attempt to address this problem by transferring learned DRL policies from simulation to real-world navigation tasks @cite_15 @cite_19 . The difference between simulated environments and real-world settings makes this adaptation difficult. | {
"cite_N": [
"@cite_4",
"@cite_8",
"@cite_19",
"@cite_15",
"@cite_12",
"@cite_11"
],
"mid": [
"2145339207",
"2963864421",
"2565902248",
"2605102758",
"2611629860",
"2522340145"
],
"abstract": [
"An artificial agent is developed that learns to play a diverse range of classic Atari 2600 computer games directly from sensory experience, achieving a performance comparable to that of an expert human player; this work paves the way to building general-purpose learning algorithms that bridge the divide between perception and action.",
"We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.",
"Deep reinforcement learning has emerged as a promising and powerful technique for automatically acquiring control policies that can process raw sensory inputs, such as images, and perform complex behaviors. However, extending deep RL to real-world robotic tasks has proven challenging, particularly in safety-critical domains such as autonomous flight, where a trial-and-error learning process is often impractical. In this paper, we explore the following question: can we train vision-based navigation policies entirely in simulation, and then transfer them into the real world to achieve real-world flight without a single real training image? We propose a learning method that we call CAD @math RL, which can be used to perform collision-free indoor flight in the real world while being trained entirely on 3D CAD models. Our method uses single RGB images from a monocular camera, without needing to explicitly reconstruct the 3D geometry of the environment or perform explicit motion planning. Our learned collision avoidance policy is represented by a deep convolutional neural network that directly processes raw monocular images and outputs velocity commands. This policy is trained entirely on simulated images, with a Monte Carlo policy evaluation algorithm that directly optimizes the network's ability to produce collision-free flight. By highly randomizing the rendering settings for our simulated training set, we show that we can train a policy that generalizes to the real world, without requiring the simulator to be particularly realistic or high-fidelity. We evaluate our method by flying a real quadrotor through indoor environments, and further evaluate the design choices in our simulator through a series of ablation studies on depth prediction. For supplementary video see: this https URL",
"Bridging the ‘reality gap’ that separates simulated robotics from experiments on hardware could accelerate robotic research through improved data availability. This paper explores domain randomization, a simple technique for training models on simulated images that transfer to real images by randomizing rendering in the simulator. With enough variability in the simulator, the real world may appear to the model as just another variation. We focus on the task of object localization, which is a stepping stone to general robotic manipulation skills. We find that it is possible to train a real-world object detector that is accurate to 1.5 cm and robust to distractors and partial occlusions using only data from a simulator with non-realistic random textures. To demonstrate the capabilities of our detectors, we show they can be used to perform grasping in a cluttered environment. To our knowledge, this is the first successful transfer of a deep neural network trained only on simulated RGB images (without pre-training on real images) to the real world for the purpose of robotic control.",
"Providing an efficient strategy to navigate safely through unsignaled intersections is a difficult task that requires determining the intent of other drivers. We explore the effectiveness of Deep Reinforcement Learning to handle intersection problems. Using recent advances in Deep RL, we are able to learn policies that surpass the performance of a commonly-used heuristic approach in several metrics including task completion time and goal success rate and have limited ability to generalize. We then explore a system's ability to learn active sensing behaviors to enable navigating safely in the case of occlusions. Our analysis, provides insight into the intersection handling problem, the solutions learned by the network point out several shortcomings of current rule-based methods, and the failures of our current deep reinforcement learning system point to future research directions.",
"Two less addressed issues of deep reinforcement learning are (1) lack of generalization capability to new target goals, and (2) data inefficiency i.e., the model requires several (and often costly) episodes of trial and error to converge, which makes it impractical to be applied to real-world scenarios. In this paper, we address these two issues and apply our model to the task of target-driven visual navigation. To address the first issue, we propose an actor-critic model whose policy is a function of the goal as well as the current state, which allows to better generalize. To address the second issue, we propose AI2-THOR framework, which provides an environment with high-quality 3D scenes and physics engine. Our framework enables agents to take actions and interact with objects. Hence, we can collect a huge number of training samples efficiently. We show that our proposed method (1) converges faster than the state-of-the-art deep reinforcement learning methods, (2) generalizes across targets and across scenes, (3) generalizes to a real robot scenario with a small amount of fine-tuning (although the model is trained in simulation), (4) is end-to-end trainable and does not need feature engineering, feature matching between frames or 3D reconstruction of the environment. The supplementary video can be accessed at the following link: this https URL"
]
} |
1709.08074 | 2757220759 | This paper addresses automatic extraction of abbreviations (encompassing acronyms and initialisms) and corresponding long-form expansions from plain unstructured text. We create and are going to release a multilingual resource for abbreviations and their corresponding expansions, built automatically by exploiting Wikipedia redirect and disambiguation pages, that can be used as a benchmark for evaluation. We address a shortcoming of previous work where only the redirect pages were used, and so every abbreviation had only a single expansion, even though multiple different expansions are possible for many of the abbreviations. We also develop a principled machine learning based approach to scoring expansion candidates using different techniques such as indicators of near synonymy, topical relatedness, and surface similarity. We show improved performance over seven languages, including two with a non-Latin alphabet, relative to strong baselines. | In general, rule-based approaches find short-forms according to rules based on case and punctuation. Long-form candidates are gathered from a window around short-form mentions, possibly requiring a connecting pattern like parentheses. The short-form and long-form are then matched by rules, usually according to the occurrence and order of the short-form letters in the long-form and often requiring a language specific stop-word list for filtering potential errors in output @cite_6 , @cite_1 , @cite_11 , @cite_14 . In fact, even many machine learning based systems also use a stop-word list @cite_7 , @cite_5 . | {
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_1",
"@cite_6",
"@cite_5",
"@cite_11"
],
"mid": [
"1525742988",
"2120609672",
"1989534258",
"",
"2950042516",
""
],
"abstract": [
"Objective: To develop methods that automatically map abbreviations to their full forms in biomedical articles. @PARASPLIT Methods: The authors developed two methods of mapping defined and undefined abbreviations (defined abbreviations are paired with their full forms in the articles, whereas undefined ones are not). For defined abbreviations, they developed a set of pattern-matching rules to map an abbreviation to its full form and implemented the rules into a software program, AbbRE (for “abbreviation recognition and extraction”). Using the opinions of domain experts as a reference standard, they evaluated the recall and precision of AbbRE for defined abbreviations in ten biomedical articles randomly selected from the ten most frequently cited medical and biological journals. They also measured the percentage of undefined abbreviations in the same set of articles, and they investigated whether they could map undefined abbreviations to any of four public abbreviation databases (GenBank LocusLink, swissprot, LRABR of the UMLS Specialist Lexicon, and Bioabacus). @PARASPLIT Results: AbbRE had an average 0.70 recall and 0.95 precision for the defined abbreviations. The authors found that an average of 25 percent of abbreviations were defined in biomedical articles and that of a randomly selected subset of undefined abbreviations, 68 percent could be mapped to any of four abbreviation databases. They also found that many abbreviations are ambiguous (i.e., they map to more than one full form in abbreviation databases). @PARASPLIT Conclusion: AbbRE is efficient for mapping defined abbreviations. To couple AbbRE with abbreviation databases for the mapping of undefined abbreviations, not only exhaustive abbreviation databases but also a method to resolve the ambiguity of abbreviations in the databases are needed.",
"Motivation: Acronyms result from a highly productive type of term variation and trigger the need for an acronym dictionary to establish associations between acronyms and their expanded forms. Results: We propose a novel method for recognizing acronym definitions in a text collection. Assuming a word sequence co-occurring frequently with a parenthetical expression to be a potential expanded form, our method identifies acronym definitions in a similar manner to the statistical term recognition task. Applied to the whole MEDLINE (7 811 582 abstracts), the implemented system extracted 886 755 acronym candidates and recognized 300 954 expanded forms in reasonable time. Our method outperformed base-line systems, achieving 99% precision and 82--95% recall on our evaluation corpus that roughly emulates the whole MEDLINE. Availability and Supplementary information: The implementations and supplementary information are available at our web site: http://www.chokkan.org/research/acromine/ Contact: okazaki@mi.ci.i.u-tokyo.ac.jp",
"We implemented a web server for acronym and abbreviation lookup, containing a collection of acronyms and their expansions gathered from a large number of web pages by a heuristic extraction process. Several different extraction algorithms were evaluated and compared. The corpus resulting from the best algorithm is comparable to a high-quality hand-crafted site, but has the potential to be much more inclusive as data from more web pages are processed.",
"",
"We are presenting work on recognising acronyms of the form Long-Form (Short-Form) such as \"International Monetary Fund (IMF)\" in millions of news articles in twenty-two languages, as part of our more general effort to recognise entities and their variants in news text and to use them for the automatic analysis of the news, including the linking of related news across languages. We show how the acronym recognition patterns, initially developed for medical terms, needed to be adapted to the more general news domain and we present evaluation results. We describe our effort to automatically merge the numerous long-form variants referring to the same short-form, while keeping non-related long-forms separate. Finally, we provide extensive statistics on the frequency and the distribution of short-form long-form pairs across languages.",
""
]
} |
1709.07626 | 2758133487 | Recurrent neural networks (RNNs) have shown promising results in audio and speech processing applications due to their strong capabilities in modelling sequential data. In many applications, RNNs tend to outperform conventional models based on GMM UBMs and i-vectors. Increasing popularity of IoT devices makes a strong case for implementing RNN based inferences for applications such as acoustics based authentication, voice commands, and edge analytics for smart homes. Nonetheless, the feasibility and performance of RNN based inferences on resource-constrained IoT devices remain largely unexplored. In this paper, we investigate the feasibility of using RNNs for an end-to-end authentication system based on breathing acoustics. We evaluate the performance of RNN models on three types of devices: smartphone, smartwatch, and Raspberry Pi, and show that unlike CNN models, RNN models can be easily ported onto resource-constrained devices without a significant loss in accuracy. | Early work by @cite_2 investigated the performance characteristics, resource requirements and the execution bottlenecks for deep learning models (CNN and DNN) on mobile, wearable, and IoT devices, to support audio and vision based apps. Results indicated that although smaller deep learning models work without issues on these devices, more complex CNN models such as AlexNet do not work well under the resource constraints. To address this problem, @cite_8 proposed SparseSep, which focuses primarily on finding a sparse representation of the fully connected layers and on separating the convolutional kernels. These techniques reduce the number of parameters and convolutional operations required to execute a deep learning model, and can thus significantly reduce the computational and space complexity on resource-constrained devices. | {
"cite_N": [
"@cite_8",
"@cite_2"
],
"mid": [
"2546536770",
"1977295820"
],
"abstract": [
"Deep learning has revolutionized the way sensor data are analyzed and interpreted. The accuracy gains these approaches offer make them attractive for the next generation of mobile, wearable and embedded sensory applications. However, state-of-the-art deep learning algorithms typically require a significant amount of device and processor resources, even just for the inference stages that are used to discriminate high-level classes from low-level data. The limited availability of memory, computation, and energy on mobile and embedded platforms thus pose a significant challenge to the adoption of these powerful learning techniques. In this paper, we propose SparseSep, a new approach that leverages the sparsification of fully connected layers and separation of convolutional kernels to reduce the resource requirements of popular deep learning algorithms. As a result, SparseSep allows large-scale DNNs and CNNs to run efficiently on mobile and embedded hardware with only minimal impact on inference accuracy. We experiment using SparseSep across a variety of common processors such as the Qualcomm Snapdragon 400, ARM Cortex M0 and M3, and Nvidia Tegra K1, and show that it allows inference for various deep models to execute more efficiently; for example, on average requiring 11.3 times less memory and running 13.3 times faster on these representative platforms.",
"Detecting and reacting to user behavior and ambient context are core elements of many emerging mobile sensing and Internet-of-Things (IoT) applications. However, extracting accurate inferences from raw sensor data is challenging within the noisy and complex environments where these systems are deployed. Deep Learning is one of the most promising approaches for overcoming this challenge, and achieving more robust and reliable inference. Techniques developed within this rapidly evolving area of machine learning are now state-of-the-art for many inference tasks (such as audio sensing and computer vision) commonly needed by IoT and wearable applications. But currently deep learning algorithms are seldom used in mobile IoT class hardware because they often impose debilitating levels of system overhead (e.g., memory, computation and energy). Efforts to address this barrier to deep learning adoption are slowed by our lack of a systematic understanding of how these algorithms behave at inference time on resource constrained hardware. In this paper, we present the first -- albeit preliminary -- measurement study of common deep learning models (such as Convolutional Neural Networks and Deep Neural Networks) on representative mobile and embedded platforms. The aim of this investigation is to begin to build knowledge of the performance characteristics, resource requirements and the execution bottlenecks for deep learning models when being used to recognize categories of behavior and context. The results and insights of this study lay an empirical foundation for the development of optimization methods and execution environments that enable deep learning to be more readily integrated into next-generation IoT, smartphones and wearable systems."
]
} |
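The kernel-separation idea behind SparseSep in the row above can be illustrated with a parameter count. The sketch below is a hedged illustration, not SparseSep's actual factorization (whose details are not given here); it compares a standard k x k convolution with a common depthwise-separable decomposition:

```python
def conv_params(k, c_in, c_out):
    # Standard k x k convolution: every output channel mixes all input channels.
    return k * k * c_in * c_out

def separable_params(k, c_in, c_out):
    # Depthwise k x k filter per input channel, then a 1x1 pointwise mixing layer.
    return k * k * c_in + c_in * c_out

# A typical 3x3 layer with 64 input and 64 output channels:
print(conv_params(3, 64, 64))       # 36864
print(separable_params(3, 64, 64))  # 4672, roughly 7.9x fewer weights
```

The same factor applies to multiply-adds, which is where memory and latency savings on embedded processors come from.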
1709.07592 | 2964318715 | Taking a photo outside, can we predict the immediate future, e.g., how would the cloud move in the sky? We address this problem by presenting a generative adversarial network (GAN) based two-stage approach to generating realistic time-lapse videos of high resolution. Given the first frame, our model learns to generate long-term future frames. The first stage generates videos of realistic contents for each frame. The second stage refines the generated video from the first stage by enforcing it to be closer to real videos with regard to motion dynamics. To further encourage vivid motion in the final generated video, Gram matrix is employed to model the motion more precisely. We build a large scale time-lapse dataset, and test our approach on this new dataset. Using our model, we are able to generate realistic videos of up to 128 × 128 resolution for 32 frames. Quantitative and qualitative experiment results demonstrate the superiority of our model over the state-of-the-art models. | A generative adversarial network (GAN) @cite_11 @cite_27 @cite_14 @cite_26 is composed of a generator and a discriminator. The generator tries to fool the discriminator by producing samples similar to real ones, while the discriminator is trained to distinguish the generated samples from the real ones. GANs have been successfully applied to image generation. In the seminal paper @cite_11 , models trained on the MNIST dataset and the Toronto Face Database (TFD), respectively, generate images of digits and faces with high likelihood. Relying only on random noise, GAN cannot control the mode of the generated samples, thus conditional GAN @cite_2 is proposed. Images of digits conditioned on class labels and captions conditioned on image features are generated. Many subsequent works are variants of conditional GAN, including image to image translation @cite_31 @cite_24 , text to image translation @cite_21 and super-resolution @cite_33 .
Our model is also a GAN conditioned on a starting image to generate a video. | {
"cite_N": [
"@cite_14",
"@cite_26",
"@cite_33",
"@cite_21",
"@cite_24",
"@cite_27",
"@cite_2",
"@cite_31",
"@cite_11"
],
"mid": [
"2963148415",
"",
"2963470893",
"2949999304",
"",
"2739748921",
"2125389028",
"2962793481",
""
],
"abstract": [
"Point processes are becoming very popular in modeling asynchronous sequential data due to their sound mathematical foundation and strength in modeling a variety of real-world phenomena. Currently, they are often characterized via intensity function which limits model's expressiveness due to unrealistic assumptions on its parametric form used in practice. Furthermore, they are learned via maximum likelihood approach which is prone to failure in multi-modal distributions of sequences. In this paper, we propose an intensity-free approach for point processes modeling that transforms nuisance processes to a target one. Furthermore, we train the model using a likelihood-free leveraging Wasserstein distance between point processes. Experiments on various synthetic and real-world data substantiate the superiority of the proposed point process model over conventional ones.",
"",
"Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.",
"Automatic synthesis of realistic images from text would be interesting and useful, but current AI systems are still far from this goal. However, in recent years generic and powerful recurrent neural network architectures have been developed to learn discriminative text feature representations. Meanwhile, deep convolutional generative adversarial networks (GANs) have begun to generate highly compelling images of specific categories, such as faces, album covers, and room interiors. In this work, we develop a novel deep architecture and GAN formulation to effectively bridge these advances in text and image model- ing, translating visual concepts from characters to pixels. We demonstrate the capability of our model to generate plausible images of birds and flowers from detailed text descriptions.",
"",
"",
"Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels.",
"Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.",
""
]
} |
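The generator/discriminator game described in the related-work passage above can be written down concretely. The sketch below spells out the standard GAN losses in plain Python (the function names are ours, not from the cited papers): the discriminator maximizes log D(x) + log(1 - D(G(z))), while the generator, in the common non-saturating form, maximizes log D(G(z)).

```python
import math

def d_loss(d_real, d_fake):
    # Discriminator: maximize log D(x) + log(1 - D(G(z))); we minimize the negative.
    return -(math.log(d_real) + math.log(1.0 - d_fake))

def g_loss(d_fake):
    # Generator (non-saturating form): maximize log D(G(z)).
    return -math.log(d_fake)

# At the game's equilibrium, a fooled discriminator outputs 0.5 everywhere:
print(round(d_loss(0.5, 0.5), 4))  # 1.3863  (= 2 ln 2)
print(round(g_loss(0.5), 4))       # 0.6931  (= ln 2)
```

A conditional GAN changes only the inputs, not these losses: both D and G additionally receive the conditioning signal (a class label, or here a starting frame).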
1709.07592 | 2964318715 | Taking a photo outside, can we predict the immediate future, e.g., how would the cloud move in the sky? We address this problem by presenting a generative adversarial network (GAN) based two-stage approach to generating realistic time-lapse videos of high resolution. Given the first frame, our model learns to generate long-term future frames. The first stage generates videos of realistic contents for each frame. The second stage refines the generated video from the first stage by enforcing it to be closer to real videos with regard to motion dynamics. To further encourage vivid motion in the final generated video, Gram matrix is employed to model the motion more precisely. We build a large scale time-lapse dataset, and test our approach on this new dataset. Using our model, we are able to generate realistic videos of up to 128 × 128 resolution for 32 frames. Quantitative and qualitative experiment results demonstrate the superiority of our model over the state-of-the-art models. | Inspired by the coarse-to-fine strategy, multi-stack methods such as StackGAN @cite_17 , LAPGAN @cite_4 have been proposed to first generate coarse images and then refine them to finer images. Our model also employs this strategy to stack GANs in two stages. However, instead of refining the pixel-level details in each frame, the second stage focuses on improving motion dynamics across frames. | {
"cite_N": [
"@cite_4",
"@cite_17"
],
"mid": [
"648143168",
"2964024144"
],
"abstract": [
"In this paper we introduce a generative parametric model capable of producing high quality samples of natural images. Our approach uses a cascade of convolutional networks within a Laplacian pyramid framework to generate images in a coarse-to-fine fashion. At each level of the pyramid, a separate generative convnet model is trained using the Generative Adversarial Nets (GAN) approach [11]. Samples drawn from our model are of significantly higher quality than alternate approaches. In a quantitative assessment by human evaluators, our CIFAR10 samples were mistaken for real images around 40 of the time, compared to 10 for samples drawn from a GAN baseline model. We also show samples from models trained on the higher resolution images of the LSUN scene dataset.",
"Synthesizing high-quality images from text descriptions is a challenging problem in computer vision and has many practical applications. Samples generated by existing textto- image approaches can roughly reflect the meaning of the given descriptions, but they fail to contain necessary details and vivid object parts. In this paper, we propose Stacked Generative Adversarial Networks (StackGAN) to generate 256.256 photo-realistic images conditioned on text descriptions. We decompose the hard problem into more manageable sub-problems through a sketch-refinement process. The Stage-I GAN sketches the primitive shape and colors of the object based on the given text description, yielding Stage-I low-resolution images. The Stage-II GAN takes Stage-I results and text descriptions as inputs, and generates high-resolution images with photo-realistic details. It is able to rectify defects in Stage-I results and add compelling details with the refinement process. To improve the diversity of the synthesized images and stabilize the training of the conditional-GAN, we introduce a novel Conditioning Augmentation technique that encourages smoothness in the latent conditioning manifold. Extensive experiments and comparisons with state-of-the-arts on benchmark datasets demonstrate that the proposed method achieves significant improvements on generating photo-realistic images conditioned on text descriptions."
]
} |
1709.07592 | 2964318715 | Taking a photo outside, can we predict the immediate future, e.g., how would the cloud move in the sky? We address this problem by presenting a generative adversarial network (GAN) based two-stage approach to generating realistic time-lapse videos of high resolution. Given the first frame, our model learns to generate long-term future frames. The first stage generates videos of realistic contents for each frame. The second stage refines the generated video from the first stage by enforcing it to be closer to real videos with regard to motion dynamics. To further encourage vivid motion in the final generated video, Gram matrix is employed to model the motion more precisely. We build a large scale time-lapse dataset, and test our approach on this new dataset. Using our model, we are able to generate realistic videos of up to 128 × 128 resolution for 32 frames. Quantitative and qualitative experiment results demonstrate the superiority of our model over the state-of-the-art models. | The closest work to ours is @cite_20 , which also generates time-lapse videos. However, there are important differences between their work and ours. First, our method is based on 3D convolution while a recurrent neural network is employed in @cite_20 to recursively generate future frames, which is prone to error accumulation. Second, as modeling motion is indispensable for video generation, we explicitly model motion by introducing the Gram matrix. Finally, we generate high-resolution ( @math ) videos of dynamic scenes, while the generated videos in @cite_20 are simple (usually with clean background) and of resolution 64 @math 64. | {
"cite_N": [
"@cite_20"
],
"mid": [
"2514262209"
],
"abstract": [
"Based on life-long observations of physical, chemical, and biologic phenomena in the natural world, humans can often easily picture in their minds what an object will look like in the future. But, what about computers? In this paper, we learn computational models of object transformations from time-lapse videos. In particular, we explore the use of generative models to create depictions of objects at future times. These models explore several different prediction tasks: generating a future state given a single depiction of an object, generating a future state given two depictions of an object at different times, and generating future states recursively in a recurrent framework. We provide both qualitative and quantitative evaluations of the generated results, and also conduct a human evaluation to compare variations of our models."
]
} |
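The Gram matrix used in the rows above to model motion dynamics is the channel-wise inner-product matrix of feature activations. A minimal sketch in plain Python (the normalization by N is one common convention; the paper's exact scaling is not specified here):

```python
def gram_matrix(features):
    # features: C rows (channels), each flattened over space/time to N values.
    # G[i][j] = <f_i, f_j> / N records which channels co-activate while
    # discarding where they activate; a position-free summary of dynamics.
    c, n = len(features), len(features[0])
    return [[sum(a * b for a, b in zip(features[i], features[j])) / n
             for j in range(c)] for i in range(c)]

feats = [[1.0, 2.0, 3.0],
         [0.0, 1.0, 0.0]]
print(gram_matrix(feats))
```

Matching Gram matrices between generated and real videos penalizes wrong channel co-activation statistics rather than per-pixel differences, which is why it suits motion rather than appearance.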
1709.07634 | 2758074773 | For most state-of-the-art architectures, Rectified Linear Unit (ReLU) becomes a standard component accompanied with each layer. Although ReLU can ease the network training to an extent, the character of blocking negative values may suppress the propagation of useful information and leads to the difficulty of optimizing very deep Convolutional Neural Networks (CNNs). Moreover, stacking layers with nonlinear activations is hard to approximate the intrinsic linear transformations between feature representations. In this paper, we investigate the effect of erasing ReLUs of certain layers and apply it to various representative architectures following deterministic rules. It can ease the optimization and improve the generalization performance for very deep CNN models. We find two key factors being essential to the performance improvement: 1) the location where ReLU should be erased inside the basic module; 2) the proportion of basic modules to erase ReLU; We show that erasing the last ReLU layer of all basic modules in a network usually yields improved performance. In experiments, our approach successfully improves the performance of various representative architectures, and we report the improved results on SVHN, CIFAR-10/100, and ImageNet. Moreover, we achieve competitive single-model performance on CIFAR-100 with 16.53% error rate compared to state-of-the-art. | The nonlinear unit plays an essential role in strengthening the representation ability of a deep neural network. In early years, sigmoid or tanh were the standard recipes for building shallow neural networks. Since the rise of deep learning, ReLU @cite_23 has been found to be more powerful in easing the training of deep architectures and contributed a lot to the success of many record-holders @cite_0 @cite_11 @cite_22 @cite_19 @cite_7 . There exist many variants of ReLU nowadays, such as Leaky ReLU @cite_20 , PReLU @cite_3 , etc.
The common ground shared by these units is that the computation is linear on a subset of neurons. Models trained with such nonlinear units can be viewed as a combination of an exponential number of linear models that share parameters @cite_8 . This suggests that modeling the local linearity explicitly may be useful. In this paper, we extend this linearity from a subset to the whole set of neurons in some layers and empirically find that this extension effectively improves the performance of the model. | {
"cite_N": [
"@cite_22",
"@cite_7",
"@cite_8",
"@cite_3",
"@cite_0",
"@cite_19",
"@cite_23",
"@cite_20",
"@cite_11"
],
"mid": [
"2949605076",
"2949650786",
"",
"1677182931",
"",
"2274287116",
"",
"",
"1686810756"
],
"abstract": [
"Convolutional networks are at the core of most state-of-the-art computer vision solutions for a wide variety of tasks. Since 2014 very deep convolutional networks started to become mainstream, yielding substantial gains in various benchmarks. Although increased model size and computational cost tend to translate to immediate quality gains for most tasks (as long as enough labeled data is provided for training), computational efficiency and low parameter count are still enabling factors for various use cases such as mobile vision and big-data scenarios. Here we explore ways to scale up networks in ways that aim at utilizing the added computation as efficiently as possible by suitably factorized convolutions and aggressive regularization. We benchmark our methods on the ILSVRC 2012 classification challenge validation set demonstrate substantial gains over the state of the art: 21.2 top-1 and 5.6 top-5 error for single frame evaluation using a network with a computational cost of 5 billion multiply-adds per inference and with using less than 25 million parameters. With an ensemble of 4 models and multi-crop evaluation, we report 3.5 top-5 error on the validation set (3.6 error on the test set) and 17.3 top-1 error on the validation set.",
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.",
"",
"Rectified activation units (rectifiers) are essential for state-of-the-art neural networks. In this work, we study rectifier neural networks for image classification from two aspects. First, we propose a Parametric Rectified Linear Unit (PReLU) that generalizes the traditional rectified unit. PReLU improves model fitting with nearly zero extra computational cost and little overfitting risk. Second, we derive a robust initialization method that particularly considers the rectifier nonlinearities. This method enables us to train extremely deep rectified models directly from scratch and to investigate deeper or wider network architectures. Based on the learnable activation and advanced initialization, we achieve 4.94 top-5 test error on the ImageNet 2012 classification dataset. This is a 26 relative improvement over the ILSVRC 2014 winner (GoogLeNet, 6.66 [33]). To our knowledge, our result is the first to surpass the reported human-level performance (5.1 , [26]) on this dataset.",
"",
"Very deep convolutional networks have been central to the largest advances in image recognition performance in recent years. One example is the Inception architecture that has been shown to achieve very good performance at relatively low computational cost. Recently, the introduction of residual connections in conjunction with a more traditional architecture has yielded state-of-the-art performance in the 2015 ILSVRC challenge; its performance was similar to the latest generation Inception-v3 network. This raises the question of whether there are any benefit in combining the Inception architecture with residual connections. Here we give clear empirical evidence that training with residual connections accelerates the training of Inception networks significantly. There is also some evidence of residual Inception networks outperforming similarly expensive Inception networks without residual connections by a thin margin. We also present several new streamlined architectures for both residual and non-residual Inception networks. These variations improve the single-frame recognition performance on the ILSVRC 2012 classification task significantly. We further demonstrate how proper activation scaling stabilizes the training of very wide residual Inception networks. With an ensemble of three residual and one Inception-v4, we achieve 3.08 percent top-5 error on the test set of the ImageNet classification (CLS) challenge",
"",
"",
"In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision."
]
} |
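The ReLU variants surveyed in the row above differ only in how they treat negative inputs, which is why each unit is linear on a subset of neurons. A minimal sketch of the three piecewise-linear units mentioned:

```python
def relu(x):
    # Blocks all negative values.
    return max(0.0, x)

def leaky_relu(x, alpha=0.01):
    # Fixed small negative slope keeps gradients alive for x < 0.
    return x if x > 0 else alpha * x

def prelu(x, a):
    # Same shape as Leaky ReLU, but the negative slope `a` is learned.
    return x if x > 0 else a * x

print(relu(-2.0), leaky_relu(-2.0), prelu(-2.0, 0.25))  # 0.0 -0.02 -0.5
```

For positive inputs all three are the identity; a network of such units therefore computes an exact linear map on each activation-pattern region of the input space.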
1709.07634 | 2758074773 | For most state-of-the-art architectures, Rectified Linear Unit (ReLU) becomes a standard component accompanied with each layer. Although ReLU can ease the network training to an extent, the character of blocking negative values may suppress the propagation of useful information and leads to the difficulty of optimizing very deep Convolutional Neural Networks (CNNs). Moreover, stacking layers with nonlinear activations is hard to approximate the intrinsic linear transformations between feature representations. In this paper, we investigate the effect of erasing ReLUs of certain layers and apply it to various representative architectures following deterministic rules. It can ease the optimization and improve the generalization performance for very deep CNN models. We find two key factors being essential to the performance improvement: 1) the location where ReLU should be erased inside the basic module; 2) the proportion of basic modules to erase ReLU; We show that erasing the last ReLU layer of all basic modules in a network usually yields improved performance. In experiments, our approach successfully improves the performance of various representative architectures, and we report the improved results on SVHN, CIFAR-10/100, and ImageNet. Moreover, we achieve competitive single-model performance on CIFAR-100 with 16.53% error rate compared to state-of-the-art. | The Inception module was first proposed in @cite_5 , which considers the Hebbian and multi-scale principles in CNN design. @cite_29 proposes Batch Normalization (BN) to accelerate network training. They also applied BN to a new variant of GoogleNet, named BN-Inception. @cite_22 proposes several general design principles to improve the Inception module, which leads to a new CNN architecture, Inception-v3. @cite_19 combines the advantages of Inception architectures with residual connections to speed up CNN training. | {
"cite_N": [
"@cite_19",
"@cite_5",
"@cite_29",
"@cite_22"
],
"mid": [
"2274287116",
"2950179405",
"2949117887",
"2949605076"
],
"abstract": [
"Very deep convolutional networks have been central to the largest advances in image recognition performance in recent years. One example is the Inception architecture that has been shown to achieve very good performance at relatively low computational cost. Recently, the introduction of residual connections in conjunction with a more traditional architecture has yielded state-of-the-art performance in the 2015 ILSVRC challenge; its performance was similar to the latest generation Inception-v3 network. This raises the question of whether there are any benefit in combining the Inception architecture with residual connections. Here we give clear empirical evidence that training with residual connections accelerates the training of Inception networks significantly. There is also some evidence of residual Inception networks outperforming similarly expensive Inception networks without residual connections by a thin margin. We also present several new streamlined architectures for both residual and non-residual Inception networks. These variations improve the single-frame recognition performance on the ILSVRC 2012 classification task significantly. We further demonstrate how proper activation scaling stabilizes the training of very wide residual Inception networks. With an ensemble of three residual and one Inception-v4, we achieve 3.08 percent top-5 error on the test set of the ImageNet classification (CLS) challenge",
"We propose a deep convolutional neural network architecture codenamed \"Inception\", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.",
"Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9 top-5 validation error (and 4.8 test error), exceeding the accuracy of human raters.",
"Convolutional networks are at the core of most state-of-the-art computer vision solutions for a wide variety of tasks. Since 2014 very deep convolutional networks started to become mainstream, yielding substantial gains in various benchmarks. Although increased model size and computational cost tend to translate to immediate quality gains for most tasks (as long as enough labeled data is provided for training), computational efficiency and low parameter count are still enabling factors for various use cases such as mobile vision and big-data scenarios. Here we explore ways to scale up networks in ways that aim at utilizing the added computation as efficiently as possible by suitably factorized convolutions and aggressive regularization. We benchmark our methods on the ILSVRC 2012 classification challenge validation set demonstrate substantial gains over the state of the art: 21.2 top-1 and 5.6 top-5 error for single frame evaluation using a network with a computational cost of 5 billion multiply-adds per inference and with using less than 25 million parameters. With an ensemble of 4 models and multi-crop evaluation, we report 3.5 top-5 error on the validation set (3.6 error on the test set) and 17.3 top-1 error on the validation set."
]
} |
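Batch Normalization, credited in the row above with accelerating network training, normalizes each mini-batch to zero mean and unit variance before a learnable scale and shift. A minimal 1-D sketch (per-channel handling and the running averages used at inference are omitted):

```python
def batch_norm(xs, gamma=1.0, beta=0.0, eps=1e-5):
    # Normalize a mini-batch to zero mean / unit variance, then scale and shift.
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / len(xs)
    return [gamma * (x - m) / (var + eps) ** 0.5 + beta for x in xs]

out = batch_norm([1.0, 2.0, 3.0, 4.0])
print([round(v, 3) for v in out])  # [-1.342, -0.447, 0.447, 1.342]
```

Because gamma and beta are learned, the layer can recover the identity transform if normalization turns out to hurt; the benefit comes from stabilizing the input distribution each layer sees during training.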
1709.07634 | 2758074773 | For most state-of-the-art architectures, Rectified Linear Unit (ReLU) becomes a standard component accompanied with each layer. Although ReLU can ease the network training to an extent, the character of blocking negative values may suppress the propagation of useful information and leads to the difficulty of optimizing very deep Convolutional Neural Networks (CNNs). Moreover, stacking layers with nonlinear activations is hard to approximate the intrinsic linear transformations between feature representations. In this paper, we investigate the effect of erasing ReLUs of certain layers and apply it to various representative architectures following deterministic rules. It can ease the optimization and improve the generalization performance for very deep CNN models. We find two key factors being essential to the performance improvement: 1) the location where ReLU should be erased inside the basic module; 2) the proportion of basic modules to erase ReLU; We show that erasing the last ReLU layer of all basic modules in a network usually yields improved performance. In experiments, our approach successfully improves the performance of various representative architectures, and we report the improved results on SVHN, CIFAR-10/100, and ImageNet. Moreover, we achieve competitive single-model performance on CIFAR-100 with 16.53% error rate compared to state-of-the-art. | The Highway network @cite_24 is designed to ease gradient-based training of very deep networks. @cite_7 proposes the deep residual network (ResNet), which achieves a remarkable breakthrough in ImageNet classification and won 1st places in various ImageNet and COCO competitions. The proposed residual learning makes the network easier to optimize and gains accuracy from considerably increased depth. Following ResNet, @cite_10 proposes the pre-activation ResNet, which improves ResNet by using pre-activation blocks. Wide ResNet @cite_15 decreases the depth and increases the width of residual networks.
It tackles the problem of diminishing feature reuse for training very deep residual networks. ResNeXt @cite_27 optimizes the convolution layer in ResNet by aggregating a set of transformations with the same topology. | {
"cite_N": [
"@cite_7",
"@cite_24",
"@cite_27",
"@cite_15",
"@cite_10"
],
"mid": [
"2949650786",
"2950621961",
"2953328958",
"2401231614",
"2302255633"
],
"abstract": [
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.",
"Theoretical and empirical evidence indicates that the depth of neural networks is crucial for their success. However, training becomes more difficult as depth increases, and training of very deep networks remains an open problem. Here we introduce a new architecture designed to overcome this. Our so-called highway networks allow unimpeded information flow across many layers on information highways. They are inspired by Long Short-Term Memory recurrent networks and use adaptive gating units to regulate the information flow. Even with hundreds of layers, highway networks can be trained directly through simple gradient descent. This enables the study of extremely deep and efficient architectures.",
"We present a simple, highly modularized network architecture for image classification. Our network is constructed by repeating a building block that aggregates a set of transformations with the same topology. Our simple design results in a homogeneous, multi-branch architecture that has only a few hyper-parameters to set. This strategy exposes a new dimension, which we call \"cardinality\" (the size of the set of transformations), as an essential factor in addition to the dimensions of depth and width. On the ImageNet-1K dataset, we empirically show that even under the restricted condition of maintaining complexity, increasing cardinality is able to improve classification accuracy. Moreover, increasing cardinality is more effective than going deeper or wider when we increase the capacity. Our models, named ResNeXt, are the foundations of our entry to the ILSVRC 2016 classification task in which we secured 2nd place. We further investigate ResNeXt on an ImageNet-5K set and the COCO detection set, also showing better results than its ResNet counterpart. The code and models are publicly available online.",
"Deep residual networks were shown to be able to scale up to thousands of layers and still have improving performance. However, each fraction of a percent of improved accuracy costs nearly doubling the number of layers, and so training very deep residual networks has a problem of diminishing feature reuse, which makes these networks very slow to train. To tackle these problems, in this paper we conduct a detailed experimental study on the architecture of ResNet blocks, based on which we propose a novel architecture where we decrease depth and increase width of residual networks. We call the resulting network structures wide residual networks (WRNs) and show that these are far superior over their commonly used thin and very deep counterparts. For example, we demonstrate that even a simple 16-layer-deep wide residual network outperforms in accuracy and efficiency all previous deep residual networks, including thousand-layer-deep networks, achieving new state-of-the-art results on CIFAR, SVHN, COCO, and significant improvements on ImageNet. Our code and models are available at this https URL",
"Deep residual networks have emerged as a family of extremely deep architectures showing compelling accuracy and nice convergence behaviors. In this paper, we analyze the propagation formulations behind the residual building blocks, which suggest that the forward and backward signals can be directly propagated from one block to any other block, when using identity mappings as the skip connections and after-addition activation. A series of ablation experiments support the importance of these identity mappings. This motivates us to propose a new residual unit, which makes training easier and improves generalization. We report improved results using a 1001-layer ResNet on CIFAR-10 (4.62 error) and CIFAR-100, and a 200-layer ResNet on ImageNet. Code is available at: https: github.com KaimingHe resnet-1k-layers."
]
} |
1709.07634 | 2758074773 | For most state-of-the-art architectures, Rectified Linear Unit (ReLU) becomes a standard component accompanied with each layer. Although ReLU can ease the network training to an extent, the character of blocking negative values may suppress the propagation of useful information and leads to the difficulty of optimizing very deep Convolutional Neural Networks (CNNs). Moreover, stacking layers with nonlinear activations is hard to approximate the intrinsic linear transformations between feature representations. In this paper, we investigate the effect of erasing ReLUs of certain layers and apply it to various representative architectures following deterministic rules. It can ease the optimization and improve the generalization performance for very deep CNN models. We find two key factors being essential to the performance improvement: 1) the location where ReLU should be erased inside the basic module; 2) the proportion of basic modules to erase ReLU; We show that erasing the last ReLU layer of all basic modules in a network usually yields improved performance. In experiments, our approach successfully improves the performance of various representative architectures, and we report the improved results on SVHN, CIFAR-10 100, and ImageNet. Moreover, we achieve competitive single-model performance on CIFAR-100 with 16.53 error rate compared to state-of-the-art. | Some researchers incorporate stochastic procedures into CNN models. @cite_13 proposes stochastic depth, a training procedure enabling the seemingly contradictory setup of training short networks and using deep networks at test time. @cite_17 proposes to use parallel branches with a stochastic affine combination in ResNet to avoid the overfitting problem. Our approach improves CNN models in a different way from these methods, and the approaches can complement each other.
While a contemporary work @cite_2 argues that a 1:1 convolution and ReLU ratio is not the best choice for designing network architectures, we observe that tuning the convolution and ReLU ratio alone may not always lead to improvement for different network structures. Instead, the location where ReLUs should be erased is the key factor, which is the focus of this paper. | {
"cite_N": [
"@cite_2",
"@cite_13",
"@cite_17"
],
"mid": [
"2754989491",
"2949892913",
""
],
"abstract": [
"With the rapid development of Deep Convolutional Neural Networks (DCNNs), numerous works focus on designing better network architectures (i.e., AlexNet, VGG, Inception, ResNet and DenseNet etc.). Nevertheless, all these networks have the same characteristic: each convolutional layer is followed by an activation layer, a Rectified Linear Unit (ReLU) layer is the most used among them. In this work, we argue that the paired module with 1:1 convolution and ReLU ratio is not the best choice since it may result in poor generalization ability. Thus, we try to investigate the more suitable convolution and ReLU ratio for exploring the better network architectures. Specifically, inspired by Leaky ReLU, we focus on adopting the proportional module with N:M (N @math M) convolution and ReLU ratio to design the better networks. From the perspective of ensemble learning, Leaky ReLU can be considered as an ensemble of networks with different convolution and ReLU ratio. We find that the proportional module with N:M (N @math M) convolution and ReLU ratio can help networks acquire the better performance, through the analysis of a simple Leaky ReLU model. By utilizing the proportional module with N:M (N @math M) convolution and ReLU ratio, many popular networks can form more rich representations in models, since the N:M (N @math M) proportional module can utilize information more effectively. Furthermore, we apply this module in diverse DCNN models to explore whether is the N:M (N @math M) convolution and ReLU ratio indeed more effective. From our experimental results, we can find that such a simple yet effective method achieves better performance in different benchmarks with various network architectures and the experimental results verify that the superiority of the proportional module.",
"Very deep convolutional networks with hundreds of layers have led to significant reductions in error on competitive benchmarks. Although the unmatched expressiveness of the many layers can be highly desirable at test time, training very deep networks comes with its own set of challenges. The gradients can vanish, the forward flow often diminishes, and the training time can be painfully slow. To address these problems, we propose stochastic depth, a training procedure that enables the seemingly contradictory setup to train short networks and use deep networks at test time. We start with very deep networks but during training, for each mini-batch, randomly drop a subset of layers and bypass them with the identity function. This simple approach complements the recent success of residual networks. It reduces training time substantially and improves the test error significantly on almost all data sets that we used for evaluation. With stochastic depth we can increase the depth of residual networks even beyond 1200 layers and still yield meaningful improvements in test error (4.91 on CIFAR-10).",
""
]
} |
1709.07401 | 2755289116 | We study a unique network dataset including periodic surveys and electronic logs of dyadic contacts via smartphones. The participants were a sample of freshmen entering university in the Fall 2011. Their opinions on a variety of political and social issues and lists of activities on campus were regularly recorded at the beginning and end of each semester for the first three years of study. We identify a behavioral network defined by call and text data, and a cognitive network based on friendship nominations in ego-network surveys. Both networks are limited to study participants. Since a wide range of attributes on each node were collected in self-reports, we refer to these networks as attribute-rich networks. We study whether student preferences for certain attributes of friends can predict formation and dissolution of edges in both networks. We introduce a method for computing student preferences for different attributes which we use to predict link formation and dissolution. We then rank these attributes according to their importance for making predictions. We find that personal preferences, in particular political views, and preferences for common activities help predict link formation and dissolution in both the behavioral and cognitive networks. | Link prediction is a well-studied topic. The standard techniques for link prediction are described in @cite_14 and @cite_26. Most of the experiments there are on collaboration networks between researchers. However, none of the networks in these papers are as rich in node attributes as NetSense. We study how homophily in terms of node attributes affects link prediction in @cite_27. While we were able to get reasonable results there, the innovative method proposed in this paper improves the quality of link prediction in NetSense. | {
"cite_N": [
"@cite_27",
"@cite_14",
"@cite_26"
],
"mid": [
"2949171182",
"",
"2768375068"
],
"abstract": [
"We study a unique behavioral network data set (based on periodic surveys and on electronic logs of dyadic contact via smartphones) collected at the University of Notre Dame.The participants are a sample of members of the entering class of freshmen in the fall of 2011 whose opinions on a wide variety of political and social issues and activities on campus were regularly recorded - at the beginning and end of each semester - for the first three years of their residence on campus. We create a communication activity network implied by call and text data, and a friendship network based on surveys. Both networks are limited to students participating in the NetSense surveys. We aim at finding student traits and activities on which agreements correlate well with formation and persistence of links while disagreements are highly correlated with non-existence or dissolution of links in the two social networks that we created. Using statistical analysis and machine learning, we observe several traits and activities displaying such correlations, thus being of potential use to predict social network evolution.",
"",
"Social network analysis has attracted much attention in recent years. Link prediction is a key research directions within this area. In this research, we study link prediction as a supervised learning task. Along the way, we identify a set of features that are key to the superior performance under the supervised learning setup. The identified features are very easy to compute, and at the same time surprisingly effective in solving the link prediction problem. We also explain the effectiveness of the features from their class density distribution. Then we compare different classes of supervised learning algorithms in terms of their prediction performance using various performance metrics, such as accuracy, precision-recall, F-values, squared error etc. with a 5-fold cross validation. Our results on two practical social network datasets shows that most of the well-known classification algorithms (decision tree, k-nn,multilayer perceptron, SVM, rbf network) can predict link with surpassing performances, but SVM defeats all of them with narrow margin in all different performance measures. Again, ranking of features with popular feature ranking algorithms shows that a small subset of features always plays a significant role in the link prediction job."
]
} |
1709.07417 | 2619307294 | We present an approach to automate the process of discovering optimization methods, with a focus on deep learning architectures. We train a Recurrent Neural Network controller to generate a string in a domain specific language that describes a mathematical update equation based on a list of primitive functions, such as the gradient, running average of the gradient, etc. The controller is trained with Reinforcement Learning to maximize the performance of a model after a few epochs. On CIFAR-10, our method discovers several update rules that are better than many commonly used optimizers, such as Adam, RMSProp, or SGD with and without Momentum on a ConvNet model. We introduce two new optimizers, named PowerSign and AddSign, which we show transfer well and improve training on a variety of different tasks and architectures, including ImageNet classification and Google's neural machine translation system. | Neural networks are difficult and slow to train, and many methods have been designed to reduce this difficulty (e.g., ). More recent optimization methods combine insights from both stochastic and batch methods in that they use a small minibatch, similar to SGD, but implement many heuristics to estimate diagonal second-order information, similar to Hessian-free or L-BFGS @cite_9 . This combination often yields faster convergence for practical problems @cite_15 @cite_5 @cite_23 . For example, Adam @cite_23 , a commonly-used optimizer in deep learning, implements simple heuristics to estimate the mean and variance of the gradient, which are used to generate more stable updates during training. | {
"cite_N": [
"@cite_5",
"@cite_9",
"@cite_23",
"@cite_15"
],
"mid": [
"2168231600",
"2051434435",
"1522301498",
"2146502635"
],
"abstract": [
"Recent work in unsupervised feature learning and deep learning has shown that being able to train large models can dramatically improve performance. In this paper, we consider the problem of training a deep network with billions of parameters using tens of thousands of CPU cores. We have developed a software framework called DistBelief that can utilize computing clusters with thousands of machines to train large models. Within this framework, we have developed two algorithms for large-scale distributed training: (i) Downpour SGD, an asynchronous stochastic gradient descent procedure supporting a large number of model replicas, and (ii) Sandblaster, a framework that supports a variety of distributed batch optimization procedures, including a distributed implementation of L-BFGS. Downpour SGD and Sandblaster L-BFGS both increase the scale and speed of deep network training. We have successfully used our system to train a deep network 30x larger than previously reported in the literature, and achieves state-of-the-art performance on ImageNet, a visual object recognition task with 16 million images and 21k categories. We show that these same techniques dramatically accelerate the training of a more modestly- sized deep network for a commercial speech recognition service. Although we focus on and report performance of these methods as applied to training large neural networks, the underlying algorithms are applicable to any gradient-based machine learning algorithm.",
"We study the numerical performance of a limited memory quasi-Newton method for large scale optimization, which we call the L-BFGS method. We compare its performance with that of the method developed by Buckley and LeNir (1985), which combines cycles of BFGS steps and conjugate direction steps. Our numerical tests indicate that the L-BFGS method is faster than the method of Buckley and LeNir, and is better able to use additional storage to accelerate convergence. We show that the L-BFGS method can be greatly accelerated by means of a simple scaling. We then compare the L-BFGS method with the partitioned quasi-Newton method of Griewank and Toint (1982a). The results show that, for some problems, the partitioned quasi-Newton method is clearly superior to the L-BFGS method. However we find that for other problems the L-BFGS method is very competitive due to its low iteration cost. We also study the convergence properties of the L-BFGS method, and prove global convergence on uniformly convex problems.",
"We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has little memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm.",
"We present a new family of subgradient methods that dynamically incorporate knowledge of the geometry of the data observed in earlier iterations to perform more informative gradient-based learning. Metaphorically, the adaptation allows us to find needles in haystacks in the form of very predictive but rarely seen features. Our paradigm stems from recent advances in stochastic optimization and online learning which employ proximal functions to control the gradient steps of the algorithm. We describe and analyze an apparatus for adaptively modifying the proximal function, which significantly simplifies setting a learning rate and results in regret guarantees that are provably as good as the best proximal function that can be chosen in hindsight. We give several efficient algorithms for empirical risk minimization problems with common and important regularization functions and domain constraints. We experimentally study our theoretical analysis and show that adaptive subgradient methods outperform state-of-the-art, yet non-adaptive, subgradient algorithms."
]
} |
1709.07417 | 2619307294 | We present an approach to automate the process of discovering optimization methods, with a focus on deep learning architectures. We train a Recurrent Neural Network controller to generate a string in a domain specific language that describes a mathematical update equation based on a list of primitive functions, such as the gradient, running average of the gradient, etc. The controller is trained with Reinforcement Learning to maximize the performance of a model after a few epochs. On CIFAR-10, our method discovers several update rules that are better than many commonly used optimizers, such as Adam, RMSProp, or SGD with and without Momentum on a ConvNet model. We introduce two new optimizers, named PowerSign and AddSign, which we show transfer well and improve training on a variety of different tasks and architectures, including ImageNet classification and Google's neural machine translation system. | Many of the above update rules are designed by borrowing ideas from convex analysis, even though optimization problems in neural networks are non-convex. Recent empirical results with non-monotonic learning rate heuristics @cite_4 suggest that there are still many unknowns in training neural networks and that many ideas in non-convex optimization can be used to improve it. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2518108298"
],
"abstract": [
"Restart techniques are common in gradient-free optimization to deal with multimodal functions. Partial restarts are also gaining popularity in gradient-based optimization to improve the rate of convergence in accelerated gradient schemes to deal with ill-conditioned functions. In this paper, we propose a simple restart technique for stochastic gradient descent to improve its anytime performance when training deep neural networks. We empirically study its performance on CIFAR-10 and CIFAR-100 datasets where we demonstrate new state-of-the-art results below 4 and 19 , respectively. Our source code is available at this https URL"
]
} |
1709.07417 | 2619307294 | We present an approach to automate the process of discovering optimization methods, with a focus on deep learning architectures. We train a Recurrent Neural Network controller to generate a string in a domain specific language that describes a mathematical update equation based on a list of primitive functions, such as the gradient, running average of the gradient, etc. The controller is trained with Reinforcement Learning to maximize the performance of a model after a few epochs. On CIFAR-10, our method discovers several update rules that are better than many commonly used optimizers, such as Adam, RMSProp, or SGD with and without Momentum on a ConvNet model. We introduce two new optimizers, named PowerSign and AddSign, which we show transfer well and improve training on a variety of different tasks and architectures, including ImageNet classification and Google's neural machine translation system. | The concept of using a Recurrent Neural Network for meta-learning has been attempted in the past, either via genetic programming or gradient descent @cite_13 @cite_16 . Similar to the above recent methods, these approaches only generate the updates, but not the update equations, as proposed in this paper. | {
"cite_N": [
"@cite_16",
"@cite_13"
],
"mid": [
"2263490141",
"1549134978"
],
"abstract": [
"Deep feedforward and recurrent networks have achieved impressive results in many perception and language processing applications. Recently, more complex architectures such as Neural Turing Machines and Memory Networks have been proposed for tasks including question answering and general computation, creating a new set of optimization challenges. In this paper, we explore the low-overhead and easy-to-implement optimization technique of adding annealed Gaussian noise to the gradient, which we find surprisingly effective when training these very deep architectures. Unlike classical weight noise, gradient noise injection is complementary to advanced stochastic optimization algorithms such as Adam and AdaGrad. The technique not only helps to avoid overfitting, but also can result in lower training loss. We see consistent improvements in performance across an array of complex models, including state-of-the-art deep networks for question answering and algorithm learning. We observe that this optimization strategy allows a fully-connected 20-layer deep network to escape a bad initialization with standard stochastic gradient descent. We encourage further application of this technique to additional modern neural architectures.",
"In previous work we explained how to use standard optimization methods such as simulated annealing, gradient descent and genetic algorithms to optimize a parametric function which could be used as a learning rule for neural networks. To use these methods, we had to choose a fixed number of parameters and a rigid form for the learning rule. In this article, we propose to use genetic programming to find not only the values of rule parameters but also the optimal number of parameters and the form of the rule. Experiments on classification tasks suggest genetic programming finds better learning rules than other optimization methods. Furthermore, the best rule found with genetic programming outperformed the well-known backpropagation algorithm for a given set of tasks. >"
]
} |
1709.07417 | 2619307294 | We present an approach to automate the process of discovering optimization methods, with a focus on deep learning architectures. We train a Recurrent Neural Network controller to generate a string in a domain specific language that describes a mathematical update equation based on a list of primitive functions, such as the gradient, running average of the gradient, etc. The controller is trained with Reinforcement Learning to maximize the performance of a model after a few epochs. On CIFAR-10, our method discovers several update rules that are better than many commonly used optimizers, such as Adam, RMSProp, or SGD with and without Momentum on a ConvNet model. We introduce two new optimizers, named PowerSign and AddSign, which we show transfer well and improve training on a variety of different tasks and architectures, including ImageNet classification and Google's neural machine translation system. | Our approach is reminiscent of recent work in automated model discovery with Reinforcement Learning @cite_1 , especially Neural Architecture Search @cite_25 , in which a recurrent network is used to generate the configuration string of neural architectures instead. In addition to applying the key ideas to different applications, this work presents a novel scheme to combine primitive inputs in a much more flexible manner, which makes the search for novel optimizers possible. | {
"cite_N": [
"@cite_1",
"@cite_25"
],
"mid": [
"2951886768",
"2553303224"
],
"abstract": [
"At present, designing convolutional neural network (CNN) architectures requires both human expertise and labor. New architectures are handcrafted by careful experimentation or modified from a handful of existing networks. We introduce MetaQNN, a meta-modeling algorithm based on reinforcement learning to automatically generate high-performing CNN architectures for a given learning task. The learning agent is trained to sequentially choose CNN layers using @math -learning with an @math -greedy exploration strategy and experience replay. The agent explores a large but finite space of possible architectures and iteratively discovers designs with improved performance on the learning task. On image classification benchmarks, the agent-designed networks (consisting of only standard convolution, pooling, and fully-connected layers) beat existing networks designed with the same layer types and are competitive against the state-of-the-art methods that use more complex layer types. We also outperform existing meta-modeling approaches for network design on image classification tasks.",
"Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214."
]
} |
1709.07220 | 2951731514 | In this paper, we address the problem of estimating the positions of human joints, i.e., articulated pose estimation. Recent state-of-the-art solutions model two key issues, joint detection and spatial configuration refinement, together using convolutional neural networks. Our work mainly focuses on spatial configuration refinement by reducing variations of human poses statistically, which is motivated by the observation that the scattered distribution of the relative locations of joints (e.g., the left wrist is distributed nearly uniformly in a circular area around the left shoulder) makes the learning of convolutional spatial models hard. We present a two-stage normalization scheme, human body normalization and limb normalization, to make the distribution of the relative joint locations compact, resulting in easier learning of convolutional spatial models and more accurate pose estimation. In addition, our empirical results show that incorporating multi-scale supervision and multi-scale fusion into the joint detection network is beneficial. Experiment results demonstrate that our method consistently outperforms state-of-the-art methods on the benchmarks. | Many recent works use convolutional neural networks to learn feature representations for obtaining the score maps of joints or the locations of joints @cite_34 @cite_2 @cite_25 @cite_14 @cite_37 @cite_11 @cite_23. Some methods directly employ learned feature representations to regress joint positions, e.g., the DeepPose method @cite_34. A more typical way of joint detection is to estimate a score map for each joint based on the fully convolutional neural network (FCN) @cite_38. The estimation procedure can be formulated as a multi-class classification problem @cite_14 @cite_37 or a regression problem @cite_25 @cite_30.
For the multi-class formulation, either a single-label based loss (e.g., softmax cross-entropy loss) @cite_50 or a multi-label based loss (e.g., sigmoid cross-entropy loss) @cite_28 can be used. One main problem for the FCN-based joint detection model is that the positions of joints are estimated from low-resolution score maps. This reduces the localization accuracy of the joints. In our work, we introduce multi-scale supervision and fusion to further improve performance with gradual up-sampling. | {
"cite_N": [
"@cite_30",
"@cite_38",
"@cite_37",
"@cite_14",
"@cite_28",
"@cite_23",
"@cite_2",
"@cite_50",
"@cite_34",
"@cite_25",
"@cite_11"
],
"mid": [
"",
"2952632681",
"2950762923",
"2255781698",
"2951256101",
"2518965973",
"2949447708",
"2330154883",
"2113325037",
"1537698211",
""
],
"abstract": [
"",
"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build \"fully convolutional\" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image.",
"This work introduces a novel convolutional network architecture for the task of human pose estimation. Features are processed across all scales and consolidated to best capture the various spatial relationships associated with the body. We show how repeated bottom-up, top-down processing used in conjunction with intermediate supervision is critical to improving the performance of the network. We refer to the architecture as a \"stacked hourglass\" network based on the successive steps of pooling and upsampling that are done to produce a final set of predictions. State-of-the-art results are achieved on the FLIC and MPII benchmarks outcompeting all recent methods.",
"Pose Machines provide a sequential prediction framework for learning rich implicit spatial models. In this work we show a systematic design for how convolutional networks can be incorporated into the pose machine framework for learning image features and image-dependent spatial models for the task of pose estimation. The contribution of this paper is to implicitly model long-range dependencies between variables in structured prediction tasks such as articulated pose estimation. We achieve this by designing a sequential architecture composed of convolutional networks that directly operate on belief maps from previous stages, producing increasingly refined estimates for part locations, without the need for explicit graphical model-style inference. Our approach addresses the characteristic difficulty of vanishing gradients during training by providing a natural learning objective function that enforces intermediate supervision, thereby replenishing back-propagated gradients and conditioning the learning procedure. We demonstrate state-of-the-art performance and outperform competing methods on standard benchmarks including the MPII, LSP, and FLIC datasets.",
"This paper considers the task of articulated human pose estimation of multiple people in real world images. We propose an approach that jointly solves the tasks of detection and pose estimation: it infers the number of persons in a scene, identifies occluded body parts, and disambiguates body parts between people in close proximity of each other. This joint formulation is in contrast to previous strategies, that address the problem by first detecting people and subsequently estimating their body pose. We propose a partitioning and labeling formulation of a set of body-part hypotheses generated with CNN-based part detectors. Our formulation, an instance of an integer linear program, implicitly performs non-maximum suppression on the set of part candidates and groups them to form configurations of body parts respecting geometric and appearance constraints. Experiments on four different datasets demonstrate state-of-the-art results for both single person and multi person pose estimation. Models and code available at this http URL.",
"This paper is on human pose estimation using Convolutional Neural Networks. Our main contribution is a CNN cascaded architecture specifically designed for learning part relationships and spatial context, and robustly inferring pose even for the case of severe part occlusions. To this end, we propose a detection-followed-by-regression CNN cascade. The first part of our cascade outputs part detection heatmaps and the second part performs regression on these heatmaps. The benefits of the proposed architecture are multi-fold: It guides the network where to focus in the image and effectively encodes part constraints and context. More importantly, it can effectively cope with occlusions because part detection heatmaps for occluded parts provide low confidence scores which subsequently guide the regression part of our network to rely on contextual information in order to predict the location of these parts. Additionally, we show that the proposed cascade is flexible enough to readily allow the integration of various CNN architectures for both detection and regression, including recent ones based on residual learning. Finally, we illustrate that our cascade achieves top performance on the MPII and LSP data sets. Code can be downloaded from http: www.cs.nott.ac.uk psxab5 .",
"We propose a new learning-based method for estimating 2D human pose from a single image, using Dual-Source Deep Convolutional Neural Networks (DS-CNN). Recently, many methods have been developed to estimate human pose by using pose priors that are estimated from physiologically inspired graphical models or learned from a holistic perspective. In this paper, we propose to integrate both the local (body) part appearance and the holistic view of each local part for more accurate human pose estimation. Specifically, the proposed DS-CNN takes a set of image patches (category-independent object proposals for training and multi-scale sliding windows for testing) as the input and then learns the appearance of each local part by considering their holistic views in the full body. Using DS-CNN, we achieve both joint detection, which determines whether an image patch contains a body joint, and joint localization, which finds the exact location of the joint in the image patch. Finally, we develop an algorithm to combine these joint detection localization results from all the image patches for estimating the human pose. The experimental results show the effectiveness of the proposed method by comparing to the state-of-the-art human-pose estimation methods based on pose priors that are estimated from physiologically inspired graphical models or learned from a holistic perspective.",
"In this paper, we propose a structured feature learning framework to reason the correlations among body joints at the feature level in human pose estimation. Different from existing approaches of modeling structures on score maps or predicted labels, feature maps preserve substantially richer descriptions of body joints. The relationships between feature maps of joints are captured with the introduced geometrical transform kernels, which can be easily implemented with a convolution layer. Features and their relationships are jointly learned in an end-to-end learning system. A bi-directional tree structured model is proposed, so that the feature channels at a body joint can well receive information from other joints. The proposed framework improves feature learning substantially. With very simple post processing, it reaches the best mean PCP on the LSP and FLIC datasets. Compared with the baseline of learning features at each joint separately with ConvNet, the mean PCP has been improved by 18% on FLIC. The code is released to the public.",
"We propose a method for human pose estimation based on Deep Neural Networks (DNNs). The pose estimation is formulated as a DNN-based regression problem towards body joints. We present a cascade of such DNN regressors which results in high precision pose estimates. The approach has the advantage of reasoning about pose in a holistic fashion and has a simple but yet powerful formulation which capitalizes on recent advances in Deep Learning. We present a detailed empirical analysis with state-of-art or better performance on four academic benchmarks of diverse real-world images.",
"Hierarchical feature extractors such as Convolutional Networks (ConvNets) have achieved impressive performance on a variety of classification tasks using purely feedforward processing. Feedforward architectures can learn rich representations of the input space but do not explicitly model dependencies in the output spaces, that are quite structured for tasks such as articulated human pose estimation or object segmentation. Here we propose a framework that expands the expressive power of hierarchical feature extractors to encompass both input and output spaces, by introducing top-down feedback. Instead of directly predicting the outputs in one go, we use a self-correcting model that progressively changes an initial solution by feeding back error predictions, in a process we call Iterative Error Feedback (IEF). IEF shows excellent performance on the task of articulated pose estimation in the challenging MPII and LSP benchmarks, matching the state-of-the-art without requiring ground truth scale annotation.",
""
]
} |
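The related-work passage in the row above contrasts a single-label loss (softmax cross-entropy) with a multi-label loss (sigmoid cross-entropy) for FCN-based joint detection. A minimal sketch of the two formulations on a toy per-pixel score vector; the shapes and values are illustrative assumptions, not taken from any cited model:

```python
import numpy as np

def softmax_cross_entropy(scores, target_class):
    """Single-label loss: exactly one joint class per pixel (softmax over classes)."""
    shifted = scores - scores.max()  # subtract max for numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum())
    return -log_probs[target_class]

def sigmoid_cross_entropy(scores, targets):
    """Multi-label loss: each joint channel is an independent binary task.
    Stable form: max(x, 0) - x*t + log(1 + exp(-|x|))."""
    return np.sum(np.maximum(scores, 0) - scores * targets
                  + np.log1p(np.exp(-np.abs(scores))))

# Toy per-pixel scores for 3 joint classes.
scores = np.array([2.0, -1.0, 0.5])
single = softmax_cross_entropy(scores, target_class=0)
multi = sigmoid_cross_entropy(scores, targets=np.array([1.0, 0.0, 0.0]))
print(single, multi)
```

The softmax variant forces the classes to compete for probability mass at each pixel, while the sigmoid variant lets several joints fire at the same location, which is why multi-person or overlapping-joint settings often prefer it.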
1709.07223 | 2760005823 | Deep learning algorithms offer a powerful means to automatically analyze the content of medical images. However, many biological samples of interest are primarily transparent to visible light and contain features that are difficult to resolve with a standard optical microscope. Here, we use a convolutional neural network (CNN) not only to classify images, but also to optimize the physical layout of the imaging device itself. We increase the classification accuracy of a microscope's recorded images by merging an optical model of image formation into the pipeline of a CNN. The resulting network simultaneously determines an ideal illumination arrangement to highlight important sample features during image acquisition, along with a set of convolutional weights to classify the detected images post-capture. We demonstrate our joint optimization technique with an experimental microscope configuration that automatically identifies malaria-infected cells with 5-10% higher accuracy than standard and alternative microscope lighting designs. | Deep learning networks for conventional @cite_4 and biomedical @cite_1 image classification are now commonplace and much of this recent work relies on CNNs. However, as noted above, most studies in this area do not try to use deep learning to optimize the acquisition process for their image data sets. While early work has shown that simple neural networks offer an effective way to design cameras @cite_17 , one of the first works to consider this question in the context of CNN-based learning was presented recently by Chakrabarti @cite_5 , who designed an optimal pixel-level color filter layout for color image reconstruction that outperformed the standard Bayer filter pattern. | {
"cite_N": [
"@cite_1",
"@cite_5",
"@cite_4",
"@cite_17"
],
"mid": [
"2592929672",
"2404325329",
"",
"2009950757"
],
"abstract": [
"Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks. Concise overviews are provided of studies per application area: neuro, retinal, pulmonary, digital pathology, breast, cardiac, abdominal, musculoskeletal. We end with a summary of the current state-of-the-art, a critical discussion of open challenges and directions for future research.",
"Recent progress on many imaging and vision tasks has been driven by the use of deep feed-forward neural networks, which are trained by propagating gradients of a loss defined on the final output, back through the network up to the first layer that operates directly on the image. We propose back-propagating one step further---to learn camera sensor designs jointly with networks that carry out inference on the images they capture. In this paper, we specifically consider the design and inference problems in a typical color camera---where the sensor is able to measure only one color channel at each pixel location, and computational inference is required to reconstruct a full color image. We learn the camera sensor's color multiplexing pattern by encoding it as layer whose learnable weights determine which color channel, from among a fixed set, will be measured at each location. These weights are jointly trained with those of a reconstruction network that operates on the corresponding sensor measurements to produce a full color image. Our network achieves significant improvements in accuracy over the traditional Bayer pattern used in most color cameras. It automatically learns to employ a sparse color measurement approach similar to that of a recent design, and moreover, improves upon that design by learning an optimal layout for these measurements.",
"",
"The graded-response Hopfield neural network model has been used to solve the traveling salesman optimization problem. However, the mapping of an optical design optimization problem onto a neural net is more difficult. This paper describes how it can be done for the case of minimizing the chromatic aberration in a complicated twenty-element zoom-lens system by the selection of glass types. The problem is combinatorial in nature. It is suited to neural networks, and its solution is non-trivial by other means. Thus the use of neural networks to solve optical optimization problems is demonstrated.© (1993) COPYRIGHT SPIE--The International Society for Optical Engineering. Downloading of the abstract is permitted for personal use only."
]
} |
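Both Chakrabarti's learnable color-multiplexing layer and the illumination-optimization scheme described in this row's abstract amount to the same trick: placing a physical layer with trainable weights in front of the network and back-propagating through it. A hedged sketch in pure NumPy; `led_stack`, the illumination weights, and the least-squares surrogate loss are all illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stack of images of the same sample, one per LED (K LEDs, H x W pixels).
K, H, W = 8, 4, 4
led_stack = rng.normal(size=(K, H, W))

def form_image(weights, stack):
    """Physical layer: the detected image is a weighted sum of single-LED
    images, with the weights playing the role of the learnable
    illumination pattern."""
    return np.tensordot(weights, stack, axes=1)  # (H, W)

# Toy objective: match a target image (a stand-in for whatever loss the
# downstream classifier would supply during joint training).
target = rng.normal(size=(H, W))

def loss(weights):
    return 0.5 * np.sum((form_image(weights, led_stack) - target) ** 2)

def grad(weights):
    resid = form_image(weights, led_stack) - target
    # d(loss)/d(w_k) = <residual, k-th LED image>
    return np.tensordot(led_stack, resid, axes=([1, 2], [0, 1]))  # (K,)

w = np.ones(K) / K            # start from uniform illumination
for _ in range(200):          # plain gradient descent on the pattern
    w -= 0.01 * grad(w)

print(loss(np.ones(K) / K), loss(w))
```

In the real systems the illumination weights are one layer of the CNN and the loss is the classification objective, so the same gradient flow that tunes convolutional filters also tunes the acquisition hardware.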
1709.07223 | 2760005823 | Deep learning algorithms offer a powerful means to automatically analyze the content of medical images. However, many biological samples of interest are primarily transparent to visible light and contain features that are difficult to resolve with a standard optical microscope. Here, we use a convolutional neural network (CNN) not only to classify images, but also to optimize the physical layout of the imaging device itself. We increase the classification accuracy of a microscope's recorded images by merging an optical model of image formation into the pipeline of a CNN. The resulting network simultaneously determines an ideal illumination arrangement to highlight important sample features during image acquisition, along with a set of convolutional weights to classify the detected images post-capture. We demonstrate our joint optimization technique with an experimental microscope configuration that automatically identifies malaria-infected cells with 5-10% higher accuracy than standard and alternative microscope lighting designs. | Subsequent work has also considered using CNNs to overcome the effects of camera sensor noise @cite_13 as well as optical scattering @cite_10 @cite_0 . CNNs paired with non-conventional camera architectures can additionally achieve light-field imaging @cite_18 @cite_27 and compressive image measurement @cite_15 . Alternative imaging setups have also been coupled with supervised learning methods for cell imaging and classification @cite_8 @cite_20 . The above works are some of the first to consider how the performance of supervised learning in general and CNNs in particular connect to the exact procedure for optical data acquisition. But, with the exception of Chakrabarti's work, they do not directly consider using the CNN itself to optimize the detection process of an optical (or digital) device. | {
"cite_N": [
"@cite_13",
"@cite_18",
"@cite_8",
"@cite_0",
"@cite_27",
"@cite_15",
"@cite_10",
"@cite_20"
],
"mid": [
"2581510845",
"2964195221",
"2520674406",
"",
"",
"2950577421",
"2428993818",
""
],
"abstract": [
"Real-world sensors suffer from noise, blur, and other imperfections that make high-level computer vision tasks like scene segmentation, tracking, and scene understanding difficult. Making high-level computer vision networks robust is imperative for real-world applications like autonomous driving, robotics, and surveillance. We propose a novel end-to-end differentiable architecture for joint denoising, deblurring, and classification that makes classification robust to realistic noise and blur. The proposed architecture dramatically improves the accuracy of a classification network in low light and other challenging conditions, outperforming alternative approaches such as retraining the network on noisy and blurry images and preprocessing raw sensor inputs with conventional denoising and deblurring algorithms. The architecture learns denoising and deblurring pipelines optimized for classification whose outputs differ markedly from those of state-of-the-art denoising and deblurring methods, preserving fine detail at the cost of more noise and artifacts. Our results suggest that the best low-level image processing for computer vision is different from existing algorithms designed to produce visually pleasing images. The principles used to design the proposed architecture easily extend to other high-level computer vision tasks and image formation models, providing a general framework for integrating low-level and high-level image processing.",
"Deep learning using convolutional neural networks (CNNs) is quickly becoming the state-of-the-art for challenging computer vision applications. However, deep learning's power consumption and bandwidth requirements currently limit its application in embedded and mobile systems with tight energy budgets. In this paper, we explore the energy savings of optically computing the first layer of CNNs. To do so, we utilize bio-inspired Angle Sensitive Pixels (ASPs), custom CMOS diffractive image sensors which act similar to Gabor filter banks in the V1 layer of the human visual cortex. ASPs replace both image sensing and the first layer of a conventional CNN by directly performing optical edge filtering, saving sensing energy, data bandwidth, and CNN FLOPS to compute. Our experimental results (both on synthetic data and a hardware prototype) for a variety of vision tasks such as digit recognition, object recognition, and face identification demonstrate 97% reduction in image sensor power consumption and 90% reduction in data bandwidth from sensor to CPU, while achieving similar performance compared to traditional deep learning pipelines.",
"Malaria detection through microscopic examination of stained blood smears is a diagnostic challenge that heavily relies on the expertise of trained microscopists. This paper presents an automated analysis method for detection and staging of red blood cells infected by the malaria parasite Plasmodium falciparum at trophozoite or schizont stage. Unlike previous efforts in this area, this study uses quantitative phase images of unstained cells. Erythrocytes are automatically segmented using thresholds of optical phase and refocused to enable quantitative comparison of phase images. Refocused images are analyzed to extract 23 morphological descriptors based on the phase information. While all individual descriptors are highly statistically different between infected and uninfected cells, each descriptor does not enable separation of populations at a level satisfactory for clinical utility. To improve the diagnostic capacity, we applied various machine learning techniques, including linear discriminant classification (LDC), logistic regression (LR), and k-nearest neighbor classification (NNC), to formulate algorithms that combine all of the calculated physical parameters to distinguish cells more effectively. Results show that LDC provides the highest accuracy of up to 99.7% in detecting schizont stage infected cells compared to uninfected RBCs. NNC showed slightly better accuracy (99.5%) than either LDC (99.0%) or LR (99.1%) for discriminating late trophozoites from uninfected RBCs. However, for early trophozoites, LDC produced the best accuracy of 98%. Discrimination of infection stage was less accurate, producing high specificity (99.8%) but only 45.0-66.8% sensitivity with early trophozoites most often mistaken for late trophozoite or schizont stage and late trophozoite and schizont stage most often confused for each other. 
Overall, this methodology points to a significant clinical potential of using quantitative phase imaging to detect and stage malaria infection without staining or expert analysis.",
"",
"",
"The goal of this paper is to present a non-iterative and more importantly an extremely fast algorithm to reconstruct images from compressively sensed (CS) random measurements. To this end, we propose a novel convolutional neural network (CNN) architecture which takes in CS measurements of an image as input and outputs an intermediate reconstruction. We call this network, ReconNet. The intermediate reconstruction is fed into an off-the-shelf denoiser to obtain the final reconstructed image. On a standard dataset of images we show significant improvements in reconstruction results (both in terms of PSNR and time complexity) over state-of-the-art iterative CS reconstruction algorithms at various measurement rates. Further, through qualitative experiments on real data collected using our block single pixel camera (SPC), we show that our network is highly robust to sensor noise and can recover visually better quality images than competitive algorithms at extremely low sensing rates of 0.1 and 0.04. To demonstrate that our algorithm can recover semantically informative images even at a low measurement rate of 0.01, we present a very robust proof of concept real-time visual tracking application.",
"We present a machine-learning-based method for single-shot imaging through scattering media. The inverse scattering process was calculated based on a nonlinear regression algorithm by learning a number of training object-speckle pairs. In the experimental demonstration, multilayer phase objects between scattering plates were reconstructed from intensity measurements. Our approach enables model-free sensing, where it is not necessary to know the sensing processes models.",
""
]
} |
1709.07223 | 2760005823 | Deep learning algorithms offer a powerful means to automatically analyze the content of medical images. However, many biological samples of interest are primarily transparent to visible light and contain features that are difficult to resolve with a standard optical microscope. Here, we use a convolutional neural network (CNN) not only to classify images, but also to optimize the physical layout of the imaging device itself. We increase the classification accuracy of a microscope's recorded images by merging an optical model of image formation into the pipeline of a CNN. The resulting network simultaneously determines an ideal illumination arrangement to highlight important sample features during image acquisition, along with a set of convolutional weights to classify the detected images post-capture. We demonstrate our joint optimization technique with an experimental microscope configuration that automatically identifies malaria-infected cells with 5-10% higher accuracy than standard and alternative microscope lighting designs. | In this work, we first merge a general model of optical image formation into the first layers of a CNN. After presenting our "physical CNN" model, we then present simulations and experimental results for improved classification of cells from microscopic images. We focus on the particular task of classifying if red blood cells are infected with the Plasmodium falciparum parasite, which is the most clinically relevant and deadly cause of malaria. This goal has been examined within the context of machine learning @cite_8 @cite_23 and CNNs @cite_3 in the past. Unlike prior work, we demonstrate how our physical CNN can simultaneously predict an optimal way to illuminate each red blood cell. When used to capture experimental images, this optimized illumination pattern produces better classification scores than tested alternatives, which hopefully can be integrated into future malaria diagnostic tools. | {
"cite_N": [
"@cite_3",
"@cite_23",
"@cite_8"
],
"mid": [
"2521732732",
"2005909996",
"2520674406"
],
"abstract": [
"Point of care diagnostics using microscopy and computer vision methods have been applied to a number of practical problems, and are particularly relevant to low-income, high disease burden areas. However, this is subject to the limitations in sensitivity and specificity of the computer vision methods used. In general, deep learning has recently revolutionised the field of computer vision, in some cases surpassing human performance for other object recognition tasks. In this paper, we evaluate the performance of deep convolutional neural networks on three different microscopy tasks: diagnosis of malaria in thick blood smears, tuberculosis in sputum samples, and intestinal parasite eggs in stool samples. In all cases accuracy is very high and substantially better than an alternative approach more representative of traditional medical imaging techniques.",
"The aim of this paper is to address the development of computer assisted malaria parasite characterization and classification using machine learning approach based on light microscopic images of peripheral blood smears. In doing this, microscopic image acquisition from stained slides, illumination correction and noise reduction, erythrocyte segmentation, feature extraction, feature selection and finally classification of different stages of malaria (Plasmodium vivax and Plasmodium falciparum) have been investigated. The erythrocytes are segmented using marker controlled watershed transformation and subsequently total ninety six features describing shape-size and texture of erythrocytes are extracted in respect to the parasitemia infected versus non-infected cells. Ninety four features are found to be statistically significant in discriminating six classes. Here a feature selection-cum-classification scheme has been devised by combining F-statistic, statistical learning techniques i.e., Bayesian learning and support vector machine (SVM) in order to provide the higher classification accuracy using best set of discriminating features. Results show that Bayesian approach provides the highest accuracy i.e., 84% for malaria classification by selecting 19 most significant features while SVM provides highest accuracy i.e., 83.5% with 9 most significant features. Finally, the performance of these two classifiers under feature selection framework has been compared toward malaria parasite classification.",
"Malaria detection through microscopic examination of stained blood smears is a diagnostic challenge that heavily relies on the expertise of trained microscopists. This paper presents an automated analysis method for detection and staging of red blood cells infected by the malaria parasite Plasmodium falciparum at trophozoite or schizont stage. Unlike previous efforts in this area, this study uses quantitative phase images of unstained cells. Erythrocytes are automatically segmented using thresholds of optical phase and refocused to enable quantitative comparison of phase images. Refocused images are analyzed to extract 23 morphological descriptors based on the phase information. While all individual descriptors are highly statistically different between infected and uninfected cells, each descriptor does not enable separation of populations at a level satisfactory for clinical utility. To improve the diagnostic capacity, we applied various machine learning techniques, including linear discriminant classification (LDC), logistic regression (LR), and k-nearest neighbor classification (NNC), to formulate algorithms that combine all of the calculated physical parameters to distinguish cells more effectively. Results show that LDC provides the highest accuracy of up to 99.7% in detecting schizont stage infected cells compared to uninfected RBCs. NNC showed slightly better accuracy (99.5%) than either LDC (99.0%) or LR (99.1%) for discriminating late trophozoites from uninfected RBCs. However, for early trophozoites, LDC produced the best accuracy of 98%. Discrimination of infection stage was less accurate, producing high specificity (99.8%) but only 45.0-66.8% sensitivity with early trophozoites most often mistaken for late trophozoite or schizont stage and late trophozoite and schizont stage most often confused for each other. 
Overall, this methodology points to a significant clinical potential of using quantitative phase imaging to detect and stage malaria infection without staining or expert analysis."
]
} |
1709.07114 | 2759716989 | Decentralized receding horizon control (D-RHC) provides a mechanism for coordination in multi-agent settings without a centralized command center. However, combining a set of different goals, costs, and constraints to form an efficient optimization objective for D-RHC can be difficult. To allay this problem, we use a meta-learning process -- cost adaptation -- which generates the optimization objective for D-RHC to solve based on a set of human-generated priors (cost and constraint functions) and an auxiliary heuristic. We use this adaptive D-RHC method for control of mesh-networked swarm agents. This formulation allows a wide range of tasks to be encoded and can account for network delays, heterogeneous capabilities, and increasingly large swarms through the adaptation mechanism. We leverage the Unity3D game engine to build a simulator capable of introducing artificial networking failures and delays in the swarm. Using the simulator we validate our method on an example coordinated exploration task. We demonstrate that cost adaptation allows for more efficient and safer task completion under varying environment conditions and increasingly large swarm sizes. We release our simulator and code to the community for future work. | Recent work has also used meta-learning to generate a loss function in deep learning settings for classification tasks @cite_1 . We add to this expanding research area and learn the D-RHC objective from a combination of human priors (pre-generated cost and constraint functions). | {
"cite_N": [
"@cite_1"
],
"mid": [
"2786471719"
],
"abstract": [
"Teaching plays a very important role in our society, by spreading human knowledge and educating our next generations. A good teacher will select appropriate teaching materials, impact suitable methodologies, and set up targeted examinations, according to the learning behaviors of the students. In the field of artificial intelligence, however, one has not fully explored the role of teaching, and pays most attention to machine learning. In this paper, we argue that equal attention, if not more, should be paid to teaching, and furthermore, an optimization framework (instead of heuristics) should be used to obtain good teaching strategies. We call this approach "learning to teach". In the approach, two intelligent agents interact with each other: a student model (which corresponds to the learner in traditional machine learning algorithms), and a teacher model (which determines the appropriate data, loss function, and hypothesis space to facilitate the training of the student model). The teacher model leverages the feedback from the student model to optimize its own teaching strategies by means of reinforcement learning, so as to achieve teacher-student co-evolution. To demonstrate the practical value of our proposed approach, we take the training of deep neural networks (DNN) as an example, and show that by using the learning to teach techniques, we are able to use much less training data and fewer iterations to achieve almost the same accuracy for different kinds of DNN models (e.g., multi-layer perceptron, convolutional neural networks and recurrent neural networks) under various machine learning tasks (e.g., image classification and text understanding)."
]
} |
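The cost-adaptation idea in the 1709.07114 row above, building the receding-horizon objective from a pool of human-written cost and constraint terms whose weights an auxiliary heuristic adjusts, can be caricatured in a few lines. Everything here (the term names, the distance-based heuristic, the weighting rule, and the example numbers) is an illustrative assumption, not the paper's algorithm:

```python
from typing import Callable, Dict

State = Dict[str, float]

# Human-generated priors: candidate cost terms evaluated on a proposed state.
cost_terms: Dict[str, Callable[[State], float]] = {
    "goal_distance": lambda s: abs(s["x"] - s["goal_x"]),
    # Penalize only when closer to a neighbor than the minimum separation.
    "separation": lambda s: max(0.0, s["min_sep"] - s["neighbor_dist"]),
    "energy": lambda s: s["speed"] ** 2,
}

def adapt_weights(state: State) -> Dict[str, float]:
    """Auxiliary heuristic: emphasize collision avoidance when a neighbor
    is close, otherwise emphasize progress toward the goal."""
    close = state["neighbor_dist"] < state["min_sep"] * 2
    return {"goal_distance": 0.2 if close else 1.0,
            "separation": 5.0 if close else 0.5,
            "energy": 0.1}

def objective(state: State) -> float:
    """The adapted D-RHC objective: a weighted sum of the prior terms."""
    w = adapt_weights(state)
    return sum(w[name] * term(state) for name, term in cost_terms.items())

far = {"x": 0.0, "goal_x": 10.0, "neighbor_dist": 9.0,
       "min_sep": 1.0, "speed": 1.0}
close = dict(far, neighbor_dist=0.5)
print(objective(far), objective(close))
```

A receding-horizon controller would minimize `objective` over candidate actions at every step; the point of the sketch is only that the objective itself is re-assembled from the priors each step rather than fixed in advance.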
1709.07077 | 2759846780 | We consider image classification with estimated depth. This problem falls into the domain of transfer learning, since we are using a model trained on a set of depth images to generate depth maps (additional features) for use in another classification problem using another disjoint set of images. It is challenging because no direct depth information is provided. Though depth estimation has been well studied, none have attempted to aid image classification with estimated depth. Therefore, we present a way of transferring domain knowledge on depth estimation to a separate image classification task over a disjoint set of train and test data. We build an RGBD dataset based on an RGB dataset and do image classification on it. We then evaluate the performance of neural networks on the RGBD dataset compared to the RGB dataset. From our experiments, the benefit is significant for both shallow and deep networks: it improves ResNet-20 by 0.55% and ResNet-56 by 0.53%. Our code and dataset are available publicly. | CNNs have been applied with great success to object classification @cite_11 @cite_8 @cite_0 @cite_17 @cite_15 and detection @cite_14 @cite_9 @cite_12 . CNNs have recently been applied to a variety of other tasks, such as depth estimation. Depth estimation from a single image is well addressed by Liu et al. @cite_5 and Eigen et al. @cite_19 . They both agree that depth estimation is an ill-posed problem, since there is no real ground-truth depth map. We define a transfer-learning accuracy metric for depth estimation models (). This makes it easier to compare the performance of different depth estimation models. | {
"cite_N": [
"@cite_11",
"@cite_14",
"@cite_8",
"@cite_9",
"@cite_0",
"@cite_19",
"@cite_5",
"@cite_15",
"@cite_12",
"@cite_17"
],
"mid": [
"",
"",
"1686810756",
"2953106684",
"2950179405",
"2951713345",
"",
"2135254996",
"2949295283",
"2949650786"
],
"abstract": [
"",
"",
"In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.",
"State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.",
"We propose a deep convolutional neural network architecture codenamed \"Inception\", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.",
"In this paper we address three different computer vision tasks using a single basic architecture: depth prediction, surface normal estimation, and semantic labeling. We use a multiscale convolutional network that is able to adapt easily to each task using only small modifications, regressing from the input image to the output map directly. Our method progressively refines predictions using a sequence of scales, and captures many image details without any superpixels or low-level segmentation. We achieve state-of-the-art performance on benchmarks for all three tasks.",
"",
"An important step in the development of dependable systems is the validation of their fault tolerance properties. Fault injection has been widely used for this purpose, however with the rapid increase in processor complexity, traditional techniques are also increasingly more difficult to apply. This paper presents a new software-implemented fault injection and monitoring environment, called Xception, which is targeted at modern and complex processors. Xception uses the advanced debugging and performance monitoring features existing in most modern processors to inject quite realistic faults by software, and to monitor the activation of the faults and their impact on the target system behavior in detail. Faults are injected with minimum interference with the target application. The target application is not modified, no software traps are inserted, and it is not necessary to execute the target application in special trace mode (the application is executed at full speed). Xception provides a comprehensive set of fault triggers, including spatial and temporal fault triggers, and triggers related to the manipulation of data in memory. Faults injected by Xception can affect any process running on the target system (including the kernel), and it is possible to inject faults in applications for which the source code is not available. Experimental, results are presented to demonstrate the accuracy and potential of Xception in the evaluation of the dependability properties of the complex computer systems available nowadays.",
"Semantic segmentation research has recently witnessed rapid progress, but many leading methods are unable to identify object instances. In this paper, we present Multi-task Network Cascades for instance-aware semantic segmentation. Our model consists of three networks, respectively differentiating instances, estimating masks, and categorizing objects. These networks form a cascaded structure, and are designed to share their convolutional features. We develop an algorithm for the nontrivial end-to-end training of this causal, cascaded structure. Our solution is a clean, single-step training framework and can be generalized to cascades that have more stages. We demonstrate state-of-the-art instance-aware semantic segmentation accuracy on PASCAL VOC. Meanwhile, our method takes only 360ms testing an image using VGG-16, which is two orders of magnitude faster than previous systems for this challenging problem. As a by product, our method also achieves compelling object detection results which surpass the competitive Fast Faster R-CNN systems. The method described in this paper is the foundation of our submissions to the MS COCO 2015 segmentation competition, where we won the 1st place.",
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation."
]
} |
1709.07077 | 2759846780 | We consider image classification with estimated depth. This problem falls into the domain of transfer learning, since we are using a model trained on a set of depth images to generate depth maps (additional features) for use in another classification problem using another disjoint set of images. It's challenging as no direct depth information is provided. Though depth estimation has been well studied, none have attempted to aid image classification with estimated depth. Therefore, we present a way of transferring domain knowledge on depth estimation to a separate image classification task over disjoint sets of train and test data. We build an RGBD dataset based on an RGB dataset and do image classification on it. We then evaluate the performance of neural networks on the RGBD dataset compared to the RGB dataset. From our experiments, the benefit is significant with shallow and deep networks. It improves ResNet-20 by 0.55% and ResNet-56 by 0.53%. Our code and dataset are available publicly. | There are already many successful transfer learning results in Computer Vision. A popular one is transferring an ImageNet @cite_7 classification network like VGG-16 @cite_8 to object detection @cite_9 . Another example, also in object detection, is contextualized networks @cite_18 , usually through multi-scale context. | {
"cite_N": [
"@cite_9",
"@cite_18",
"@cite_7",
"@cite_8"
],
"mid": [
"2953106684",
"2963093690",
"2108598243",
"1686810756"
],
"abstract": [
"State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.",
"Modern deep neural network-based object detection methods typically classify candidate proposals using their interior features. However, global and local surrounding contexts that are believed to be valuable for object detection are not fully exploited by existing methods yet. In this work, we take a step towards understanding what is a robust practice to extract and utilize contextual information to facilitate object detection in practice. Specifically, we consider the following two questions: “how to identify useful global contextual information for detecting a certain object?” and “how to exploit local context surrounding a proposal for better inferring its contents?” We provide preliminary answers to these questions through developing a novel attention to context convolution neural network (AC-CNN)-based object detection model. AC-CNN effectively incorporates global and local contextual information into the region-based CNN (e.g., fast R-CNN and faster R-CNN) detection framework and provides better object detection performance. It consists of one attention-based global contextualized (AGC) subnetwork and one multi-scale local contextualized (MLC) subnetwork. To capture global context, the AGC subnetwork recurrently generates an attention map for an input image to highlight useful global contextual locations, through multiple stacked long short-term memory layers. For capturing surrounding local context, the MLC subnetwork exploits both the inside and outside contextual information of each specific proposal at multiple scales. The global and local context are then fused together for making the final decision for detection. Extensive experiments on PASCAL VOC 2007 and VOC 2012 well demonstrate the superiority of the proposed AC-CNN over well-established baselines.",
"The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.",
"In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision."
]
} |
1709.07109 | 2756946152 | A latent-variable model is introduced for text matching, inferring sentence representations by jointly optimizing generative and discriminative objectives. To alleviate typical optimization challenges in latent-variable models for text, we employ deconvolutional networks as the sequence decoder (generator), providing learned latent codes with more semantic information and better generalization. Our model, trained in an unsupervised manner, yields stronger empirical predictive performance than a decoder based on Long Short-Term Memory (LSTM), with fewer parameters and considerably faster training. Further, we apply it to text sequence-matching problems. The proposed model significantly outperforms several strong sentence-encoding baselines, especially in the semi-supervised setting. | The proposed framework is closely related to recent research on incorporating NVI into text modeling @cite_32 @cite_16 @cite_22 @cite_34 @cite_33 . @cite_32 presented the first attempt to utilize NVI for language modeling, but their results using an LSTM decoder were largely negative. @cite_16 applied the NVI framework to an unsupervised bag-of-words model. However, from the perspective of text representation learning, their model ignores word-order information, which may be suboptimal for downstream supervised tasks. @cite_22 employed a variational autoencoder with the LSTM-LSTM architecture for semi-supervised sentence classification. However, as illustrated in our experiments, as well as in @cite_20 , the LSTM decoder is not the most effective choice for learning informative and discriminative sentence embeddings. | {
"cite_N": [
"@cite_22",
"@cite_33",
"@cite_32",
"@cite_34",
"@cite_16",
"@cite_20"
],
"mid": [
"",
"2399880602",
"2963223306",
"2394571815",
"2173681125",
"2963600562"
],
"abstract": [
"",
"Sequential data often possesses a hierarchical structure with complex dependencies between subsequences, such as found between the utterances in a dialogue. In an effort to model this kind of generative process, we propose a neural network-based generative architecture, with latent stochastic variables that span a variable number of time steps. We apply the proposed model to the task of dialogue response generation and compare it with recent neural network architectures. We evaluate the model performance through automatic evaluation metrics and by carrying out a human evaluation. The experiments demonstrate that our model improves upon recently proposed models and that the latent variables facilitate the generation of long outputs and maintain the context.",
"",
"Models of neural machine translation are often from a discriminative family of encoderdecoders that learn a conditional distribution of a target sentence given a source sentence. In this paper, we propose a variational model to learn this conditional distribution for neural machine translation: a variational encoderdecoder model that can be trained end-to-end. Different from the vanilla encoder-decoder model that generates target translations from hidden representations of source sentences alone, the variational model introduces a continuous latent variable to explicitly model underlying semantics of source sentences and to guide the generation of target translations. In order to perform efficient posterior inference and large-scale training, we build a neural posterior approximator conditioned on both the source and the target sides, and equip it with a reparameterization technique to estimate the variational lower bound. Experiments on both Chinese-English and English- German translation tasks show that the proposed variational neural machine translation achieves significant improvements over the vanilla neural machine translation baselines.",
"Recent advances in neural variational inference have spawned a renaissance in deep latent variable models. In this paper we introduce a generic variational inference framework for generative and conditional models of text. While traditional variational methods derive an analytic approximation for the intractable distributions over latent variables, here we construct an inference network conditioned on the discrete text input to provide the variational distribution. We validate this framework on two very different text modelling applications, generative document modelling and supervised question answering. Our neural variational document model combines a continuous stochastic document representation with a bag-of-words generative model and achieves the lowest reported perplexities on two standard test corpora. The neural answer selection model employs a stochastic representation layer within an attention mechanism to extract the semantics between a question and answer pair. On two question answering benchmarks this model exceeds all previous published benchmarks.",
"Recent work on generative text modeling has found that variational autoencoders (VAE) with LSTM decoders perform worse than simpler LSTM language models (, 2015). This negative result is so far poorly understood, but has been attributed to the propensity of LSTM decoders to ignore conditioning information from the encoder. In this paper, we experiment with a new type of decoder for VAE: a dilated CNN. By changing the decoder's dilation architecture, we control the size of context from previously generated words. In experiments, we find that there is a trade-off between contextual capacity of the decoder and effective use of encoding information. We show that when carefully managed, VAEs can outperform LSTM language models. We demonstrate perplexity gains on two datasets, representing the first positive language modeling result with VAE. Further, we conduct an in-depth investigation of the use of VAE (with our new decoding architecture) for semi-supervised and unsupervised labeling tasks, demonstrating gains over several strong baselines."
]
} |
1709.07109 | 2756946152 | A latent-variable model is introduced for text matching, inferring sentence representations by jointly optimizing generative and discriminative objectives. To alleviate typical optimization challenges in latent-variable models for text, we employ deconvolutional networks as the sequence decoder (generator), providing learned latent codes with more semantic information and better generalization. Our model, trained in an unsupervised manner, yields stronger empirical predictive performance than a decoder based on Long Short-Term Memory (LSTM), with fewer parameters and considerably faster training. Further, we apply it to text sequence-matching problems. The proposed model significantly outperforms several strong sentence-encoding baselines, especially in the semi-supervised setting. | The NVI framework has also been employed for text-generation problems, such as machine translation @cite_34 and dialogue generation @cite_33 , with the motivation to improve the diversity and controllability of generated sentences. Our work is distinguished from this prior research in two principal respects: (i) we applied the NVI framework for latent-variable models to text sequence-matching tasks, due to its ability to take advantage of unlabeled data and learn robust sentence embeddings; (ii) we employed deconvolutional networks, instead of the LSTM, as the decoder (generative) network. We demonstrated the effectiveness of our framework in both unsupervised and supervised (including semi-supervised) learning cases. | {
"cite_N": [
"@cite_34",
"@cite_33"
],
"mid": [
"2394571815",
"2399880602"
],
"abstract": [
"Models of neural machine translation are often from a discriminative family of encoderdecoders that learn a conditional distribution of a target sentence given a source sentence. In this paper, we propose a variational model to learn this conditional distribution for neural machine translation: a variational encoderdecoder model that can be trained end-to-end. Different from the vanilla encoder-decoder model that generates target translations from hidden representations of source sentences alone, the variational model introduces a continuous latent variable to explicitly model underlying semantics of source sentences and to guide the generation of target translations. In order to perform efficient posterior inference and large-scale training, we build a neural posterior approximator conditioned on both the source and the target sides, and equip it with a reparameterization technique to estimate the variational lower bound. Experiments on both Chinese-English and English- German translation tasks show that the proposed variational neural machine translation achieves significant improvements over the vanilla neural machine translation baselines.",
"Sequential data often possesses a hierarchical structure with complex dependencies between subsequences, such as found between the utterances in a dialogue. In an effort to model this kind of generative process, we propose a neural network-based generative architecture, with latent stochastic variables that span a variable number of time steps. We apply the proposed model to the task of dialogue response generation and compare it with recent neural network architectures. We evaluate the model performance through automatic evaluation metrics and by carrying out a human evaluation. The experiments demonstrate that our model improves upon recently proposed models and that the latent variables facilitate the generation of long outputs and maintain the context."
]
} |
1709.07078 | 2937802149 | 5G mobile networks are expected to provide pervasive high speed wireless connectivity, to support increasingly resource intensive user applications. Network hyper-densification therefore becomes necessary, though connecting to the Internet tens of thousands of base stations is non-trivial, especially in urban scenarios where optical fibre is difficult and costly to deploy. The millimetre wave (mm-wave) spectrum is a promising candidate for inexpensive multi-Gbps wireless backhauling, but exploiting this band for effective multi-hop data communications is challenging. In particular, resource allocation and scheduling of very narrow transmission/reception beams requires overcoming terminal deafness and link blockage problems, while managing fairness issues that arise when flows encounter dissimilar competition and traverse different numbers of links with heterogeneous quality. In this paper, we propose WiHaul, an airtime allocation and scheduling mechanism that overcomes these challenges specific to multi-hop mm-wave networks, guarantees max-min fairness among traffic flows, and ensures the overall available backhaul resources are fully utilised. We evaluate the proposed WiHaul scheme over a broad range of practical network conditions, and demonstrate up to 5 times individual throughput gains and a fivefold improvement in terms of measurable fairness, over recent mm-wave scheduling solutions. | Recent empirical studies confirm the millimetre-wave band (30--300GHz) will be able to support multi-Gbps link rates @cite_19 . Hence, it becomes a promising candidate to accommodate bandwidth intensive small-cell wireless backhauling solutions @cite_22 . Channel measurement efforts also confirm that the beamforming necessary to mitigate attenuation in mm-wave bands drastically reduces interference, and links can often be regarded as pseudo-wired @cite_54 . 
Wang et al. propose a code-book based beamforming protocol to set up multi-Gbps mm-wave communication links @cite_3 . Hur et al. design a beam alignment mechanism for mm-wave backhauling scenarios, tackling the effects of wind-induced beam misalignment @cite_1 . With mandatory use of beamforming, however, terminal deafness becomes a key challenge when scheduling transmissions/receptions @cite_36 . The throughput and energy consumption characteristics of different mm-wave bands are studied in @cite_5 . While we do not explicitly address energy efficiency aspects in our work, we recognise that a certain degree of energy efficiency can be inherently achieved through optimal airtime allocation and scheduling, which is at the core of our work. | {
"cite_N": [
"@cite_22",
"@cite_36",
"@cite_54",
"@cite_1",
"@cite_3",
"@cite_19",
"@cite_5"
],
"mid": [
"2120645748",
"1536549345",
"2097395379",
"2034651337",
"2166486216",
"2116334496",
""
],
"abstract": [
"The spectrum congestion experienced in today's common cellular bands has led to research and measurements to explore the vast bandwidths available at millimeter waves (mmWaves). NYU WIRELESS conducted E-band propagation measurements for both mobile and backhaul scenarios in 2013 in the dense urban environment of New York City using a sliding correlator channel sounder, by transmitting a 400 Mega chip per second (Mcps) PN sequence with a power delay profile (PDP) multipath time resolution of 2.5 ns. Measurements were made for more than 30 transmitter-to-receiver location combinations for both mobile and backhaul scenarios with separation distances up to 200 m. This paper presents results that support the use of directional steerable antennas at mmWave bands in order to achieve comparable path loss models and channel statistics to today's current cellular systems and at 28 GHz. These early results reveal that the mmWave spectrum, specifically the E-band, could be used for future cellular communications by exploiting multipath in urban environments with the help of beam-steering and beam combining.",
"With the ratification of the IEEE 802.11ad amendment to the 802.11 standard in December 2012, a major step has been taken to bring consumer wireless communication to the millimeter wave (mm-Wave) band. However, multi-Gbps throughput and small interference footprint come at the price of adverse signal propagation characteristics and require a fundamental rethinking of Wi-Fi communication principles. This paper describes the design assumptions taken into consideration for the IEEE 802.11ad standard and the novel techniques defined to overcome the challenges of mm-Wave communication. In particular we study the transition from omni-directional to highly directional communication and its impact on the design of IEEE 802.11ad.",
"We investigate spatial interference statistics for multigigabit outdoor mesh networks operating in the unlicensed 60-GHz \"millimeter (mm) wave\" band. The links in such networks are highly directional: Because of the small carrier wavelength (an order of magnitude smaller than those for existing cellular and wireless local area networks), narrow beams are essential for overcoming higher path loss and can be implemented using compact electronically steerable antenna arrays. Directionality drastically reduces interference, but it also leads to \"deafness,\" making implicit coordination using carrier sense infeasible. In this paper, we make a quantitative case for rethinking medium access control (MAC) design in such settings. Unlike existing MAC protocols for omnidirectional networks, where the focus is on interference management, we contend that MAC design for 60-GHz mesh networks can essentially ignore interference and must focus instead on the challenge of scheduling half-duplex transmissions with deaf neighbors. Our main contribution is an analytical framework for estimating the collision probability in such networks as a function of the antenna patterns and the density of simultaneously transmitting nodes. The numerical results from our interference analysis show that highly directional links can indeed be modeled as pseudowired, in that the collision probability is small even with a significant density of transmitters. Furthermore, simulation of a rudimentary directional slotted Aloha protocol shows that packet losses due to failed coordination are an order of magnitude higher than those due to collisions, confirming our analytical results and highlighting the need for more sophisticated coordination mechanisms.",
"Recently, there has been considerable interest in new tiered network cellular architectures, which would likely use many more cell sites than found today. Two major challenges will be i) providing backhaul to all of these cells and ii) finding efficient techniques to leverage higher frequency bands for mobile access and backhaul. This paper proposes the use of outdoor millimeter wave communications for backhaul networking between cells and mobile access within a cell. To overcome the outdoor impairments found in millimeter wave propagation, this paper studies beamforming using large arrays. However, such systems will require narrow beams, increasing sensitivity to movement caused by pole sway and other environmental concerns. To overcome this, we propose an efficient beam alignment technique using adaptive subspace sampling and hierarchical beam codebooks. A wind sway analysis is presented to establish a notion of beam coherence time. This highlights a previously unexplored tradeoff between array size and wind-induced movement. Generally, it is not possible to use larger arrays without risking a corresponding performance loss from wind-induced beam misalignment. The performance of the proposed alignment technique is analyzed and compared with other search and alignment methods. The results show significant performance improvement with reduced search time.",
"In order to realize high speed, long range, reliable transmission in millimeter-wave 60 GHz wireless personal area networks (60 GHz WPANs), we propose a beamforming (BF) protocol realized in media access control (MAC) layer on top of multiple physical layer (PHY) designs. The proposed BF protocol targets to minimize the BF set-up time and to mitigate the high path loss of 60 GHz WPAN systems. It consists of 3 stages, namely the device (DEV) to DEV linking, sector-level searching and beam-level searching. The division of the stages facilitates significant reduction in setup time as compared to BF protocols with exhaustive searching mechanisms. The proposed BF protocol employs discrete phase-shifters, which significantly simplifies the structure of DEVs as compared to the conventional BF with phase-and-amplitude adjustment, at the expense of a gain degradation of less than 1 dB. The proposed BF protocol is a complete design and PHY-independent, it is applicable to different antenna configurations. Simulation results show that the setup time of the proposed BF protocol is as small as 2 when compared to the exhaustive searching protocol. Furthermore, based on the codebooks with four phases per element, around 15.1 dB gain is achieved by using eight antenna elements at both transmitter and receiver, thereby enabling 1.6 Gbps-data-streaming over a range of three meters. Due to the flexibility in supporting multiple PHY layer designs, the proposed protocol has been adopted by the IEEE 802.15.3c as an optional functionality to realize Gbps communication systems.",
"The global bandwidth shortage facing wireless carriers has motivated the exploration of the underutilized millimeter wave (mm-wave) frequency spectrum for future broadband cellular communication networks. There is, however, little knowledge about cellular mm-wave propagation in densely populated indoor and outdoor environments. Obtaining this information is vital for the design and operation of future fifth generation cellular networks that use the mm-wave spectrum. In this paper, we present the motivation for new mm-wave cellular systems, methodology, and hardware for measurements and offer a variety of measurement results that show 28 and 38 GHz frequencies can be used when employing steerable directional antennas at base stations and mobile devices.",
""
]
} |
1709.07078 | 2937802149 | 5G mobile networks are expected to provide pervasive high speed wireless connectivity, to support increasingly resource intensive user applications. Network hyper-densification therefore becomes necessary, though connecting to the Internet tens of thousands of base stations is non-trivial, especially in urban scenarios where optical fibre is difficult and costly to deploy. The millimetre wave (mm-wave) spectrum is a promising candidate for inexpensive multi-Gbps wireless backhauling, but exploiting this band for effective multi-hop data communications is challenging. In particular, resource allocation and scheduling of very narrow transmission reception beams requires to overcome terminal deafness and link blockage problems, while managing fairness issues that arise when flows encounter dissimilar competition and traverse different numbers of links with heterogeneous quality. In this paper, we propose WiHaul, an airtime allocation and scheduling mechanism that overcomes these challenges specific to multi-hop mm-wave networks, guarantees max-min fairness among traffic flows, and ensures the overall available backhaul resources are fully utilised. We evaluate the proposed WiHaul scheme over a broad range of practical network conditions, and demonstrate up to 5 times individual throughput gains and a fivefold improvement in terms of measurable fairness, over recent mm-wave scheduling solutions. | Hemanth and Venkatesh analyse the performance of the 802.11ad SP mechanism in terms of frame delay @cite_52 . Several works build upon the 802.11ad standard and specify MAC protocol improvements for single-hop WLANs @cite_37 @cite_46 @cite_14 . Chandra et al. employ adaptive beamwidth to achieve improved channel utilisation @cite_37 . Sim et al. exploit dual-band channel access to address terminal deafness and improve throughput @cite_46 .
Optimal client association and airtime allocation are pursued in @cite_41 to maximise the utility of enterprise mm-wave deployments. | {
"cite_N": [
"@cite_37",
"@cite_14",
"@cite_41",
"@cite_52",
"@cite_46"
],
"mid": [
"2018674205",
"2162844094",
"2964271585",
"2323841200",
"2566251254"
],
"abstract": [
"",
"In this paper, we consider the directional multigigabit (DMG) transmission problem in IEEE 802.11ad wireless local area networks (WLANs) and design a random-access-based medium access control (MAC) layer protocol incorporated with a directional antenna and cooperative communication techniques. A directional cooperative MAC protocol, namely, D-CoopMAC, is proposed to coordinate the uplink channel access among DMG stations (STAs) that operate in an IEEE 802.11ad WLAN. Using a 3-D Markov chain model with consideration of the directional hidden terminal problem, we develop a framework to analyze the performance of the D-CoopMAC protocol and derive a closed-form expression of saturated system throughput. Performance evaluations validate the accuracy of the theoretical analysis and show that the performance of D-CoopMAC varies with the number of DMG STAs or beam sectors. In addition, the D-CoopMAC protocol can significantly improve system performance, as compared with the traditional IEEE 802.11ad MAC protocol.",
"Abstract Millimetre-wave (mmWave) technology is a promising candidate for meeting the intensifying demand for ultra fast wireless connectivity, especially in high-end enterprise networks. Very narrow beam forming is mandatory to mitigate the severe attenuation specific to the extremely high frequency (EHF) bands exploited. Simultaneously, this greatly reduces interference, but generates problematic communication blockages. As a consequence, client association control and scheduling in scenarios with densely deployed mmWave access points become particularly challenging, while policies designed for traditional wireless networks remain inappropriate. In this paper we formulate and solve these tasks as utility maximisation problems under different traffic regimes, for the first time in the mmWave context. We specify a set of low-complexity algorithms that capture distinctive terminal deafness and user demand constraints, while providing near-optimal client associations and airtime allocations, despite the problems’ inherent NP-completeness. To evaluate our solutions, we develop an NS-3 implementation of the IEEE 802.11ad protocol, which we construct upon preliminary 60GHz channel measurements. Simulation results demonstrate that our schemes provide up to 60 higher throughput as compared to the commonly used signal strength based association policy for mmWave networks, and outperform recently proposed load-balancing oriented solutions, as we accommodate the demand of 33 more clients in both static and mobile scenarios.",
"We present an analytical model for the access during the Service Periods (SP) of the IEEE 802.11ad Hybrid Medium Access Control protocol. As a performance measure of this protocol, we derive the worst case and average delay faced by the SP packets. We show that as the arrival rate of the SP packets increases, the delay increases linearly till a point, beyond which it grows exponentially. Further, we extend the model to variable length of beacon interval, and random allocation of SPs to the nodes. We show how a network designer can do optimal allocation of the SP and the CBAP duration to achieve a tradeoff between SP delay and CBAP throughput. We further extend our analysis for the case of heterogeneous system. Our analytical results are compared with simulation and the results show a good match.",
"Achieving multi-gigabit per second data rates, millimeter wave communication promises to accommodate future and current demands for very high speed wireless data transmission. However, the mandatory use of directional antennas brings significant challenges for the design of efficient MAC layer mechanisms. In particular, IEEE 802.11ad for the 60 GHz band lacks omni-directional transmissions and carrier sensing. This prevents stations from overhearing the actions of other stations, the so called “deafness” problem, which substantially impairs the efficiency and fairness of CSMA CA medium access. Most existing solutions to this problem depend on properties of lower frequency bands and thus do not apply to 60 GHz. In this paper, we propose a dual-band MAC protocol combining 60 GHz communication with co-existing 5 GHz interfaces. By broadcasting control messages on 5 GHz frequencies, we solve the deafness problem and can use the 60 GHz band exclusively for high rate data transmission. While our approach occupies air time on the 5 GHz band for control messages, it does achieve a net throughput gain (over both bands) of up to 65.3 compared to IEEE 802.11ad. In addition, our simulation results show an improvement of MAC fairness of up to 42.8 over IEEE 802.11ad."
]
} |
1709.07078 | 2937802149 | 5G mobile networks are expected to provide pervasive high speed wireless connectivity, to support increasingly resource intensive user applications. Network hyper-densification therefore becomes necessary, though connecting to the Internet tens of thousands of base stations is non-trivial, especially in urban scenarios where optical fibre is difficult and costly to deploy. The millimetre wave (mm-wave) spectrum is a promising candidate for inexpensive multi-Gbps wireless backhauling, but exploiting this band for effective multi-hop data communications is challenging. In particular, resource allocation and scheduling of very narrow transmission reception beams requires to overcome terminal deafness and link blockage problems, while managing fairness issues that arise when flows encounter dissimilar competition and traverse different numbers of links with heterogeneous quality. In this paper, we propose WiHaul, an airtime allocation and scheduling mechanism that overcomes these challenges specific to multi-hop mm-wave networks, guarantees max-min fairness among traffic flows, and ensures the overall available backhaul resources are fully utilised. We evaluate the proposed WiHaul scheme over a broad range of practical network conditions, and demonstrate up to 5 times individual throughput gains and a fivefold improvement in terms of measurable fairness, over recent mm-wave scheduling solutions. | A directional cooperative MAC protocol is introduced in @cite_14 , where user devices select intermediate nodes to relay the packets to the AP, in order to establish multi-hop paths that exhibit higher signal-to-noise ratio (SNR) than direct links. Mandke and Nettles propose a dual-band architecture for multi-hop 60GHz networks where scheduling and routing decisions are communicated at 5.2GHz @cite_9 . 
Based on their feasibility study of in-band wireless backhauling, Taori et al. present a qualitative scheduling framework for inter-base station communications @cite_53 . This closely resembles the Type 2 TDD scheme of LTE, with the difference that the authors apply it to in-band backhauling scenarios, whereas in the LTE standard this is specified for cellular access only. Despite considering the implications of terminal deafness, these designs do not tackle the airtime allocation problem. Relay selection to overcome blockage, together with scheduling in mm-wave backhauls, is tackled in @cite_27 with the aim of maximising throughput. However, neither airtime allocation nor fairness is taken into account. | {
"cite_N": [
"@cite_27",
"@cite_9",
"@cite_14",
"@cite_53"
],
"mid": [
"2770079622",
"",
"2162844094",
"2084592718"
],
"abstract": [
"Millimeter wave (mmWave) communication is a key enabling technology for 5G cellular systems. However, due to mmWave propagation characteristics, link length for very high rates is limited and will likely necessitate the use of relay nodes for longer-range ultra-high-speed backhaul communications. This paper investigates relay selection and scheduling to support high end-to-end throughput in mmWave relay-assisted backhaul networks in urban environments. A major challenge in urban environments is the presence of large obstacles (buildings) that block long line-of-sight paths, which arenecessary for very high capacity mmWave links. Using a 3D model for buildings targeted at urban environments, we provide optimal and efficient algorithms both for scheduling communications along a single mmWave relay-assisted path and for choosing the relay-assisted path with maximum throughput among all candidate paths connecting a given base station pair. In addition to proving optimality of these algorithms, we evaluate their performance through simulations based on a real urban topology. Simulation results show that our algorithms can produce short relay paths with end-to-end throughputs of around 10 Gbps and higher that are capable of providing virtual mmWave links for a wireless backhaul use case. Our algorithms improve throughput from 23 to 49 over a range of settings, as compared to average relay paths, and throughput can be more than doubled compared to some relay path choices with similar numbers of relays.",
"",
"In this paper, we consider the directional multigigabit (DMG) transmission problem in IEEE 802.11ad wireless local area networks (WLANs) and design a random-access-based medium access control (MAC) layer protocol incorporated with a directional antenna and cooperative communication techniques. A directional cooperative MAC protocol, namely, D-CoopMAC, is proposed to coordinate the uplink channel access among DMG stations (STAs) that operate in an IEEE 802.11ad WLAN. Using a 3-D Markov chain model with consideration of the directional hidden terminal problem, we develop a framework to analyze the performance of the D-CoopMAC protocol and derive a closed-form expression of saturated system throughput. Performance evaluations validate the accuracy of the theoretical analysis and show that the performance of D-CoopMAC varies with the number of DMG STAs or beam sectors. In addition, the D-CoopMAC protocol can significantly improve system performance, as compared with the traditional IEEE 802.11ad MAC protocol.",
"Cost-effective and scalable wireless backhaul solutions are essential for realizing the 5G vision of providing gigabits per second anywhere. Not only is wireless backhaul essential to support network densification based on small cell deployments, but also for supporting very low latency inter-BS communication to deal with intercell interference. Multiplexing backhaul and access on the same frequency band (in-band wireless backhaul) has obvious cost benefits from the hardware and frequency reuse perspective, but poses significant technology challenges. We consider an in-band solution to meet the backhaul and inter-BS coordination challenges that accompany network densification. Here, we present an analysis to persuade the readers of the feasibility of in-band wireless backhaul, discuss realistic deployment and system assumptions, and present a scheduling scheme for inter- BS communications that can be used as a baseline for further improvement. We show that an inband wireless backhaul for data backhauling and inter-BS coordination is feasible without significantly hurting the cell access capacities."
]
} |
1709.07078 | 2937802149 | 5G mobile networks are expected to provide pervasive high speed wireless connectivity, to support increasingly resource intensive user applications. Network hyper-densification therefore becomes necessary, though connecting to the Internet tens of thousands of base stations is non-trivial, especially in urban scenarios where optical fibre is difficult and costly to deploy. The millimetre wave (mm-wave) spectrum is a promising candidate for inexpensive multi-Gbps wireless backhauling, but exploiting this band for effective multi-hop data communications is challenging. In particular, resource allocation and scheduling of very narrow transmission reception beams requires to overcome terminal deafness and link blockage problems, while managing fairness issues that arise when flows encounter dissimilar competition and traverse different numbers of links with heterogeneous quality. In this paper, we propose WiHaul, an airtime allocation and scheduling mechanism that overcomes these challenges specific to multi-hop mm-wave networks, guarantees max-min fairness among traffic flows, and ensures the overall available backhaul resources are fully utilised. We evaluate the proposed WiHaul scheme over a broad range of practical network conditions, and demonstrate up to 5 times individual throughput gains and a fivefold improvement in terms of measurable fairness, over recent mm-wave scheduling solutions. | Su and Zhang solve optimal network throughput allocation heuristically in multi-channel settings, without fairness guarantees @cite_48 . Ford et al. target sum utility maximisation in a self-backhauled mm-wave setting @cite_40 . Semiari et al. formulate the sharing of mm-wave backhauls as a one-to-many matching game, seeking to maximise the average sum rate @cite_47 . Zhu et al. propose a maximum independent set (MIS) based scheduling algorithm to maximise QoS in mm-wave backhauls @cite_21 .
Similarly, Niu et al. propose MIS-based scheduling that aims to minimise energy consumption @cite_29 . A joint scheduling and power allocation problem is also solved with MIS in @cite_44 . In this body of work, scheduling is performed with the explicit goal of achieving concurrent transmissions among non-interfering links. The WiHaul mechanism we propose allows for concurrent transmissions by default. Moreover, WiHaul not only improves throughput performance, but also explicitly addresses fairness, taking into account all flow demands, link rates, and the level of competition among them. In particular, we address airtime allocation and scheduling in multi-hop mm-wave networks using the max-min fairness criterion. | {
"cite_N": [
"@cite_48",
"@cite_29",
"@cite_21",
"@cite_44",
"@cite_40",
"@cite_47"
],
"mid": [
"2168199950",
"2234862916",
"2501770904",
"2612523361",
"1581040617",
"2611421367"
],
"abstract": [
"Since the unlicensed 60 GHz band has the extensively wide continuous spectrum and its corresponding millimeter-wave signal has high directivity gain, the 60 GHz band is a good option for the broadband wireless mesh networks. This paper focuses on link scheduling and routing over the 60 GHz multi-channel wireless mesh networks, where each mesh router has multiple radios and multiple directional antennas. We formulate a linear programming based framework, which incorporates multi-channel and multi-radio, directional antenna, and 60 GHz millimeter-wave communications, to model the network throughput of the directional antenna based 60 GHz mesh networks. Under this framework, we derive the solution to the problem of maximizing the network throughput subject to the fairness constraint and the directional-antenna based wireless channel interference constraint. Then, we design a heuristic joint link scheduling and routing scheme which aims at approximately attaining the optimal solution to the joint optimization problem under our proposed framework. We conduct extensive simulations to validate and evaluate our proposed scheme.",
"Heterogeneous cellular networks (HCNs) are emerging as a promising candidate for the fifth-generation (5G) mobile network. With base stations (BSs) of small cells densely deployed, the cost-effective, flexible, and green backhaul solution has become one of the most urgent and critical challenges. With vast amounts of spectrum available, wireless backhaul in the millimeter-wave (mmWave) band is able to provide transmission rates of several gigabits per second. The mmWave backhaul utilizes beamforming to achieve directional transmission, and concurrent transmissions under low interlink interference can be enabled to improve network capacity. To achieve an energy-efficient solution for mmWave backhauling, we first formulate the problem of minimizing the energy consumption via concurrent transmission scheduling and power control into a mixed integer nonlinear program (MINLP). Then, we develop an energy-efficient and practical mmWave backhauling scheme, which consists of the maximum independent set (MIS)-based scheduling algorithm and the power control algorithm. We also theoretically analyze the conditions that our scheme reduces energy consumption, as well as the choice of the interference threshold. Through extensive simulations under various traffic patterns and system parameters, we demonstrate the superior performance of our scheme in terms of energy efficiency and analyze the choice of the interference threshold under different traffic loads, BS distributions, and the maximum transmission power.",
"With the explosive growth of mobile data demand, small cells densely deployed underlying the homogeneous macro-cells are emerging as a promising candidate for the fifth generation (5G) mobile network. The backhaul communication for small cells poses a significant challenge, and with huge bandwidth available in the mmWave band, the wireless backhaul at mmWave frequencies can be a promising backhaul solution for small cells. In this paper, we propose the Maximum QoS-aware Independent Set (MQIS) based scheduling algorithm for the mmWave backhaul network of small cells to maximize the number of flows with their QoS requirements satisfied. In the algorithm, concurrent transmissions and the QoS aware priority are exploited to achieve more successfully scheduled flows and higher network throughput. Simulations in the 73 GHz band are conducted to demonstrate the superior performance of our algorithm in terms of the number of successfully scheduled flows and the system throughput compared with other existing schemes.",
"Millimeter wave (mm-wave) frequencies provide orders of magnitude larger spectrum than current cellular allocations and allow usage of high dimensional antenna arrays for exploiting beamforming and spatial multiplexing. This paper addresses the problem of joint scheduling and radio resource allocation optimization in mm-wave heterogeneous networks where mm-wave small cells are densely deployed underlying the conventional homogeneous macro cells. Furthermore, mm-wave small cells operate in time division duplexing mode and share the same spectrum and air-interface for backhaul and access links. The scheme proposed in this paper can significantly enhance network throughput by exploiting space-division multiple access, i.e., allowing non-conflicting flows to be transmitted simultaneously. The optimization problem of maximizing network throughput is formulated as a mixed integer nonlinear programming problem. To find a practical solution, this is decomposed into three steps: concurrent transmission scheduling, time resource allocation, and power allocation. A maximum independent set based algorithm is developed for concurrent transmission scheduling to improve resource utilization efficiency with low computational complexity. Through extensive simulations, we demonstrate that the proposed algorithm achieves significant gain over benchmark schemes in terms of user throughput.",
"Millimeter wave (mmW) bands between 30 and 300 GHz have attracted considerable attention for nextgeneration cellular networks due to vast quantities of available spectrum and the possibility of very high-dimensional antenna arrays. However, a key issue in these systems is range: mmW signals are extremely vulnerable to shadowing and poor high-frequency propagation. Multi-hop relaying is therefore a natural technology for such systems to improve cell range and cell edge rates without the addition of wired access points. This paper studies the problem of scheduling for a simple infrastructure cellular relay system where communication between wired base stations and User Equipment follow a hierarchical tree structure through fixed relay nodes. Such a systems builds naturally on existing cellular mmW backhaul by adding mmW in the access links. A key feature of the proposed system is that TDD duplexing selections can be made on a link-by-link basis due to directional isolation from other links. We devise an efficient, greedy algorithm for centralized scheduling that maximizes network utility by jointly optimizing the duplexing schedule and resources allocation for dense, relay-enhanced OFDMA TDD mmW networks. The proposed algorithm can dynamically adapt to loading, channel conditions and traffic demands. Significant throughput gains and improved resource utilization offered by our algorithm over the static, globally-synchronized TDD patterns are demonstrated through simulations based on empirically-derived channel models at 28 GHz.",
"In this paper, a novel framework is proposed for optimizing the operation and performance of a large-scale multi-hop millimeter wave (mmW) backhaul within a wireless small cell network having multiple mobile network operators (MNOs). The proposed framework enables the small base stations to jointly decide on forming the multi-hop, mmW links over backhaul infrastructure that belongs to multiple, independent MNOs, while properly allocating resources across those links. In this regard, the problem is addressed using a novel framework based on matching theory composed of two, highly inter-related stages: a multi-hop network formation stage and a resource management stage. One unique feature of this framework is that it jointly accounts for both wireless channel characteristics and economic factors during both network formation and resource management. The multi-hop network formation stage is formulated as a one-to-many matching game, which is solved using a novel algorithm, that builds on the so-called deferred acceptance algorithm and is shown to yield a stable and Pareto optimal multi-hop mmW backhaul network. Then, a one-to-many matching game is formulated to enable proper resource allocation across the formed multi-hop network. This game is then shown to exhibit peer effects and, as such, a novel algorithm is developed to find a stable and optimal resource management solution that can properly cope with these peer effects. Simulation results show that, with manageable complexity, the proposed framework yields substantial gains, in terms of the average sum rate, reaching up to 27 and 54 , respectively, compared with a non-cooperative scheme in which inter-operator sharing is not allowed and a random allocation approach. The results also show that our framework improves the statistics of the backhaul sum rate and provides insights on how to manage pricing and the cost of the cooperative mmW backhaul network for the MNOs."
]
} |
1709.06909 | 2758611397 | Differential evolution (DE) algorithm with a small population size is called Micro-DE (MDE). A small population size decreases the computational complexity but also reduces the exploration ability of DE by limiting the population diversity. In this paper, we propose the idea of combining ensemble mutation scheme selection and opposition-based learning concepts to enhance the diversity of population in MDE at mutation and selection stages. The proposed algorithm enhances the diversity of population by generating a random mutation scale factor per individual and per dimension, randomly assigning a mutation scheme to each individual in each generation, and diversifying individuals selection using opposition-based learning. This approach is easy to implement and does not require the setting of mutation scheme selection and mutation scale factor. Experimental results are conducted for a variety of objective functions with low and high dimensionality on the CEC Black-Box Optimization Benchmarking 2015 (CEC-BBOB 2015). The results show superior performance of the proposed algorithm compared to the other micro-DE algorithms. | DE is driven by the difference of two or more vectors. Amplification of the difference vector is adjusted by the mutation scale factor @math , which controls the evolving rate of the population. A small mutation scale factor decreases exploration and may result in premature convergence, whereas a large mutation scale factor increases exploration, which results in a longer convergence time. The optimal value of @math is usually set based on the nature of the problem and experimental observations @cite_26 . The MDEVM algorithm @cite_13 relaxes the static selection of @math as a hyper-parameter by generating a random @math for each dimension of each individual in the population. This randomness adds diversity to the population by amplifying the difference vectors at various scales.
The experimental results in @cite_12 have demonstrated outstanding performance of this technique for diversity enhancement. | {
"cite_N": [
"@cite_26",
"@cite_13",
"@cite_12"
],
"mid": [
"",
"1988953708",
"2952134149"
],
"abstract": [
"",
"One of the main disadvantages of population-based evolutionary algorithms (EAs) is their high computational cost due to the nature of evaluation, specially when the population size is large. The micro-algorithms employ a very small number of individuals, which can accelerate the convergence speed of algorithms dramatically, while it highly increases the stagnation risk. One approach to overcome the stagnation problem can be increasing the diversity of the population. To do so, a microdifferential evolution with vectorized random mutation factor (MDEVM) algorithm is proposed in this paper, which utilizes the small size population benefit while preventing stagnation through diversification of the population. The proposed algorithm is tested on the 28 benchmark functions provided at the IEEE congress on evolutionary computation 2013 (CEC-2013). Simulation results on the benchmark functions demonstrate that the proposed algorithm improves the convergence speed of its parent algorithm.",
"The differential evolution (DE) algorithm suffers from high computational time due to slow nature of evaluation. In contrast, micro-DE (MDE) algorithms employ a very small population size, which can converge faster to a reasonable solution. However, these algorithms are vulnerable to a premature convergence as well as to high risk of stagnation. In this paper, MDE algorithm with vectorized random mutation factor (MDEVM) is proposed, which utilizes the small size population benefit while empowers the exploration ability of mutation factor through randomizing it in the decision variable level. The idea is supported by analyzing mutation factor using Monte-Carlo based simulations. To facilitate the usage of MDE algorithms with very-small population sizes, new mutation schemes for population sizes less than four are also proposed. Furthermore, comprehensive comparative simulations and analysis on performance of the MDE algorithms over various mutation schemes, population sizes, problem types (i.e. uni-modal, multi-modal, and composite), problem dimensionalities, and mutation factor ranges are conducted by considering population diversity analysis for stagnation and trapping in local optimum situations. The studies are conducted on 28 benchmark functions provided for the IEEE CEC-2013 competition. Experimental results demonstrate high performance and convergence speed of the proposed MDEVM algorithm."
]
} |
1709.06909 | 2758611397 | Differential evolution (DE) algorithm with a small population size is called Micro-DE (MDE). A small population size decreases the computational complexity but also reduces the exploration ability of DE by limiting the population diversity. In this paper, we propose the idea of combining ensemble mutation scheme selection and opposition-based learning concepts to enhance the diversity of population in MDE at mutation and selection stages. The proposed algorithm enhances the diversity of population by generating a random mutation scale factor per individual and per dimension, randomly assigning a mutation scheme to each individual in each generation, and diversifying individuals selection using opposition-based learning. This approach is easy to implement and does not require the setting of mutation scheme selection and mutation scale factor. Experimental results are conducted for a variety of objective functions with low and high dimensionality on the CEC Black-Box Optimization Benchmarking 2015 (CEC-BBOB 2015). The results show superior performance of the proposed algorithm compared to the other micro-DE algorithms. | Memetic algorithms are hybrid evolutionary algorithms. These algorithms can solve optimization problems by utilizing deterministic local search within the evolutionary process. This technique, along with the cDE algorithm, is used to develop optimization algorithms on control cards @cite_15 . The objective is to design an optimization algorithm that can function in the absence of full-power computational systems. | {
"cite_N": [
"@cite_15"
],
"mid": [
"2057752717"
],
"abstract": [
"This article deals with optimization problems to be solved in the absence of a full power computer device. The goal is to solve a complex optimization problem by using a control card related to portable devices, e.g. for the control of commercial robots. In order to handle this class of optimization problems, a novel Memetic Computing approach is presented. The proposed algorithm employs a Differential Evolution framework which instead of processing an actual population of candidate solutions, makes use of a statistical representation of the population which evolves over time. In addition, the framework uses a stochastic local search algorithm which attempts to enhance the performance of the elite. In this way, the memetic logic of performing the optimization by observing the decision space from complementary perspectives can be integrated within computational devices characterized by a limited memory. The proposed algorithm, namely Memetic compact Differential Evolution (McDE), has been tested and compared with other algorithms belonging to the same category for a real-world industrial application, i.e. the control system design of a cartesian robot for variable mass movements. For this real-world application, the proposed McDE displays high performance and has proven to considerably outperform other compact algorithms representing the current state-of-the-art in this sub-field of computational intelligence."
]
} |
1709.06909 | 2758611397 | Differential evolution (DE) algorithm with a small population size is called Micro-DE (MDE). A small population size decreases the computational complexity but also reduces the exploration ability of DE by limiting the population diversity. In this paper, we propose the idea of combining ensemble mutation scheme selection and opposition-based learning concepts to enhance the diversity of population in MDE at mutation and selection stages. The proposed algorithm enhances the diversity of population by generating a random mutation scale factor per individual and per dimension, randomly assigning a mutation scheme to each individual in each generation, and diversifying individuals selection using opposition-based learning. This approach is easy to implement and does not require the setting of mutation scheme selection and mutation scale factor. Experimental results are conducted for a variety of objective functions with low and high dimensionality on the CEC Black-Box Optimization Benchmarking 2015 (CEC-BBOB 2015). The results show superior performance of the proposed algorithm compared to the other micro-DE algorithms. | A novel type of ensemble DE, called MPEDE, is proposed in @cite_28 . It is a multi-population based approach which deploys a dynamic ensemble of multiple mutation strategies. It controls parameters of the algorithm such as the mutation scale factor and crossover rate @cite_35 . This method uses the "current-to-pbest/1", "current-to-rand/1", and "rand/1" mutation schemes @cite_28 . Another method for unconstrained continuous optimization problems is @math JADE @cite_38 . It is an adaptive DE approach with a small population size. The @math JADE uses a new mutation operator, called "current-by-rand-to-pbest" @cite_38 . | {
"cite_N": [
"@cite_28",
"@cite_35",
"@cite_38"
],
"mid": [
"2134154181",
"",
"1192856402"
],
"abstract": [
"A multi-population based approach is proposed to realize the adapted ensemble of multiple strategies of differential evolution. The control parameters of each mutation strategy are adapted independently. Extensive experiments are conducted to test the performance of multi-population ensemble DE (MPEDE). Differential evolution (DE) is among the most efficient evolutionary algorithms (EAs) for global optimization and now widely applied to solve diverse real-world applications. As the most appropriate configuration of DE to efficiently solve different optimization problems can be significantly different, an appropriate combination of multiple strategies into one DE variant attracts increasing attention recently. In this study, we propose a multi-population based approach to realize an ensemble of multiple strategies, thereby resulting in a new DE variant named multi-population ensemble DE (MPEDE) which simultaneously consists of three mutation strategies, i.e., \"current-to-pbest/1\" and \"current-to-rand/1\" and \"rand/1\". There are three equally sized smaller indicator subpopulations and one much larger reward subpopulation. Each constituent mutation strategy has one indicator subpopulation. After every certain number of generations, the current best performing mutation strategy will be determined according to the ratios between fitness improvements and consumed function evaluations. Then the reward subpopulation will be allocated to the determined best performing mutation strategy dynamically. As a result, better mutation strategies obtain more computational resources in an adaptive manner during the evolution. The control parameters of each mutation strategy are adapted independently as well. 
Extensive experiments on the suite of CEC 2005 benchmark functions and comprehensive comparisons with several other efficient DE variants show the competitive performance of the proposed MPEDE (Matlab codes of MPEDE are available from http://guohuawunudt.gotoip2.com/publications.html).",
"",
"This paper proposes a new differential evolution (DE) algorithm for unconstrained continuous optimisation problems, termed @math μJADE, that uses a small or micro' ( @math μ) population. The main contribution of the proposed DE is a new mutation operator, current-by-rand-to-pbest.' With a population size less than 10, @math μJADE is able to solve some classical multimodal benchmark problems of 30 and 100 dimensions as reliably as some state-of-the-art DE algorithms using conventionally sized populations. The algorithm also compares favourably to other small population DE variants and classical DE."
]
} |
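The three mutation strategies that MPEDE ensembles, "current-to-pbest/1", "current-to-rand/1", and "rand/1", have standard textbook forms. A minimal NumPy sketch follows; the function names and the parameters `f` and `k` are illustrative conventions, not code from the cited work.

```python
import numpy as np

def rand_1(pop, f, r1, r2, r3):
    # DE/rand/1: a random base vector plus one scaled difference vector
    return pop[r1] + f * (pop[r2] - pop[r3])

def current_to_rand_1(x, pop, f, k, r1, r2, r3):
    # DE/current-to-rand/1: move the current vector toward a random one
    return x + k * (pop[r1] - x) + f * (pop[r2] - pop[r3])

def current_to_pbest_1(x, pop, f, pbest, r1, r2):
    # DE/current-to-pbest/1 (JADE-style): move toward one of the top-ranked individuals
    return x + f * (pop[pbest] - x) + f * (pop[r1] - pop[r2])

pop = np.arange(12, dtype=float).reshape(4, 3)
print(rand_1(pop, 0.5, 0, 1, 2))  # pop[0] + 0.5 * (pop[1] - pop[2])
```

MPEDE's contribution is not the operators themselves but the dynamic allocation of a large "reward" subpopulation to whichever of these three operators has recently produced the best fitness improvement per function evaluation.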
1709.06909 | 2758611397 | Differential evolution (DE) algorithm with a small population size is called Micro-DE (MDE). A small population size decreases the computational complexity but also reduces the exploration ability of DE by limiting the population diversity. In this paper, we propose the idea of combining ensemble mutation scheme selection and opposition-based learning concepts to enhance the diversity of population in MDE at mutation and selection stages. The proposed algorithm enhances the diversity of population by generating a random mutation scale factor per individual and per dimension, randomly assigning a mutation scheme to each individual in each generation, and diversifying individuals selection using opposition-based learning. This approach is easy to implement and does not require the setting of mutation scheme selection and mutation scale factor. Experimental results are conducted for a variety of objective functions with low and high dimensionality on the CEC Black-Box Optimization Benchmarking 2015 (CEC-BBOB 2015). The results show superior performance of the proposed algorithm compared to the other micro-DE algorithms. | The random perturbation method utilizes the mutation idea from the genetic algorithm (GA) to randomly change the population vector parameters with a fixed probability @cite_8 . A modified DE with a smaller population size can add disturbance to the mutation operation @cite_18 . An adaptive system controls the intensity of the disturbance based on the performance improvement during generations. A combination of a modified Breeder GA mutation scheme and a random mutation scheme can help DE avoid stagnation and/or premature convergence @cite_11 . | {
"cite_N": [
"@cite_18",
"@cite_11",
"@cite_8"
],
"mid": [
"2021442909",
"2075524109",
""
],
"abstract": [
"As one of the popular evolutionary algorithms, differential evolution (DE) shows outstanding convergence rate on continuous optimization problems. But prematurity probably still occurs in classical DE when using relatively small population, which is discussed in this paper. Considering that large population may significantly raise the computational effort, we propose a modified DE using smaller population (DESP) by introducing extra disturbance to its mutation operation. In addition, an adaptive adjustment scheme is designed to control the disturbance intensity according to the improvement during the evolution. To test the performance of DESP, two groups of experiments are conducted. The results show that DESP outperforms DE in terms of convergence rate and accuracy.",
"Abstract The purpose of this paper is to present a new and an alternative differential evolution (ADE) algorithm for solving unconstrained global optimization problems. In the new algorithm, a new directed mutation rule is introduced based on the weighted difference vector between the best and the worst individuals of a particular generation. The mutation rule is combined with the basic mutation strategy through a linear decreasing probability rule. This modification is shown to enhance the local search ability of the basic DE and to increase the convergence rate. Two new scaling factors are introduced as uniform random variables to improve the diversity of the population and to bias the search direction. Additionally, a dynamic non-linear increased crossover probability scheme is utilized to balance the global exploration and local exploitation. Furthermore, a random mutation scheme and a modified Breeder Genetic Algorithm (BGA) mutation scheme are merged to avoid stagnation and/or premature convergence. Numerical experiments and comparisons on a set of well-known high dimensional benchmark functions indicate that the improved algorithm outperforms and is superior to other existing algorithms in terms of final solution quality, success rate, convergence rate, and robustness.",
""
]
} |
1709.06909 | 2758611397 | Differential evolution (DE) algorithm with a small population size is called Micro-DE (MDE). A small population size decreases the computational complexity but also reduces the exploration ability of DE by limiting the population diversity. In this paper, we propose the idea of combining ensemble mutation scheme selection and opposition-based learning concepts to enhance the diversity of population in MDE at mutation and selection stages. The proposed algorithm enhances the diversity of population by generating a random mutation scale factor per individual and per dimension, randomly assigning a mutation scheme to each individual in each generation, and diversifying individuals selection using opposition-based learning. This approach is easy to implement and does not require the setting of mutation scheme selection and mutation scale factor. Experimental results are conducted for a variety of objective functions with low and high dimensionality on the CEC Black-Box Optimization Benchmarking 2015 (CEC-BBOB 2015). The results show superior performance of the proposed algorithm compared to the other micro-DE algorithms. | A large population size in the DE algorithm causes a high computational cost but adds more exploration ability to the population. The small population size in the MDE algorithm reduces the number of function evaluations (per generation) at the cost of less exploration capability and a higher risk of premature convergence @cite_13 . | {
"cite_N": [
"@cite_13"
],
"mid": [
"1988953708"
],
"abstract": [
"One of the main disadvantages of population-based evolutionary algorithms (EAs) is their high computational cost due to the nature of evaluation, specially when the population size is large. The micro-algorithms employ a very small number of individuals, which can accelerate the convergence speed of algorithms dramatically, while it highly increases the stagnation risk. One approach to overcome the stagnation problem can be increasing the diversity of the population. To do so, a microdifferential evolution with vectorized random mutation factor (MDEVM) algorithm is proposed in this paper, which utilizes the small size population benefit while preventing stagnation through diversification of the population. The proposed algorithm is tested on the 28 benchmark functions provided at the IEEE congress on evolutionary computation 2013 (CEC-2013). Simulation results on the benchmark functions demonstrate that the proposed algorithm improves the convergence speed of its parent algorithm."
]
} |
1709.06909 | 2758611397 | Differential evolution (DE) algorithm with a small population size is called Micro-DE (MDE). A small population size decreases the computational complexity but also reduces the exploration ability of DE by limiting the population diversity. In this paper, we propose the idea of combining ensemble mutation scheme selection and opposition-based learning concepts to enhance the diversity of population in MDE at mutation and selection stages. The proposed algorithm enhances the diversity of population by generating a random mutation scale factor per individual and per dimension, randomly assigning a mutation scheme to each individual in each generation, and diversifying individuals selection using opposition-based learning. This approach is easy to implement and does not require the setting of mutation scheme selection and mutation scale factor. Experimental results are conducted for a variety of objective functions with low and high dimensionality on the CEC Black-Box Optimization Benchmarking 2015 (CEC-BBOB 2015). The results show superior performance of the proposed algorithm compared to the other micro-DE algorithms. | A population size adaptation method is proposed in @cite_16 which measures the Euclidean distance between individuals to determine whether the population diversity is poor and/or the population is moving toward stagnation. Depending on the search situation, it generates new individuals. A population size reduction method for cDE is proposed in @cite_31 , where the population size gradually reduces during the evolution. Cumu-DE is an adaptive DE in which the effective population size is adapted automatically using a mechanism based on a probabilistic model @cite_19 . The term effective refers to the part of the population whose size shrinks as the algorithm accumulates more successful trials. Gradual reduction of the population size is another approach that has demonstrated higher robustness and efficiency compared to generic DE @cite_25 . | {
"cite_N": [
"@cite_19",
"@cite_31",
"@cite_16",
"@cite_25"
],
"mid": [
"2046737544",
"",
"2007220325",
"2018188932"
],
"abstract": [
"A new adaptive Differential Evolution algorithm called EWMA-DE is proposed. In original Differential Evolution algorithm three different control parameter values must be pre-specified by the user a priori: population size, crossover constant and mutation scale factor. Choosing good parameters can be very difficult for the user, especially for the practitioners. In the proposed algorithm the mutation scale factor is adapted using a novel exponential moving average based mechanism, while the other control parameters are kept fixed as in standard Differential Evolution. The algorithm was initially evaluated by using the set of 25 benchmark functions provided by CEC2005 special session on real-parameter optimization and compared with the results of standard DE/rand/1/bin version. Results turned out to be rather promising; EWMA-DE outperformed the original Differential Evolution in majority of tested cases, which is demonstrating the potential of the proposed adaptation approach.",
"",
"In differential evolution (DE), there are many adaptive algorithms proposed for parameters adaptation. However, they mainly aim at tuning the amplification factor F and crossover probability CR. When the population diversity is at a low level or the population becomes stagnant, the population is not able to improve any more. To enhance the performance of DE algorithms, in this paper, we propose a method of population adaptation. The proposed method can identify the moment when the population diversity is poor or the population stagnates by measuring the Euclidean distances between individuals of a population. When the moment is identified, the population will be regenerated to increase diversity or to eliminate the stagnation issue. The population adaptation is incorporated into the jDE algorithm and is tested on a set of 25 scalable CEC05 benchmark functions. The results show that the population adaptation can significantly improve the performance of the jDE algorithm. Even if the population size of jDE is small, the jDE algorithm with population adaptation also has a superior performance in comparisons with several other peer algorithms for high-dimension function optimization.",
"This paper studies the efficiency of a recently defined population-based direct global optimization method called Differential Evolution with self-adaptive control parameters. The original version uses fixed population size but a method for gradually reducing population size is proposed in this paper. It improves the efficiency and robustness of the algorithm and can be applied to any variant of a Differential Evolution algorithm. The proposed modification is tested on commonly used benchmark problems for unconstrained optimization and compared with other optimization methods such as Evolutionary Algorithms and Evolution Strategies."
]
} |
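The stagnation test in the population-adaptation method above, which measures Euclidean distances between individuals, can be sketched as a mean pairwise distance falling below a threshold. The function names and the threshold value are illustrative assumptions, not details from the cited paper.

```python
import numpy as np

def mean_pairwise_distance(pop):
    """Mean Euclidean distance over all ordered pairs of distinct individuals."""
    diff = pop[:, None, :] - pop[None, :, :]       # pairwise displacement tensor
    dist = np.sqrt((diff ** 2).sum(axis=-1))       # pairwise distance matrix
    n = len(pop)
    return dist.sum() / (n * (n - 1))              # the diagonal contributes zero

def needs_regeneration(pop, threshold=1e-3):
    # Flag poor diversity / stagnation when the population collapses together
    return mean_pairwise_distance(pop) < threshold

pop = np.array([[0.0, 0.0], [3.0, 4.0]])
print(mean_pairwise_distance(pop))  # 5.0
```

When the flag fires, a method like the one in @cite_16 would regenerate individuals to restore diversity; this sketch only covers the detection side.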
1709.06909 | 2758611397 | Differential evolution (DE) algorithm with a small population size is called Micro-DE (MDE). A small population size decreases the computational complexity but also reduces the exploration ability of DE by limiting the population diversity. In this paper, we propose the idea of combining ensemble mutation scheme selection and opposition-based learning concepts to enhance the diversity of population in MDE at mutation and selection stages. The proposed algorithm enhances the diversity of population by generating a random mutation scale factor per individual and per dimension, randomly assigning a mutation scheme to each individual in each generation, and diversifying individuals selection using opposition-based learning. This approach is easy to implement and does not require the setting of mutation scheme selection and mutation scale factor. Experimental results are conducted for a variety of objective functions with low and high dimensionality on the CEC Black-Box Optimization Benchmarking 2015 (CEC-BBOB 2015). The results show superior performance of the proposed algorithm compared to the other micro-DE algorithms. | OBL has Type-I and Type-II schemes @cite_3 , @cite_1 . The Type-I scheme has enhanced the performance of micro-ODE for image thresholding @cite_0 . This approach showed better performance than the MDE algorithm. | {
"cite_N": [
"@cite_0",
"@cite_1",
"@cite_3"
],
"mid": [
"2107722300",
"2043846600",
"2162745921"
],
"abstract": [
"Image thresholding is a challenging task in image processing field. Many efforts have already been made to propose universal, robust methods to handle a wide range of images. Previously by the same authors, an optimization-based thresholding approach was introduced. According to the proposed approach, differential evolution (DE) algorithm, minimizes dissimilarity between the input grey-level image and the bi-level (thresholded) image. In the current paper, micro opposition-based differential evolution (micro-ODE), DE with very small population size and opposition-based population initialization, has been proposed. Then, it is compared with a well-known thresholding method, Kittler algorithm and also with its non-opposition-based version (micro-DE). In overall, the proposed approach outperforms Kittler method over 16 challenging test images. Furthermore, the results confirm that the micro-ODE is faster than micro-DE because of embedding the opposition-based population initialization.",
"The concept of opposition-based learning (OBL) can be categorized into Type-I and Type-II OBL methodologies. The Type-I OBL is based on the opposite points in the variable space while the Type-II OBL considers the opposite of function value on the landscape. In the past few years, many research works have been conducted on development of Type-I OBL-based approaches with application in science and engineering, such as opposition-based differential evolution (ODE). However, compared to Type-I OBL, which cannot address a real sense of opposition in term of objective value, the Type-II OBL is capable to discover more meaningful knowledge about problem’s landscape. Due to natural difficulty of proposing a Type-II-based approach, very limited research has been reported in that direction. In this paper, for the first time, the concept of Type-II OBL has been investigated in detail in optimization; also it is applied on the DE algorithm as a case study. The proposed algorithm is called opposition-based differential evolution Type-II (ODE-II) algorithm; it is validated on the testbed proposed for the IEEE Congress on Evolutionary Computation 2013 (IEEE CEC-2013) contest with 28 benchmark functions. Simulation results on the benchmark functions demonstrate the effectiveness of the proposed method as the first step for further developments in Type-II OBL-based schemes.",
"Opposition-based learning as a new scheme for machine intelligence is introduced. Estimates and counter-estimates, weights and opposite weights, and actions versus counter-actions are the foundation of this new approach. Examples are provided. Possibilities for extensions of existing learning algorithms are discussed. Preliminary results are provided"
]
} |
1709.06909 | 2758611397 | Differential evolution (DE) algorithm with a small population size is called Micro-DE (MDE). A small population size decreases the computational complexity but also reduces the exploration ability of DE by limiting the population diversity. In this paper, we propose the idea of combining ensemble mutation scheme selection and opposition-based learning concepts to enhance the diversity of population in MDE at mutation and selection stages. The proposed algorithm enhances the diversity of population by generating a random mutation scale factor per individual and per dimension, randomly assigning a mutation scheme to each individual in each generation, and diversifying individuals selection using opposition-based learning. This approach is easy to implement and does not require the setting of mutation scheme selection and mutation scale factor. Experimental results are conducted for a variety of objective functions with low and high dimensionality on the CEC Black-Box Optimization Benchmarking 2015 (CEC-BBOB 2015). The results show superior performance of the proposed algorithm compared to the other micro-DE algorithms. | Small-size cooperative sub-populations are capable of finding sub-components of the original problem concurrently @cite_34 . Combining these sub-components through their cooperation constructs a complete solution to the problem @cite_34 . An MDE version of this method is proposed to evolve an indirect representation of the bin packing problem @cite_17 . The idea of a self-adaptive population size has been used to test absolute and relative encoding methods for DE @cite_6 . The reported simulation results on 20 benchmark problems indicate that the self-adaptive population size with relative encoding outperforms the absolute encoding method and the DE algorithm @cite_6 . | {
"cite_N": [
"@cite_34",
"@cite_6",
"@cite_17"
],
"mid": [
"2588763176",
"2070205798",
"2275734807"
],
"abstract": [
"Differential evolution (DE) is one of the highest performance, easy to implement, and low complexity population-based optimization algorithms. Population initialization plays an important role in finding better candidate solution and faster convergence of the population to a global optimum. It has been shown in the literature that large population sizes for large-scale problems necessarily does not show a statistically significant performance improvement over medium size population. In this paper, we emphasise on importance of population initialization and discuss effects of using centroid-based population initialization in DE, with focus on micro-DE (i.e. DE with small population size). Experimental results for high and low dimensional problems with small and standard population sizes on CEC Black-Box Optimization Benchmark problems 2015 (CEC-BBOB 2015) show centroid initialization can increase performance of DE algorithm, compared to the conventional initialization method.",
"The study and research of evolutionary algorithms (EAs) is getting great attention in recent years. Although EAs have earned extensive acceptance through numerous successful applications in many fields, the problem of finding the best combination of evolutionary parameters especially for population size that need the manual settings by the user is still unresolved. In this paper, our system is focusing on differential evolution (DE) and its control parameters. To overcome the problem, two new systems were carried out for the self-adaptive population size to test two different methodologies (absolute encoding and relative encoding) in DE and compared their performances against the original DE. Fifty runs are conducted for every 20 well-known benchmark problems to test on every proposed algorithm in this paper to achieve the function optimization without explicit parameter tuning in DE. The empirical testing results showed that DE with self-adaptive population size using relative encoding performed well in terms of the average performance as well as stability compared to absolute encoding version as well as the original DE.",
"The development of low-level heuristics for solving instances of a problem is related to the knowledge of an expert. He needs to analyze several components from the problem instance and to think out an specialized heuristic for solving the instance. However if any inherent component to the instance gets changes, then the designed heuristic may not work as it used to do it. In this paper it is presented a novel approach to generated low-level heuristics; the proposed approach implements micro-Differential Evolution for evolving an indirect representation of the Bin Packing Problem. It was used the Hard28 instance, which is a well-known and referenced Bin Packing Problem instance. The heuristics obtained by the proposed approach were compared against the well know First-Fit heuristic, the results of packing that were gotten for each heuristic were analized by the statistic non-parametric test known as Wilcoxon Signed Rank test."
]
} |
1709.06871 | 2757454001 | This paper presents an evaluation of deep neural networks for recognition of digits entered by users on a smartphone touchscreen. A new large dataset of Arabic numerals was collected for training and evaluation of the network. The dataset consists of spatial and temporal touch data recorded for 80 digits entered by 260 users. Two neural network models were investigated. The first model was a 2D convolutional neural (ConvNet) network applied to bitmaps of the glyphs created by interpolation of the sensed screen touches and its topology is similar to that of previously published models for offline handwriting recognition from scanned images. The second model used a 1D ConvNet architecture but was applied to the sequence of polar vectors connecting the touch points. The models were found to provide accuracies of 98.50% and 95.86%, respectively. The second model was much simpler, providing a reduction in the number of parameters from 1,663,370 to 287,690. The dataset has been made available to the community as an open source resource. | Most online character recognition research to date has been orchestrated with a pen as the writing implement @cite_3 @cite_2 @cite_4 @cite_8 . The first of these papers is the most similar to the work presented here. However, there are some notable differences. In order to reach the 96 This paper investigates the usefulness of neural network modeling for gesture detection and classification and proposes to combine an extracted scale-invariant feature vector with a deep neural network employing time convolutional layers and recurrent neural layers. Various combinations of convolutional networks, fully connected networks, deep networks and RNNs were considered. It was found that combining time convolutional layers with the LSTM variant of RNN @cite_5 has proven to provide high accuracy while also providing a number of benefits which are not obtained by purely convolutional networks. | {
"cite_N": [
"@cite_4",
"@cite_8",
"@cite_3",
"@cite_2",
"@cite_5"
],
"mid": [
"2165989034",
"2101117375",
"2057619148",
"2099070536",
""
],
"abstract": [
"The paper presents a feature extraction technique for online handwriting recognition. The technique incorporates many characteristics of handwritten characters based on structural, directional and zoning information and combines them to create a single global feature vector. The technique is independent to character size and it can extract features from the raw data without resizing. Using the proposed technique and a neural network based classifier, many experiments were conducted on UNIPEN benchmark database. The recognition rates are 98.2% for digits, 91.2% for uppercase and 91.4% for lowercase.",
"The selection of valuable features is crucial in pattern recognition. In this paper we deal with the issue that part of features originate from directional instead of common linear data. Both for directional and linear data a theory for a statistical modeling exists. However, none of these theories gives an integrated solution to problems, where linear and directional variables are to be combined in a single, multivariate probability density function. We describe a general approach for a unified statistical modeling, given the constraint that variances of the circular variables are small. The method is practically evaluated in the context of our online handwriting recognition system frog on hand and the so-called tangent slope angle feature. Recognition results are compared with two alternative modeling approaches. The proposed solution gives significant improvements in recognition accuracy, computational speed and memory requirements.",
"Abstract We describe a system which can recognize digits and uppercase letters handprinted on a touch terminal. A character is input as a sequence of [ x(t), y(t) ] coordinates, subjected to very simple preprocessing, and then classified by a trainable neural network. The classifier is analogous to “time delay neural networks” previously applied to speech recognition. The network was trained on a set of 12,000 digits and uppercase letters, from approximately 250 different writers, and tested on 2500 such characters from other writers. Classification accuracy exceeded 96% on the test examples.",
"We introduce a new approach for on-line recognition of handwritten words written in unconstrained mixed style. The preprocessor performs a word-level normalization by fitting a model of the word structure using the EM algorithm. Words are then coded into low resolution \"annotated images\" where each pixel contains information about trajectory direction and curvature. The recognizer is a convolution network that can be spatially replicated. From the network output, a hidden Markov model produces word scores. The entire system is globally trained to minimize word-level errors.",
""
]
} |
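The polar-vector input used by the second model in the abstract above can be sketched as the per-segment length and direction of the touch trajectory. A minimal NumPy sketch follows, assuming the touch points arrive as (x, y) rows; the function name is an illustrative assumption, not code from the paper.

```python
import numpy as np

def polar_vectors(points):
    """Turn an (n, 2) array of touch points into the (n-1, 2) sequence of
    polar vectors (segment length, segment angle) connecting them."""
    deltas = np.diff(points, axis=0)               # consecutive displacements
    lengths = np.hypot(deltas[:, 0], deltas[:, 1]) # Euclidean segment lengths
    angles = np.arctan2(deltas[:, 1], deltas[:, 0])
    return np.stack([lengths, angles], axis=1)

pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
print(polar_vectors(pts).shape)  # (2, 2)
```

Feeding such a sequence to a 1D ConvNet sidesteps the bitmap rasterisation the 2D model needs, which is consistent with the large parameter reduction the abstract reports.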
1709.06758 | 2756868207 | Abstract Background Clinical trial registries can be used to monitor the production of trial evidence and signal when systematic reviews become out of date. However, this use has been limited to date due to the extensive manual review required to search for and screen relevant trial registrations. Our aim was to evaluate a new method that could partially automate the identification of trial registrations that may be relevant for systematic review updates. Materials and methods We identified 179 systematic reviews of drug interventions for type 2 diabetes, which included 537 clinical trials that had registrations in ClinicalTrials.gov. Text from the trial registrations were used as features directly, or transformed using Latent Dirichlet Allocation (LDA) or Principal Component Analysis (PCA). We tested a novel matrix factorisation approach that uses a shared latent space to learn how to rank relevant trial registrations for each systematic review, comparing the performance to document similarity to rank relevant trial registrations. The two approaches were tested on a holdout set of the newest trials from the set of type 2 diabetes systematic reviews and an unseen set of 141 clinical trial registrations from 17 updated systematic reviews published in the Cochrane Database of Systematic Reviews. The performance was measured by the number of relevant registrations found after examining 100 candidates (recall@100) and the median rank of relevant registrations in the ranked candidate lists. Results The matrix factorisation approach outperformed the document similarity approach with a median rank of 59 (of 128,392 candidate registrations in ClinicalTrials.gov) and recall@100 of 60.9% using LDA feature representation, compared to a median rank of 138 and recall@100 of 42.8% in the document similarity baseline. 
In the second set of systematic reviews and their updates, the highest performing approach used document similarity and gave a median rank of 67 (recall@100 of 62.9%). Conclusions A shared latent space matrix factorisation method was useful for ranking trial registrations to reduce the manual workload associated with finding relevant trials for systematic review updates. The results suggest that the approach could be used as part of a semi-automated pipeline for monitoring potentially new evidence for inclusion in a review update. | Past work on the use of matrix factorisation for collaborative filtering focused on increasing prediction accuracy by including neighbourhood information @cite_24 . Later, @cite_6 proposed SVD++, a matrix factorisation approach that unified neighbourhood and latent factors. @cite_41 proposed TrustSVD, an extension of SVD++ that incorporates social trust information to help mitigate data sparsity and the cold start problem. TrustSVD includes the factorisation of two matrices that share the same latent space: a matrix of user-item preference scores and another matrix that defines trust information among users. | {
"cite_N": [
"@cite_24",
"@cite_41",
"@cite_6"
],
"mid": [
"1992270714",
"2244405900",
"1994389483"
],
"abstract": [
"The collaborative filtering approach to recommender systems predicts user preferences for products or services by learning past user-item relationships. In this work, we propose novel algorithms for predicting user ratings of items by integrating complementary models that focus on patterns at different scales. At a local scale, we use a neighborhood-based technique that infers ratings from observed ratings by similar users or of similar items. Unlike previous local approaches, our method is based on a formal model that accounts for interactions within the neighborhood, leading to improved estimation quality. At a higher, regional, scale, we use SVD-like matrix factorization for recovering the major structural patterns in the user-item rating matrix. Unlike previous approaches that require imputations in order to fill in the unknown matrix entries, our new iterative algorithm avoids imputation. Because the models involve estimation of millions, or even billions, of parameters, shrinkage of estimated values to account for sampling variability proves crucial to prevent overfitting. Both the local and the regional approaches, and in particular their combination through a unifying model, compare favorably with other approaches and deliver substantially better results than the commercial Netflix Cinematch recommender system on a large publicly available data set.",
"Collaborative filtering suffers from the problems of data sparsity and cold start, which dramatically degrade recommendation performance. To help resolve these issues, we propose TrustSVD, a trust-based matrix factorization technique. By analyzing the social trust data from four real-world data sets, we conclude that not only the explicit but also the implicit influence of both ratings and trust should be taken into consideration in a recommendation model. Hence, we build on top of a state-of-the-art recommendation algorithm SVD++ which inherently involves the explicit and implicit influence of rated items, by further incorporating both the explicit and implicit influence of trusted users on the prediction of items for an active user. To our knowledge, the work reported is the first to extend SVD++ with social trust information. Experimental results on the four data sets demonstrate that our approach TrustSVD achieves better accuracy than ten other counterparts, and can better handle the concerned issues.",
"Recommender systems provide users with personalized suggestions for products or services. These systems often rely on Collaborating Filtering (CF), where past transactions are analyzed in order to establish connections between users and products. The two more successful approaches to CF are latent factor models, which directly profile both users and products, and neighborhood models, which analyze similarities between products or users. In this work we introduce some innovations to both approaches. The factor and neighborhood models can now be smoothly merged, thereby building a more accurate combined model. Further accuracy improvements are achieved by extending the models to exploit both explicit and implicit feedback by the users. The methods are tested on the Netflix data. Results are better than those previously published on that dataset. In addition, we suggest a new evaluation metric, which highlights the differences among methods, based on their performance at a top-K recommendation task."
]
} |
1709.06652 | 2756856381 | This paper addresses the problem of formation control and tracking of a desired trajectory by Euler-Lagrange multi-agent systems. It is inspired by recent results and adopts an event-triggered control strategy to reduce the number of communications between agents. For that purpose, to evaluate its control input, each agent maintains estimators of the states of the other agents. Communication is triggered when the discrepancy between the actual state of an agent and the corresponding estimate reaches some threshold. The impact of additive state perturbations on the formation control is studied. A condition for the convergence of the multi-agent system to a stable formation is established. Simulations show the effectiveness of the proposed approach. | Most event-triggered approaches have been applied in the context of consensus in MAS @cite_31 @cite_38 @cite_30 . This paper focuses on distributed formation control, which has been considered in @cite_18 @cite_13 @cite_36 . Formation control consists in driving and maintaining all agents of a MAS to some reference, possibly time-varying, configuration defining, @math , their relative positions, orientations, and speeds. Various approaches have been considered, such as behavior-based flocking @cite_17 @cite_39 @cite_32 @cite_40 @cite_44 , or formation tracking @cite_29 @cite_6 @cite_43 @cite_20 @cite_10 . | {
"cite_N": [
"@cite_30",
"@cite_38",
"@cite_18",
"@cite_36",
"@cite_10",
"@cite_29",
"@cite_32",
"@cite_6",
"@cite_39",
"@cite_44",
"@cite_43",
"@cite_40",
"@cite_31",
"@cite_13",
"@cite_20",
"@cite_17"
],
"mid": [
"1527116633",
"2166132612",
"1600762532",
"1580977359",
"2040900884",
"2156998846",
"2105850748",
"2014776084",
"",
"",
"",
"",
"2167183308",
"2114755150",
"2142714685",
"2112613443"
],
"abstract": [
"An event-triggered control technique for consensus of multi-agent systems with general linear dynamics is presented. This paper extends previous work to consider agents that are connected using directed graphs. Additionally, the approach shown here provides asymptotic consensus with guaranteed positive inter-event time intervals. This event-triggered control method is also used in the case where communication delays are present. For the communication delay case we also show that the agents achieve consensus asymptotically and that, for every agent, the time intervals between consecutive transmissions is lower-bounded by a positive constant.",
"A novel control strategy for multi-agent coordination with event-based broadcasting is presented. In particular, each agent decides itself when to transmit its current state to its neighbors and the local control laws are based on these sampled state measurements. Three scenarios are analyzed: Networks of single-integrator agents with and without communication delays, and networks of double-integrator agents. The novel event-based scheduling strategy bounds each agent's measurement error by a time-dependent threshold. For each scenario it is shown that the proposed control strategy guarantees either asymptotic convergence to average consensus or convergence to a ball centered at the average consensus. Moreover, it is shown that the inter-event intervals are lower-bounded by a positive constant. Numerical simulations show the effectiveness of the novel event-based control strategy and how it compares to time-scheduled control.",
"Existing results on distance-based rigid formation stabilization commonly require continuous measurements and control update. In this paper, we consider an event-triggered scheme for distance-based formation control problem. We prove the local exponential stability of this event-driven system for stabilizing formations which are infinitesimally and minimally rigid. Furthermore, we prove that Zeno-behavior does not exist and obtain a strict positive lower bound in the inter-event time. Simulations are also provided to verify the proposed controller.",
"This paper is focused on the formation control of multi-agent systems (MAS). A new control law based on event-driven strategies is presented. The agents with any initial position can reach the desired formation and target under such control law. Formability of MAS depends on several key factors: the agents' dynamic structures, the connectivity topology, the properties of the desired formation and the admissible control set. The control actuation updates considered in this paper are event-driven, depending on the ratio of a certain measurement error with respect to the norm of a function of the state.",
"Built on the combined strength of decentralized control and the recently introduced virtual structure approach, a decentralized formation scheme for spacecraft formation flying is presented. Following a decentralized coordination architecture via the virtual structure approach, decentralized formation control strategies are introduced, which are appropriate when a large number of spacecraft are involved and or stringent interspacecraft communication limitations are exerted. The effectiveness of the proposed control strategies is demonstrated through simulation results.",
"A constructive method is presented to design cooperative controllers that force a group of N unicycle-type mobile robots with limited sensing ranges to perform desired formation tracking and guarantee no collisions between the robots. Physical dimensions and dynamics of the robots are also considered in the control design. Smooth and p times differential bump functions are introduced and incorporated into novel potential functions to design a formation tracking control system. Despite the robot limited sensing ranges, no switchings are needed to solve the collision avoidance problem. Simulations illustrate the results.",
"In this paper, we present a theoretical framework for design and analysis of distributed flocking algorithms. Two cases of flocking in free-space and presence of multiple obstacles are considered. We present three flocking algorithms: two for free-flocking and one for constrained flocking. A comprehensive analysis of the first two algorithms is provided. We demonstrate the first algorithm embodies all three rules of Reynolds. This is a formal approach to extraction of interaction rules that lead to the emergence of collective behavior. We show that the first algorithm generically leads to regular fragmentation, whereas the second and third algorithms both lead to flocking. A systematic method is provided for construction of cost functions (or collective potentials) for flocking. These collective potentials penalize deviation from a class of lattice-shape objects called α-lattices. We use a multi-species framework for construction of collective potentials that consist of flock-members, or α-agents, and virtual agents associated with α-agents called β- and γ-agents. We show that migration of flocks can be performed using a peer-to-peer network of agents, i.e., \"flocks need no leaders.\" A \"universal\" definition of flocking for particle systems with similarities to Lyapunov stability is given. Several simulation results are provided that demonstrate performing 2-D and 3-D flocking, split/rejoin maneuver, and squeezing maneuver for hundreds of agents using the proposed algorithms.",
"We designed a distributed collision-free formation flight control law in the framework of nonlinear model predictive control. Formation configuration is determined in the virtual reference point coordinate system. Obstacle avoidance is guaranteed by cost penalty, and intervehicle collision avoidance is guaranteed by cost penalty combined with a new priority strategy.",
"",
"",
"",
"",
"Event-driven strategies for multi-agent systems are motivated by the future use of embedded microprocessors with limited resources that will gather information and actuate the individual agent controller updates. The controller updates considered here are event-driven, depending on the ratio of a certain measurement error with respect to the norm of a function of the state, and are applied to a first order agreement problem. A centralized formulation is considered first and then its distributed counterpart, in which agents require knowledge only of their neighbors' states for the controller implementation. The results are then extended to a self-triggered setup, where each agent computes its next update time at the previous one, without having to keep track of the state error that triggers the actuation between two consecutive update instants. The results are illustrated through simulation examples.",
"This paper discusses generalized controllers for rigid formation shape stabilization. We provide unified analysis to show convergence using different controllers reported in the literature, and further prove an exponential stability of the formation system when using the general form of shape controllers. We also show that different agents can use different controllers for controlling different distances to achieve a desired rigid formation, which enables the implementation of heterogeneous agents in practice for formation shape control. We further propose an event-triggered rigid formation control scheme based on the generalized controllers. The triggering condition, event function and convergence analysis are discussed.",
"In this note, we study a distributed coordinated tracking problem for multiple networked Euler-Lagrange systems. The objective is for a team of followers modeled by full-actuated Euler-Lagrange equations to track a dynamic leader whose vector of generalized coordinates is time varying under the constraints that the leader is a neighbor of only a subset of the followers and the followers have only local interaction. We consider two cases: i) The leader has a constant vector of generalized coordinate derivatives, and ii) The leader has a varying vector of generalized coordinate derivatives. In the first case, we propose a distributed continuous estimator and an adaptive control law to account for parametric uncertainties. In the second case, we propose a model-independent sliding mode control algorithm. Simulation results on multiple networked two-link revolute joint arms are provided to show the effectiveness of the proposed control algorithms.",
""
]
} |
1709.06652 | 2756856381 | This paper addresses the problem of formation control and tracking of a desired trajectory by Euler-Lagrange multi-agent systems. It is inspired by recent results and adopts an event-triggered control strategy to reduce the number of communications between agents. For that purpose, to evaluate its control input, each agent maintains estimators of the states of the other agents. Communication is triggered when the discrepancy between the actual state of an agent and the corresponding estimate reaches some threshold. The impact of additive state perturbations on the formation control is studied. A condition for the convergence of the multi-agent system to a stable formation is established. Simulations show the effectiveness of the proposed approach. | Different formation-tracking methods have been considered. In leader-follower techniques @cite_29 @cite_6 @cite_43 @cite_20 , based on mission goals, a trajectory is designed only for some leader agent. The other (follower) agents aim at tracking the leader as well as maintaining some target formation defined with respect to the leader. A virtual leader has been considered in @cite_0 @cite_6 @cite_25 to gain robustness to leader failure. This requires good synchronization of the virtual leader's state among the agents. Virtual structures have been introduced in @cite_10 @cite_33 , where the agent control is designed to satisfy constraints between neighbours. Such approaches also address the problem of leader failure. In distance-based control, the constraints are distances between agents. In displacement-based control, relative coordinate or speed vectors between agents are imposed. In tensegrity structures @cite_4 @cite_11 , additional flexibility in the structure is obtained by introducing attraction and repulsion terms between agents, as formalized by @cite_3 . In addition to constraints on the structure of the MAS, @cite_16 imposes a reference trajectory on each agent.
In most of these works, permanent communication between agents is assumed. | {
"cite_N": [
"@cite_4",
"@cite_33",
"@cite_29",
"@cite_6",
"@cite_3",
"@cite_0",
"@cite_43",
"@cite_16",
"@cite_10",
"@cite_25",
"@cite_20",
"@cite_11"
],
"mid": [
"1571564185",
"2020123713",
"2156998846",
"2014776084",
"",
"2066687901",
"",
"2144412323",
"2040900884",
"",
"2142714685",
"1631079150"
],
"abstract": [
"This paper investigates the distributed formation control problem for a group of mobile Euler-Lagrange agents to achieve global stabilization by using virtual tensegrity structures. Firstly, a systematic approach to design tensegrity frameworks is elaborately explained to confine the interaction relationships between agents, which allows us to obtain globally rigid frameworks. Then, based on virtual tensegrity frameworks, distributed control strategies are developed such that the mobile agents converge to the desired formation globally. The theoretical analysis is further validated through simulations.",
"In the study of task coordination for multiagent systems, formation control has received considerable attention due to its potential applications in civil and/or military practices. Fundamentally, the formation control problem for multiagent systems can be formulated as making a group of agents follow the desired trajectory while maintaining certain prescribed geometric distances among agents. In this paper, we consider the formation control problem for mobile robots with nonlinear dynamics and moving in a 2D environment. To address the inherent challenges due to nonlinear system dynamics and agents' limited sensing/communication capabilities, we instill an idea of integrating the recently developed distributed consensus theory into the standard feedback control, and propose a new time-varying cooperative control strategy to solve the formation control problem for multiagent systems. In particular, the proposed design only requires the local and intermittent information exchange among agents to achieve the formation control objective. More importantly, we remove the restriction on the need of the desired trajectory for every agent, and instead design a distributed observer for obtaining the desired trajectory in order to establish the formation in the design. The overall distributed formation control system stability is rigorously proved by using a contraction mapping method under the condition that the sensing/communication network among robots is sequentially complete. Simulation is provided to validate the effectiveness of the proposed design.",
"A constructive method is presented to design cooperative controllers that force a group of N unicycle-type mobile robots with limited sensing ranges to perform desired formation tracking and guarantee no collisions between the robots. Physical dimensions and dynamics of the robots are also considered in the control design. Smooth and p times differential bump functions are introduced and incorporated into novel potential functions to design a formation tracking control system. Despite the robot limited sensing ranges, no switchings are needed to solve the collision avoidance problem. Simulations illustrate the results.",
"We designed a distributed collision-free formation flight control law in the framework of nonlinear model predictive control. Formation configuration is determined in the virtual reference point coordinate system. Obstacle avoidance is guaranteed by cost penalty, and intervehicle collision avoidance is guaranteed by cost penalty combined with a new priority strategy.",
"",
"A collision-free formation flight controller for unmanned aerial vehicle (UAV) is designed in the framework of nonlinear model predictive control (MPC). It can consider control input saturation and state constraints explicitly. Formation configuration is determined based on virtual reference point method, which has no error propagation in the formation. The formation flight controller is designed in a distributed way. Based on the tracking error, the objective function for each UAV is designed in the nonlinear MPC framework. A new type of cost function, based on the UAV's velocity orientation and relative distance between UAV and obstacle, is added to the objective function to guarantee obstacle avoidance. Inter-vehicle collision avoidance is also ensured by cost function combined with a priority strategy. The nonlinear optimization problem is solved by the filter-SQP method, which has better convergence and numerical properties. Simulation results are provided to evaluate the performance of the designed collision-free formation flight controller.",
"",
"In this paper, we present a synchronization approach to trajectory tracking of multiple mobile robots while maintaining time-varying formations. The main idea is to control each robot to track its desired trajectory while synchronizing its motion with those of other robots to keep relative kinematics relationships, as required by the formation. First, we pose the formation-control problem as a synchronization control problem and identify the synchronization control goal according to the formation requirement. The formation error is measured by the position synchronization error, which is defined based on the established robot network. Second, we develop a synchronous controller for each robot's translation to guarantee that both position and synchronization errors approach zero asymptotically. The rotary controller is also designed to ensure that the robot is always oriented toward its desired position. Both translational and rotary controls are supported by a centralized high-level planner for task monitoring and robot global localization. Finally, we perform simulations and experiments to demonstrate the effectiveness of the proposed synchronization control approach in the formation control tasks.",
"Built on the combined strength of decentralized control and the recently introduced virtual structure approach, a decentralized formation scheme for spacecraft formation flying is presented. Following a decentralized coordination architecture via the virtual structure approach, decentralized formation control strategies are introduced, which are appropriate when a large number of spacecraft are involved and or stringent interspacecraft communication limitations are exerted. The effectiveness of the proposed control strategies is demonstrated through simulation results.",
"",
"In this note, we study a distributed coordinated tracking problem for multiple networked Euler-Lagrange systems. The objective is for a team of followers modeled by full-actuated Euler-Lagrange equations to track a dynamic leader whose vector of generalized coordinates is time varying under the constraints that the leader is a neighbor of only a subset of the followers and the followers have only local interaction. We consider two cases: i) The leader has a constant vector of generalized coordinate derivatives, and ii) The leader has a varying vector of generalized coordinate derivatives. In the first case, we propose a distributed continuous estimator and an adaptive control law to account for parametric uncertainties. In the second case, we propose a model-independent sliding mode control algorithm. Simulation results on multiple networked two-link revolute joint arms are provided to show the effectiveness of the proposed control algorithms.",
"Using dynamic models of tensegrity structures, we derive provable, distributed control laws for stabilizing and changing the shape of a formation of vehicles in the plane. Tensegrity models define the desired, controlled, multi-vehicle system dynamics, where each node in the tensegrity structure maps to a vehicle and each interconnecting strut or cable in the structure maps to a virtual interconnection between vehicles. Our method provides a smooth map from any desired planar formation shape to a planar tensegrity structure. The stabilizing vehicle formation shape control laws are then given by the forces between nodes in the corresponding tensegrity model. The smooth map makes possible provably well behaved changes of formation shape over a prescribed time interval. A designed path in shape space is mapped to a path in the parametrized space of tensegrity structures and the vehicle formation tracks this path with forces derived from the time-varying tensegrity model. By means of examples, we illustrate the influence of design parameters on performance measures."
]
} |
1709.06652 | 2756856381 | This paper addresses the problem of formation control and tracking of a desired trajectory by Euler-Lagrange multi-agent systems. It is inspired by recent results and adopts an event-triggered control strategy to reduce the number of communications between agents. For that purpose, to evaluate its control input, each agent maintains estimators of the states of the other agents. Communication is triggered when the discrepancy between the actual state of an agent and the corresponding estimate reaches some threshold. The impact of additive state perturbations on the formation control is studied. A condition for the convergence of the multi-agent system to a stable formation is established. Simulations show the effectiveness of the proposed approach. | Some recent works combine event-triggered approaches with distance-based or displacement-based formation control @cite_18 @cite_13 @cite_36 . In these works, the dynamics of the agents are described by a simple integrator, with the control input considered constant between two communications. The proposed CTCs consider different threshold formulations and require each agent to have access to the state of all other agents. A constant threshold is considered in @cite_13 . A time-varying threshold is introduced in @cite_18 @cite_36 . The CTC then depends on the relative positions between agents and the relative discrepancy between actual and estimated agent states. These CTCs reduce the number of triggered communications when the system converges to the desired formation. A minimal time between two communications, called the inter-event time, is also defined. Finally, in all these works, no perturbations are considered. | {
"cite_N": [
"@cite_36",
"@cite_18",
"@cite_13"
],
"mid": [
"1580977359",
"1600762532",
"2114755150"
],
"abstract": [
"This paper is focused on the formation control of multi-agent systems (MAS). A new control law based on event-driven strategies is presented. The agents with any initial position can reach the desired formation and target under such control law. Formability of MAS depends on several key factors: the agents' dynamic structures, the connectivity topology, the properties of the desired formation and the admissible control set. The control actuation updates considered in this paper are event-driven, depending on the ratio of a certain measurement error with respect to the norm of a function of the state.",
"Existing results on distance-based rigid formation stabilization commonly require continuous measurements and control update. In this paper, we consider an event-triggered scheme for distance-based formation control problem. We prove the local exponential stability of this event-driven system for stabilizing formations which are infinitesimally and minimally rigid. Furthermore, we prove that Zeno-behavior does not exist and obtain a strict positive lower bound in the inter-event time. Simulations are also provided to verify the proposed controller.",
"This paper discusses generalized controllers for rigid formation shape stabilization. We provide unified analysis to show convergence using different controllers reported in the literature, and further prove an exponential stability of the formation system when using the general form of shape controllers. We also show that different agents can use different controllers for controlling different distances to achieve a desired rigid formation, which enables the implementation of heterogeneous agents in practice for formation shape control. We further propose an event-triggered rigid formation control scheme based on the generalized controllers. The triggering condition, event function and convergence analysis are discussed."
]
} |
1709.06652 | 2756856381 | This paper addresses the problem of formation control and tracking of a desired trajectory by Euler-Lagrange multi-agent systems. It is inspired by recent results and adopts an event-triggered control strategy to reduce the number of communications between agents. For that purpose, to evaluate its control input, each agent maintains estimators of the states of the other agents. Communication is triggered when the discrepancy between the actual state of an agent and the corresponding estimate reaches some threshold. The impact of additive state perturbations on the formation control is studied. A condition for the convergence of the multi-agent system to a stable formation is established. Simulations show the effectiveness of the proposed approach. | LBC techniques have been introduced in @cite_7 @cite_42 @cite_14 @cite_28 to reduce the number of communications in trajectory tracking problems. MAS with decoupled nonlinear agent dynamics are considered in @cite_7 @cite_14 . Agents have to follow parametrized paths, designed in a centralized way. The CTCs introduced by LBC lead all agents to follow the paths in a synchronized way so as to set up a desired formation. Communication delays, as well as packet losses, are considered. Nevertheless, while input-to-state stability conditions are established, the absence of Zeno behavior is not analyzed. | {
"cite_N": [
"@cite_28",
"@cite_14",
"@cite_42",
"@cite_7"
],
"mid": [
"2103013786",
"2015084328",
"",
"2085273584"
],
"abstract": [
"Describes a new framework for distributed control systems in which estimators are used at each node to estimate the values of the outputs at the other nodes. The estimated values are then used to compute the control algorithms at each node. When the estimated value deviates from the true value by more than a pre-specified tolerance, the actual value is broadcast to the rest of the system; all of the estimators are then updated to the current value. By using the estimated values instead of true value at every node, a significant saving in the required bandwidth is achieved, allowing large-scale distributed control systems to be implemented effectively. The stability, performance, and expected communication frequency of the reduced communication system are analyzed in detail. Simulation and experimental results validating the effectiveness and communication savings of the framework are also presented.",
"We address the problem of designing decentralized feedback laws to force the outputs of decoupled nonlinear systems (agents) to follow geometric paths while holding a desired formation pattern. To this effect we propose a general framework that takes into account i) the topology of the communication links among the agents, ii) the fact that communications do not occur in a continuous manner, and iii) the cost of exchanging information. We provide conditions under which the resulting overall closed loop system is input-to-state stable and apply the methodology for two cases: agents with nonlinear dynamics in strict feedback form and a class of underactuated vehicles. Furthermore, we address explicitly the case where the communications among the agents occur with non-homogenous, possibly varying delays. A coordinated path-following algorithm is derived for multiple underactuated autonomous underwater vehicles. Simulation results are presented and discussed.",
"",
"We introduce an event driven communication logic for decentralized control of a network of robotic vehicles (agents). The strategy proposed is robust to packet losses and drives the vehicles to predefined paths while holding a desired geometric formation pattern. To this effect, the paper extends an existing cooperative path following framework to consider the practical case where communications among the vehicles occur at discrete instants, instead of continuously. The introduced communication logic takes into account the topology of the communication network, the fact that communications are discrete, and the cost of exchanging information. We also address explicitly communication losses and bounded delays. Conditions are derived under which the overall closed loop system is input-to-state practically stable. The communication logic is applied to a cooperative path-following control system of multiple underactuated autonomous marine robots. Simulation results are presented and discussed."
]
} |
1709.06916 | 2759169703 | Popular User-Review Social Networks (URSNs)---such as Dianping, Yelp, and Amazon---are often the targets of reputation attacks in which fake reviews are posted in order to boost or diminish the ratings of listed products and services. These attacks often emanate from a collection of accounts, called Sybils, which are collectively managed by a group of real users. A new advanced scheme, which we term elite Sybil attacks, recruits organically highly-rated accounts to generate seemingly-trustworthy and realistic-looking reviews. These elite Sybil accounts taken together form a large-scale sparsely-knit Sybil network for which existing Sybil fake-review defense systems are unlikely to succeed. In this paper, we conduct the first study to define, characterize, and detect elite Sybil attacks. We show that contemporary elite Sybil attacks have a hybrid architecture, with the first tier recruiting elite Sybil workers and distributing tasks by Sybil organizers, and with the second tier posting fake reviews for profit by elite Sybil workers. We design ElsieDet, a three-stage Sybil detection scheme, which first separates out suspicious groups of users, then identifies the campaign windows, and finally identifies elite Sybil users participating in the campaigns. We perform a large-scale empirical study on ten million reviews from Dianping, by far the most popular URSN service in China. Our results show that reviews from elite Sybil users are more spread out temporally, craft more convincing reviews, and have higher filter bypass rates. We also measure the impact of Sybil campaigns on various industries (such as cinemas, hotels, restaurants) as well as chain stores, and demonstrate that monitoring elite Sybil users over time can provide valuable early alerts against Sybil campaigns. 
| The advantage of behavioral patterns is that they can be easily encoded as features and used with machine learning techniques to learn the signature of user profiles and user-level activities. Different classes of features are commonly employed to capture orthogonal dimensions of users' behaviors @cite_30 @cite_9 @cite_20 @cite_35 @cite_55 @cite_22 . Other work @cite_41 @cite_15 @cite_36 considers the associated content information, such as review context, wall posts, hashtags, and URLs, to filter Sybil users. Specifically, the Facebook immune system @cite_9 detects Sybil users based on features characterized from user profiles and activities. COMPA @cite_20 is designed to uncover compromised accounts via sudden change alerts according to the behavioral patterns of users. In addition to user profiles, Song @cite_35 proposed a target-based detection approach on Twitter based on features of retweets. However, feature-based approaches are relatively easy to circumvent by adversarial attacks @cite_13 @cite_53 @cite_46 @cite_1 . Further work will also be needed to detect sophisticated strategies exhibiting a mixture of realistic and Sybil user features. | {
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_22",
"@cite_41",
"@cite_36",
"@cite_9",
"@cite_55",
"@cite_53",
"@cite_46",
"@cite_1",
"@cite_15",
"@cite_13",
"@cite_20"
],
"mid": [
"",
"",
"2192783609",
"2098395374",
"2163764145",
"",
"1781642226",
"",
"",
"2097860933",
"2108214308",
"2095195675",
""
],
"abstract": [
"",
"",
"Although opinion spam (or fake review) detection has attracted significant research attention in recent years, the problem is far from solved. One key reason is that there is no large-scale ground truth labeled dataset available for model building. Some review hosting sites such as Yelp.com and Dianping.com have built fake review filtering systems to ensure the quality of their reviews, but their algorithms are trade secrets. Working with Dianping, we present the first large-scale analysis of restaurant reviews filtered by Dianping's fake review filtering system. Along with the analysis, we also propose some novel temporal and spatial features for supervised opinion spam detection. Our results show that these features significantly outperform existing state-of-art features.",
"In this study, we examine the abuse of online social networks at the hands of spammers through the lens of the tools, techniques, and support infrastructure they rely upon. To perform our analysis, we identify over 1.1 million accounts suspended by Twitter for disruptive activities over the course of seven months. In the process, we collect a dataset of 1.8 billion tweets, 80 million of which belong to spam accounts. We use our dataset to characterize the behavior and lifetime of spam accounts, the campaigns they execute, and the wide-spread abuse of legitimate web services such as URL shorteners and free web hosting. We also identify an emerging marketplace of illegitimate programs operated by spammers that include Twitter account sellers, ad-based URL shorteners, and spam affiliate programs that help enable underground market diversification. Our results show that 77% of spam accounts identified by Twitter are suspended within one day of their first tweet. Because of these pressures, less than 9% of accounts form social relationships with regular Twitter users. Instead, 17% of accounts rely on hijacking trends, while 52% of accounts use unsolicited mentions to reach an audience. In spite of daily account attrition, we show how five spam campaigns controlling 145 thousand accounts combined are able to persist for months at a time, with each campaign enacting a unique spamming strategy. Surprisingly, three of these campaigns send spam directing visitors to reputable store fronts, blurring the line regarding what constitutes spam on social networks.",
"On the heels of the widespread adoption of web services such as social networks and URL shorteners, scams, phishing, and malware have become regular threats. Despite extensive research, email-based spam filtering techniques generally fall short for protecting other web services. To better address this need, we present Monarch, a real-time system that crawls URLs as they are submitted to web services and determines whether the URLs direct to spam. We evaluate the viability of Monarch and the fundamental challenges that arise due to the diversity of web service spam. We show that Monarch can provide accurate, real-time protection, but that the underlying characteristics of spam do not generalize across web services. In particular, we find that spam targeting email qualitatively differs in significant ways from spam campaigns targeting Twitter. We explore the distinctions between email and Twitter spam, including the abuse of public web hosting and redirector services. Finally, we demonstrate Monarch's scalability, showing our system could protect a service such as Twitter -- which needs to process 15 million URLs per day -- for a bit under $800 per day.",
"",
"Fake identities and Sybil accounts are pervasive in today's online communities. They are responsible for a growing number of threats, including fake product reviews, malware and spam on social networks, and astroturf political campaigns. Unfortunately, studies show that existing tools such as CAPTCHAs and graph-based Sybil detectors have not proven to be effective defenses. In this paper, we describe our work on building a practical system for detecting fake identities using server-side clickstream models. We develop a detection approach that groups \"similar\" user clickstreams into behavioral clusters, by partitioning a similarity graph that captures distances between clickstream sequences. We validate our clickstream models using ground-truth traces of 16,000 real and Sybil users from Renren, a large Chinese social network with 220M users. We propose a practical detection system based on these models, and show that it provides very high detection accuracy on our clickstream traces. Finally, we worked with collaborators at Renren and LinkedIn to test our prototype on their server-side data. Following positive results, both companies have expressed strong interest in further experimentation and possible internal deployment.",
"",
"",
"The standard assumption of identically distributed training and test data is violated when the test data are generated in response to the presence of a predictive model. This becomes apparent, for example, in the context of email spam filtering. Here, email service providers employ spam filters, and spam senders engineer campaign templates to achieve a high rate of successful deliveries despite the filters. We model the interaction between the learner and the data generator as a static game in which the cost functions of the learner and the data generator are not necessarily antagonistic. We identify conditions under which this prediction game has a unique Nash equilibrium and derive algorithms that find the equilibrial prediction model. We derive two instances, the Nash logistic regression and the Nash support vector machine, and empirically explore their properties in a case study on email spam filtering.",
"Spam filters often use the reputation of an IP address (or IP address range) to classify email senders. This approach worked well when most spam originated from senders with fixed IP addresses, but spam today is also sent from IP addresses for which blacklist maintainers have outdated or inaccurate information (or no information at all). Spam campaigns also involve many senders, reducing the amount of spam any particular IP address sends to a single domain; this method allows spammers to stay \"under the radar\". The dynamism of any particular IP address begs for blacklisting techniques that automatically adapt as the senders of spam change. This paper presents SpamTracker, a spam filtering system that uses a new technique called behavioral blacklisting to classify email senders based on their sending behavior rather than their identity. Spammers cannot evade SpamTracker merely by using \"fresh\" IP addresses because blacklisting decisions are based on sending patterns, which tend to remain more invariant. SpamTracker uses fast clustering algorithms that react quickly to changes in sending patterns. We evaluate SpamTracker's ability to classify spammers using email logs for over 115 email domains; we find that SpamTracker can correctly classify many spammers missed by current filtering techniques. Although our current datasets prevent us from confirming SpamTracker's ability to completely distinguish spammers from legitimate senders, our evaluation shows that SpamTracker can identify a significant fraction of spammers that current IP-based blacklists miss. SpamTracker's ability to identify spammers before existing blacklists suggests that it can be used in conjunction with existing techniques (e.g., as an input to greylisting). SpamTracker is inherently distributed and can be easily replicated; incorporating it into existing email filtering infrastructures requires only small modifications to mail server configurations.",
"Pattern recognition and machine learning techniques have been increasingly adopted in adversarial settings such as spam, intrusion, and malware detection, although their security against well-crafted attacks that aim to evade detection by manipulating data at test time has not yet been thoroughly assessed. While previous work has been mainly focused on devising adversary-aware classification algorithms to counter evasion attempts, only few authors have considered the impact of using reduced feature sets on classifier security against the same attacks. An interesting, preliminary result is that classifier security to evasion may be even worsened by the application of feature selection. In this paper, we provide a more detailed investigation of this aspect, shedding some light on the security properties of feature selection against evasion attacks. Inspired by previous work on adversary-aware classifiers, we propose a novel adversary-aware feature selection model that can improve classifier security against evasion attacks, by incorporating specific assumptions on the adversary's data manipulation strategy. We focus on an efficient, wrapper-based implementation of our approach, and experimentally validate its soundness on different application examples, including spam and malware detection.",
""
]
} |
1906.01507 | 2948336192 | Mapper is an unsupervised machine learning algorithm generalising the notion of clustering to obtain a geometric description of a dataset. The procedure splits the data into possibly overlapping bins which are then clustered. The output of the algorithm is a graph where nodes represent clusters and edges represent the sharing of data points between two clusters. However, several parameters must be selected before applying Mapper and the resulting graph may vary dramatically with the choice of parameters. We define an intrinsic notion of Mapper instability that measures the variability of the output as a function of the choice of parameters required to construct a Mapper output. Our results and discussion are general and apply to all Mapper-type algorithms. We derive theoretical results that provide estimates for the instability and suggest practical ways to control it. We provide also experiments to illustrate our results and in particular we demonstrate that a reliable candidate Mapper output can be identified as a local minimum of instability regarded as a function of Mapper input parameters. | Dey, Mémoli and Wang @cite_13 @cite_30 study the structure and stability of a stable signature for what they called the multiscale mapper, which uses a hierarchy of covers instead of a single one. However, it is not clear how to translate their findings to the context of the original Mapper. | {
"cite_N": [
"@cite_30",
"@cite_13"
],
"mid": [
"2951040153",
"2262482943"
],
"abstract": [
"Data analysis often concerns not only the space where data come from, but also various types of maps attached to data. In recent years, several related structures have been used to study maps on data, including Reeb spaces, mappers and multiscale mappers. The construction of these structures also relies on the so-called of a cover of the domain. In this paper, we aim to analyze the topological information encoded in these structures in order to provide better understanding of these structures and facilitate their practical usage. More specifically, we show that the one-dimensional homology of the nerve complex @math of a path-connected cover @math of a domain @math cannot be richer than that of the domain @math itself. Intuitively, this result means that no new @math -homology class can be \"created\" under a natural map from @math to the nerve complex @math . Equipping @math with a pseudometric @math , we further refine this result and characterize the classes of @math that may survive in the nerve complex using the notion of of the covering elements in @math . These fundamental results about nerve complexes then lead to an analysis of the @math -homology of Reeb spaces, mappers and multiscale mappers. The analysis of @math -homology groups unfortunately does not extend to higher dimensions. Nevertheless, by using a map-induced metric, establishing a Gromov-Hausdorff convergence result between mappers and the domain, and interleaving relevant modules, we can still analyze the persistent homology groups of (multiscale) mappers to establish a connection to Reeb spaces.",
"Summarizing topological information from datasets and maps defined on them is a central theme in topological data analysis. Mapper, a tool for such summarization, takes as input both a possibly high dimensional dataset and a map defined on the data, and produces a summary of the data by using a cover of the codomain of the map. This cover, via a pullback operation to the domain, produces a simplicial complex connecting the data points. The resulting view of the data through a cover of the codomain offers flexibility in analyzing the data. However, it offers only a view at a fixed scale at which the cover is constructed. Inspired by the concept, we explore a notion of a tower of covers which induces a tower of simplicial complexes connected by simplicial maps, which we call multiscale mapper. We study the resulting structure, and design practical algorithms to compute its persistence diagrams efficiently. Specifically, when the domain is a simplicial complex and the map is a real-valued piecewise-linear function, the algorithm can compute the exact persistence diagram only from the 1-skeleton of the input complex. For general maps, we present a combinatorial version of the algorithm that acts only on vertex sets connected by the 1-skeleton graph, and this algorithm approximates the exact persistence diagram thanks to a stability result that we show to hold."
]
} |
1906.01507 | 2948336192 | Mapper is an unsupervised machine learning algorithm generalising the notion of clustering to obtain a geometric description of a dataset. The procedure splits the data into possibly overlapping bins which are then clustered. The output of the algorithm is a graph where nodes represent clusters and edges represent the sharing of data points between two clusters. However, several parameters must be selected before applying Mapper and the resulting graph may vary dramatically with the choice of parameters. We define an intrinsic notion of Mapper instability that measures the variability of the output as a function of the choice of parameters required to construct a Mapper output. Our results and discussion are general and apply to all Mapper-type algorithms. We derive theoretical results that provide estimates for the instability and suggest practical ways to control it. We provide also experiments to illustrate our results and in particular we demonstrate that a reliable candidate Mapper output can be identified as a local minimum of instability regarded as a function of Mapper input parameters. | Jeitziner, Carrière, Rougemont, Oudot, Hess and Brisken @cite_4 develop a two-tier version of Mapper applied to clustering gene-expression data in order to identify subgroups. Their version of Mapper is tailored specifically to the type of data for which it was intended and does not require any user choices. Within its intended regime, this version of Mapper is stable. It is not clear at this stage, however, how to extend it to other contexts. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2781962302"
],
"abstract": [
"There is a growing need for unbiased clustering methods, ideally automated. We have developed a topology-based analysis tool called Two-Tier Mapper (TTMap) to detect subgroups in global gene expression datasets and identify their distinguishing features. First, TTMap discerns and adjusts for highly variable features in the control group and identifies outliers. Second, the deviation of each test sample from the control group in a high-dimensional space is computed and the test samples are clustered in a global and local network using a new topological algorithm based on Mapper. Validation of TTMap on both synthetic and biological datasets shows that it outperforms current clustering methods in sensitivity and stability; clustering is not affected by removal of samples from the control group, choice of normalization nor subselection of data. There is no user induced bias because all parameters are data-driven. Datasets can readily be combined into one analysis. TTMap reveals hitherto undetected gene expression changes in mouse mammary glands related to hormonal changes during the estrous cycle. This illustrates the ability to extract information from highly variable biological samples and its potential for personalized medicine."
]
} |
1906.01507 | 2948336192 | Mapper is an unsupervised machine learning algorithm generalising the notion of clustering to obtain a geometric description of a dataset. The procedure splits the data into possibly overlapping bins which are then clustered. The output of the algorithm is a graph where nodes represent clusters and edges represent the sharing of data points between two clusters. However, several parameters must be selected before applying Mapper and the resulting graph may vary dramatically with the choice of parameters. We define an intrinsic notion of Mapper instability that measures the variability of the output as a function of the choice of parameters required to construct a Mapper output. Our results and discussion are general and apply to all Mapper-type algorithms. We derive theoretical results that provide estimates for the instability and suggest practical ways to control it. We provide also experiments to illustrate our results and in particular we demonstrate that a reliable candidate Mapper output can be identified as a local minimum of instability regarded as a function of Mapper input parameters. | Dłotko @cite_20 sets out a procedure to generate Mapper covers by balls centred around selected points in the data. Once a cover is chosen, a sequence of multiscale covers is obtained by expanding the ball sizes. | {
"cite_N": [
"@cite_20"
],
"mid": [
"2261714614"
],
"abstract": [
"Data-driven discovery in complex neurological disorders has potential to extract meaningful syndromic knowledge from large, heterogeneous data sets to enhance potential for precision medicine. Here we describe the application of topological data analysis (TDA) for data-driven discovery in preclinical traumatic brain injury (TBI) and spinal cord injury (SCI) data sets mined from the Visualized Syndromic Information and Outcomes for Neurotrauma-SCI (VISION-SCI) repository. Through direct visualization of inter-related histopathological, functional and health outcomes, TDA detected novel patterns across the syndromic network, uncovering interactions between SCI and co-occurring TBI, as well as detrimental drug effects in unpublished multicentre preclinical drug trial data in SCI. TDA also revealed that perioperative hypertension predicted long-term recovery better than any tested drug after thoracic SCI in rats. TDA-based data-driven discovery has great potential application for decision-support for basic research and clinical problems such as outcome assessment, neurocritical care, treatment planning and rapid, precision-diagnosis."
]
} |
1906.01507 | 2948336192 | Mapper is an unsupervised machine learning algorithm generalising the notion of clustering to obtain a geometric description of a dataset. The procedure splits the data into possibly overlapping bins which are then clustered. The output of the algorithm is a graph where nodes represent clusters and edges represent the sharing of data points between two clusters. However, several parameters must be selected before applying Mapper and the resulting graph may vary dramatically with the choice of parameters. We define an intrinsic notion of Mapper instability that measures the variability of the output as a function of the choice of parameters required to construct a Mapper output. Our results and discussion are general and apply to all Mapper-type algorithms. We derive theoretical results that provide estimates for the instability and suggest practical ways to control it. We provide also experiments to illustrate our results and in particular we demonstrate that a reliable candidate Mapper output can be identified as a local minimum of instability regarded as a function of Mapper input parameters. | The work of Carrière, Michel and Oudot @cite_11 represents the ideas most similar to those of the present paper. Carrière and Oudot @cite_31 provide bounds on the stability of Mapper in a deterministic setting on manifolds by comparing it to the Reeb graph. This is achieved through a feature set obtained from an extended persistence diagram of the Mapper graph with respect to the filter function. In particular, the features correspond to loops and flares in the Mapper graph. Through further statistical analysis @cite_11 , bounds are determined on the expectation of the bottleneck distance between the features of the Mapper and Reeb graphs, assuming points are sampled from an underlying manifold. This provides a way to obtain confidence regions for features on the persistence diagram that may be used to identify reliable Mapper outputs. | {
"cite_N": [
"@cite_31",
"@cite_11"
],
"mid": [
"2963771441",
"2963606893"
],
"abstract": [
"Given a continuous function f:X->R and a cover I of its image by intervals, the Mapper is the nerve of a refinement of the pullback cover f^ -1 (I). Despite its success in applications, little is known about the structure and stability of this construction from a theoretical point of view. As a pixelized version of the Reeb graph of f, it is expected to capture a subset of its features (branches, holes), depending on how the interval cover is positioned with respect to the critical values of the function. Its stability should also depend on this positioning. We propose a theoretical framework relating the structure of the Mapper to that of the Reeb graph, making it possible to predict which features will be present and which will be absent in the Mapper given the function and the cover, and for each feature, to quantify its degree of (in-)stability. Using this framework, we can derive guarantees on the structure of the Mapper, on its stability, and on its convergence to the Reeb graph as the granularity of the cover I goes to zero.",
"In this article, we study the question of the statistical convergence of the 1-dimensional Mapper to its continuous analogue, the Reeb graph. We show that the Mapper is an optimal estimator of the Reeb graph, which gives, as a byproduct, a method to automatically tune its parameters and compute confidence regions on its topological features, such as its loops and flares. This allows to circumvent the issue of testing a large grid of parameters and keeping the most stable ones in the brute-force setting, which is widely used in visualization, clustering and feature selection with the Mapper."
]
} |
1906.01507 | 2948336192 | Mapper is an unsupervised machine learning algorithm generalising the notion of clustering to obtain a geometric description of a dataset. The procedure splits the data into possibly overlapping bins which are then clustered. The output of the algorithm is a graph where nodes represent clusters and edges represent the sharing of data points between two clusters. However, several parameters must be selected before applying Mapper and the resulting graph may vary dramatically with the choice of parameters. We define an intrinsic notion of Mapper instability that measures the variability of the output as a function of the choice of parameters required to construct a Mapper output. Our results and discussion are general and apply to all Mapper-type algorithms. We derive theoretical results that provide estimates for the instability and suggest practical ways to control it. We provide also experiments to illustrate our results and in particular we demonstrate that a reliable candidate Mapper output can be identified as a local minimum of instability regarded as a function of Mapper input parameters. | Our approach provides a more general setting than that of @cite_11 . Points are only assumed to be sampled from an underlying probability distribution rather than a distribution on a smooth manifold. Furthermore, the required covers may be chosen arbitrarily rather than being restricted to those arising from an interval cover and filter function. | {
"cite_N": [
"@cite_11"
],
"mid": [
"2963606893"
],
"abstract": [
"In this article, we study the question of the statistical convergence of the 1-dimensional Mapper to its continuous analogue, the Reeb graph. We show that the Mapper is an optimal estimator of the Reeb graph, which gives, as a byproduct, a method to automatically tune its parameters and compute confidence regions on its topological features, such as its loops and flares. This allows to circumvent the issue of testing a large grid of parameters and keeping the most stable ones in the brute-force setting, which is widely used in visualization, clustering and feature selection with the Mapper."
]
} |
1906.01507 | 2948336192 | Mapper is an unsupervised machine learning algorithm generalising the notion of clustering to obtain a geometric description of a dataset. The procedure splits the data into possibly overlapping bins which are then clustered. The output of the algorithm is a graph where nodes represent clusters and edges represent the sharing of data points between two clusters. However, several parameters must be selected before applying Mapper and the resulting graph may vary dramatically with the choice of parameters. We define an intrinsic notion of Mapper instability that measures the variability of the output as a function of the choice of parameters required to construct a Mapper output. Our results and discussion are general and apply to all Mapper-type algorithms. We derive theoretical results that provide estimates for the instability and suggest practical ways to control it. We provide also experiments to illustrate our results and in particular we demonstrate that a reliable candidate Mapper output can be identified as a local minimum of instability regarded as a function of Mapper input parameters. | Despite the ubiquity of clustering techniques within unsupervised learning, it has proved difficult to establish a good theoretical foundation for this methodology. A lot of effort has been devoted to the study of quality and stability of clustering. Highlights include the famous impossibility theorem of Kleinberg @cite_15 , who proved that there is no clustering procedure satisfying all of his natural axioms. This was taken up by Carlsson and Mémoli @cite_34 , who proposed an axiomatic approach allowing them to provide an existence and uniqueness result for single-linkage clustering. More recently, Strazzeri and Sánchez-García @cite_19 provided a clustering procedure that satisfies Kleinberg's axioms after an alteration of the consistency axiom. | {
"cite_N": [
"@cite_19",
"@cite_15",
"@cite_34"
],
"mid": [
"2808192946",
"2111004121",
"2165879774"
],
"abstract": [
"Kleinberg introduced three natural clustering properties, or axioms, and showed they cannot be simultaneously satisfied by any clustering algorithm. We present a new clustering property, Monotonic Consistency, which avoids the well-known problematic behaviour of Kleinberg's Consistency axiom, and the impossibility result. Namely, we describe a clustering algorithm, Morse Clustering, inspired by Morse Theory in Differential Topology, which satisfies Kleinberg's original axioms with Consistency replaced by Monotonic Consistency. Morse clustering uncovers the underlying flow structure on a set or graph and returns a partition into trees representing basins of attraction of critical vertices. We also generalise Kleinberg's axiomatic approach to sparse graphs, showing an impossibility result for Consistency, and a possibility result for Monotonic Consistency and Morse clustering.",
"Although the study of clustering is centered around an intuitively compelling goal, it has been very difficult to develop a unified framework for reasoning about it at a technical level, and profoundly diverse approaches to clustering abound in the research community. Here we suggest a formal perspective on the difficulty in finding such a unification, in the form of an impossibility theorem: for a set of three simple properties, we show that there is no clustering function satisfying all three. Relaxations of these properties expose some of the interesting (and unavoidable) trade-offs at work in well-studied clustering techniques such as single-linkage, sum-of-pairs, k-means, and k-median.",
"We study hierarchical clustering schemes under an axiomatic view. We show that within this framework, one can prove a theorem analogous to one of Kleinberg (2002), in which one obtains an existence and uniqueness theorem instead of a non-existence result. We explore further properties of this unique scheme: stability and convergence are established. We represent dendrograms as ultrametric spaces and use tools from metric geometry, namely the Gromov-Hausdorff distance, to quantify the degree to which perturbations in the input metric space affect the result of hierarchical methods."
]
} |
1906.01507 | 2948336192 | Mapper is an unsupervised machine learning algorithm generalising the notion of clustering to obtain a geometric description of a dataset. The procedure splits the data into possibly overlapping bins which are then clustered. The output of the algorithm is a graph where nodes represent clusters and edges represent the sharing of data points between two clusters. However, several parameters must be selected before applying Mapper and the resulting graph may vary dramatically with the choice of parameters. We define an intrinsic notion of Mapper instability that measures the variability of the output as a function of the choice of parameters required to construct a Mapper output. Our results and discussion are general and apply to all Mapper-type algorithms. We derive theoretical results that provide estimates for the instability and suggest practical ways to control it. We provide also experiments to illustrate our results and in particular we demonstrate that a reliable candidate Mapper output can be identified as a local minimum of instability regarded as a function of Mapper input parameters. | The work of Ackerman and Ben-David @cite_50 studied clustering quality measures rather than the clustering functions, which provides a richer setting in which an alternative to Kleinberg's axioms can be consistently stated. | {
"cite_N": [
"@cite_50"
],
"mid": [
"2096879261"
],
"abstract": [
"Aiming towards the development of a general clustering theory, we discuss abstract axiomatization for clustering. In this respect, we follow up on the work of Kleinberg, ([1]) that showed an impossibility result for such axiomatization. We argue that an impossibility result is not an inherent feature of clustering, but rather, to a large extent, it is an artifact of the specific formalism used in [1]. As opposed to previous work focusing on clustering functions, we propose to address clustering quality measures as the object to be axiomatized. We show that principles like those formulated in Kleinberg's axioms can be readily expressed in the latter framework without leading to inconsistency. A clustering-quality measure (CQM) is a function that, given a data set and its partition into clusters, returns a non-negative real number representing how strong or conclusive the clustering is. We analyze what clustering-quality measures should look like and introduce a set of requirements (axioms) for such measures. Our axioms capture the principles expressed by Kleinberg's axioms while retaining consistency. We propose several natural clustering quality measures, all satisfying the proposed axioms. In addition, we analyze the computational complexity of evaluating the quality of a given clustering and show that, for the proposed CQMs, it can be computed in polynomial time."
]
} |
1906.01507 | 2948336192 | Mapper is an unsupervised machine learning algorithm generalising the notion of clustering to obtain a geometric description of a dataset. The procedure splits the data into possibly overlapping bins which are then clustered. The output of the algorithm is a graph where nodes represent clusters and edges represent the sharing of data points between two clusters. However, several parameters must be selected before applying Mapper and the resulting graph may vary dramatically with the choice of parameters. We define an intrinsic notion of Mapper instability that measures the variability of the output as a function of the choice of parameters required to construct a Mapper output. Our results and discussion are general and apply to all Mapper-type algorithms. We derive theoretical results that provide estimates for the instability and suggest practical ways to control it. We provide also experiments to illustrate our results and in particular we demonstrate that a reliable candidate Mapper output can be identified as a local minimum of instability regarded as a function of Mapper input parameters. | The most comprehensive theoretical study of clustering stability by Ben-David and von Luxburg @cite_43 defined a notion of clustering stability and related it to properties of the decision boundaries of the algorithm. This is the starting point of the theoretical part of this work. We extend these notions to account for the considerably more complex Mapper construction. | {
"cite_N": [
"@cite_43"
],
"mid": [
"1516452477"
],
"abstract": [
"In this paper, we investigate stability-based methods for cluster model selection, in particular to select the number K of clusters. The scenario under consideration is that clustering is performed by minimizing a certain clustering quality function, and that a unique global minimizer exists. On the one hand we show that stability can be upper bounded by certain properties of the optimal clustering, namely by the mass in a small tube around the cluster boundaries. On the other hand, we provide counterexamples which show that a reverse statement is not true in general. Finally, we give some examples and arguments why, from a theoretic point of view, using clustering stability in a high sample setting can be problematic. It can be seen that distribution-free guarantees bounding the difference between the finite sample stability and the “true stability” cannot exist, unless one makes strong assumptions on the underlying distribution."
]
} |
1906.01507 | 2948336192 | Mapper is an unsupervised machine learning algorithm generalising the notion of clustering to obtain a geometric description of a dataset. The procedure splits the data into possibly overlapping bins which are then clustered. The output of the algorithm is a graph where nodes represent clusters and edges represent the sharing of data points between two clusters. However, several parameters must be selected before applying Mapper and the resulting graph may vary dramatically with the choice of parameters. We define an intrinsic notion of Mapper instability that measures the variability of the output as a function of the choice of parameters required to construct a Mapper output. Our results and discussion are general and apply to all Mapper-type algorithms. We derive theoretical results that provide estimates for the instability and suggest practical ways to control it. We provide also experiments to illustrate our results and in particular we demonstrate that a reliable candidate Mapper output can be identified as a local minimum of instability regarded as a function of Mapper input parameters. | This paper is organised as follows. In , we discuss some related work and its connections to the current paper. In , we give background on clustering stability required for the remainder of the paper. This allows us in to set out how the ideas of Ben-David and von Luxburg @cite_43 can be generalised to the Mapper setting. In particular, we introduce Mapper functions in Definition , which provide a new way of expressing Mapper outputs. Crucially, this is used to define a similarity metric between Mapper functions, @math in Definition . The Distance @math captures the structure of the whole Mapper output and leads to the definition of our notion of instability of Mapper (Definition ) with respect to a large class of clustering procedures. In we present an algorithm allowing us to experimentally obtain values of instability. 
This leads in section to interesting experimental results which suggest that regions of relatively high instability correspond to structural changes in the Mapper output. Hence local minima of the instability function with respect to parameter choices are good candidates for parameter selection, allowing us to study Mapper through variations of all the parameters. | {
"cite_N": [
"@cite_43"
],
"mid": [
"1516452477"
],
"abstract": [
"In this paper, we investigate stability-based methods for cluster model selection, in particular to select the number K of clusters. The scenario under consideration is that clustering is performed by minimizing a certain clustering quality function, and that a unique global minimizer exists. On the one hand we show that stability can be upper bounded by certain properties of the optimal clustering, namely by the mass in a small tube around the cluster boundaries. On the other hand, we provide counterexamples which show that a reverse statement is not true in general. Finally, we give some examples and arguments why, from a theoretic point of view, using clustering stability in a high sample setting can be problematic. It can be seen that distribution-free guarantees bounding the difference between the finite sample stability and the “true stability” cannot exist, unless one makes strong assumptions on the underlying distribution."
]
} |
1906.01452 | 2948749820 | In this paper, the problem of describing visual contents of a video sequence with natural language is addressed. Unlike previous video captioning work mainly exploiting the cues of video contents to make a language description, we propose a reconstruction network (RecNet) in a novel encoder-decoder-reconstructor architecture, which leverages both forward (video to sentence) and backward (sentence to video) flows for video captioning. Specifically, the encoder-decoder component makes use of the forward flow to produce a sentence description based on the encoded video semantic features. Two types of reconstructors are subsequently proposed to employ the backward flow and reproduce the video features from local and global perspectives, respectively, capitalizing on the hidden state sequence generated by the decoder. Moreover, in order to make a comprehensive reconstruction of the video features, we propose to fuse the two types of reconstructors together. The generation loss yielded by the encoder-decoder component and the reconstruction loss introduced by the reconstructor are jointly cast into training the proposed RecNet in an end-to-end fashion. Furthermore, the RecNet is fine-tuned by CIDEr optimization via reinforcement learning, which significantly boosts the captioning performance. Experimental results on benchmark datasets demonstrate that the proposed reconstructor can boost the performance of video captioning consistently. | In this section, we first introduce two types of video captioning: template-based approaches @cite_10 @cite_13 @cite_24 @cite_9 @cite_32 and sequence learning approaches @cite_38 @cite_35 @cite_21 @cite_42 @cite_46 @cite_41 @cite_7 @cite_60 @cite_53 @cite_29 @cite_27 @cite_59 , and then introduce the application of dual learning. | {
"cite_N": [
"@cite_38",
"@cite_35",
"@cite_7",
"@cite_60",
"@cite_41",
"@cite_10",
"@cite_9",
"@cite_21",
"@cite_42",
"@cite_32",
"@cite_53",
"@cite_29",
"@cite_24",
"@cite_27",
"@cite_59",
"@cite_46",
"@cite_13"
],
"mid": [
"1586939924",
"2139501017",
"2608022654",
"1573040851",
"2527349934",
"1601567445",
"1596841185",
"2964241990",
"2951183276",
"877909479",
"2951159095",
"2607119937",
"2110933980",
"2604141702",
"2962799512",
"2523993696",
"2142900973"
],
"abstract": [
"Recent progress in using recurrent neural networks (RNNs) for image description has motivated the exploration of their application for video description. However, while images are static, working with videos requires modeling their dynamic temporal structure and then properly integrating that information into a natural language description model. In this context, we propose an approach that successfully takes into account both the local and global temporal structure of videos to produce descriptions. First, our approach incorporates a spatial temporal 3-D convolutional neural network (3-D CNN) representation of the short temporal dynamics. The 3-D CNN representation is trained on video action recognition tasks, so as to produce a representation that is tuned to human motion and behavior. Second we propose a temporal attention mechanism that allows to go beyond local temporal modeling and learns to automatically select the most relevant temporal segments given the text-generating RNN. Our approach exceeds the current state-of-art for both BLEU and METEOR metrics on the Youtube2Text dataset. We also present results on a new, larger and more challenging dataset of paired video and natural language descriptions.",
"Real-world videos often have complex dynamics, methods for generating open-domain video descriptions should be sensitive to temporal structure and allow both input (sequence of frames) and output (sequence of words) of variable length. To approach this problem we propose a novel end-to-end sequence-to-sequence model to generate captions for videos. For this we exploit recurrent neural networks, specifically LSTMs, which have demonstrated state-of-the-art performance in image caption generation. Our LSTM model is trained on video-sentence pairs and learns to associate a sequence of video frames to a sequence of words in order to generate a description of the event in the video clip. Our model naturally is able to learn the temporal structure of the sequence of frames as well as the sequence model of the generated sentences, i.e. a language model. We evaluate several variants of our model that exploit different visual features on a standard set of YouTube videos and two movie description datasets (M-VAD and MPII-MD).",
"In this paper, we propose a novel two-stream framework based on combinational deep neural networks. The framework is mainly composed of two components: one is a parallel two-stream encoding component which learns video encoding from multiple sources using 3D convolutional neural networks and the other is a long-short-term-memory (LSTM)-based decoding language model which transfers the input encoded video representations to text descriptions. The merits of our proposed model are: 1) It extracts both temporal and spatial features by exploring the usage of 3D convolutional networks on both raw RGB frames and motion history images. 2) Our model can dynamically tune the weights of different feature channels since the network is trained end-to-end from learning combinational encoding of multiple features to LSTM-based language model. Our model is evaluated on three public video description datasets: one YouTube clips dataset (Microsoft Video Description Corpus) and two large movie description datasets (MPII Corpus and Montreal Video Annotation Dataset) and achieves comparable or better performance than the state-of-the-art approaches in video caption generation.",
"Automatically describing video content with natural language is a fundamental challenge of computer vision. Recurrent Neural Networks (RNNs), which models sequence dynamics, has attracted increasing attention on visual interpretation. However, most existing approaches generate a word locally with the given previous words and the visual content, while the relationship between sentence semantics and visual content is not holistically exploited. As a result, the generated sentences may be contextually correct but the semantics (e.g., subjects, verbs or objects) are not true. This paper presents a novel unified framework, named Long Short-Term Memory with visual-semantic Embedding (LSTM-E), which can simultaneously explore the learning of LSTM and visual-semantic embedding. The former aims to locally maximize the probability of generating the next word given previous words and visual content, while the latter is to create a visual-semantic embedding space for enforcing the relationship between the semantics of the entire sentence and visual content. The experiments on YouTube2Text dataset show that our proposed LSTM-E achieves to-date the best published performance in generating natural sentences: 45.3 and 31.0 in terms of BLEU@4 and METEOR, respectively. Superior performances are also reported on two movie description datasets (M-VAD and MPII-MD). In addition, we demonstrate that LSTM-E outperforms several state-of-the-art techniques in predicting Subject-Verb-Object (SVO) triplets.",
"Describing videos with natural language is one of the ultimate goals of video understanding. Video records multi-modal information including image, motion, aural, speech and so on. MSR Video to Language Challenge provides a good chance to study multi-modality fusion in caption task. In this paper, we propose the multi-modal fusion encoder and integrate it with text sequence decoder into an end-to-end video caption framework. Features from visual, aural, speech and meta modalities are fused together to represent the video contents. Long Short-Term Memory Recurrent Neural Networks (LSTM-RNNs) are then used as the decoder to generate natural language sentences. Experimental results show the effectiveness of multi-modal fusion encoder trained in the end-to-end framework, which achieved top performance in both common metrics evaluation and human evaluation.",
"We propose a method for describing human activities from video images based on concept hierarchies of actions. Major difficulty in transforming video images into textual descriptions is how to bridge a semantic gap between them, which is also known as inverse Hollywood problem. In general, the concepts of events or actions of human can be classified by semantic primitives. By associating these concepts with the semantic features extracted from video images, appropriate syntactic components such as verbs, objects, etc. are determined and then translated into natural language sentences. We also demonstrate the performance of the proposed method by several experiments.",
"Humans can easily describe what they see in a coherent way and at varying level of detail. However, existing approaches for automatic video description focus on generating only single sentences and are not able to vary the descriptions’ level of detail. In this paper, we address both of these limitations: for a variable level of detail we produce coherent multi-sentence descriptions of complex videos. To understand the difference between detailed and short descriptions, we collect and analyze a video description corpus of three levels of detail. We follow a two-step approach where we first learn to predict a semantic representation (SR) from video and then generate natural language descriptions from it. For our multi-sentence descriptions we model across-sentence consistency at the level of the SR by enforcing a consistent topic. Human judges rate our descriptions as more readable, correct, and relevant than related work.",
"Solving the visual symbol grounding problem has long been a goal of artificial intelligence. The field appears to be advancing closer to this goal with recent breakthroughs in deep learning for natural language grounding in static images. In this paper, we propose to translate videos directly to sentences using a unified deep neural network with both convolutional and recurrent structure. Described video datasets are scarce, and most existing methods have been applied to toy domains with a small vocabulary of possible words. By transferring knowledge from 1.2M+ images with category labels and 100,000+ images with captions, our method is able to create sentence descriptions of open-domain videos with large vocabularies. We compare our approach with recent work using language generation metrics, subject, verb, and object prediction accuracy, and a human evaluation.",
"Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or \"temporally deep\", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are \"doubly deep\" in that they can be compositional in spatial and temporal \"layers\". Such models may have advantages when target concepts are complex and/or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they directly can map variable-length inputs (e.g., video frames) to variable length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and/or optimized.",
"Recently, joint video-language modeling has been attracting more and more attention. However, most existing approaches focus on exploring the language model upon on a fixed visual model. In this paper, we propose a unified framework that jointly models video and the corresponding text sentences. The framework consists of three parts: a compositional semantics language model, a deep video model and a joint embedding model. In our language model, we propose a dependency-tree structure model that embeds sentence into a continuous vector space, which preserves visually grounded meanings and word order. In the visual model, we leverage deep neural networks to capture essential semantic information from videos. In the joint embedding model, we minimize the distance of the outputs of the deep video model and compositional language model in the joint space, and update these two models jointly. Based on these three parts, our system is able to accomplish three tasks: 1) natural language generation, and 2) video retrieval and 3) language retrieval. In the experiments, the results show our approach outperforms SVM, CRF and CCA baselines in predicting Subject-Verb-Object triplet and natural sentence generation, and is better than CCA in video retrieval and language retrieval tasks.",
"Automatically generating natural language descriptions of videos plays a fundamental challenge for computer vision community. Most recent progress in this problem has been achieved through employing 2-D and or 3-D Convolutional Neural Networks (CNN) to encode video content and Recurrent Neural Networks (RNN) to decode a sentence. In this paper, we present Long Short-Term Memory with Transferred Semantic Attributes (LSTM-TSA)---a novel deep architecture that incorporates the transferred semantic attributes learnt from images and videos into the CNN plus RNN framework, by training them in an end-to-end manner. The design of LSTM-TSA is highly inspired by the facts that 1) semantic attributes play a significant contribution to captioning, and 2) images and videos carry complementary semantics and thus can reinforce each other for captioning. To boost video captioning, we propose a novel transfer unit to model the mutually correlated attributes learnt from images and videos. Extensive experiments are conducted on three public datasets, i.e., MSVD, M-VAD and MPII-MD. Our proposed LSTM-TSA achieves to-date the best published performance in sentence generation on MSVD: 52.8 and 74.0 in terms of BLEU@4 and CIDEr-D. Superior results when compared to state-of-the-art methods are also reported on M-VAD and MPII-MD.",
"This paper focuses on a novel and challenging vision task, dense video captioning, which aims to automatically describe a video clip with multiple informative and diverse caption sentences. The proposed method is trained without explicit annotation of fine-grained sentence to video region-sequence correspondence, but is only based on weak video-level sentence annotations. It differs from existing video captioning systems in three technical aspects. First, we propose lexical fully convolutional neural networks (Lexical-FCN) with weakly supervised multi-instance multi-label learning to weakly link video regions with lexical labels. Second, we introduce a novel submodular maximization scheme to generate multiple informative and diverse region-sequences based on the Lexical-FCN outputs. A winner-takes-all scheme is adopted to weakly associate sentences to region-sequences in the training phase. Third, a sequence-to-sequence learning based language model is trained with the weakly supervised information obtained through the association process. We show that the proposed method can not only produce informative and diverse dense captions, but also outperform state-of-the-art single video captioning methods by a large margin.",
"Humans use rich natural language to describe and communicate visual perceptions. In order to provide natural language descriptions for visual content, this paper combines two important ingredients. First, we generate a rich semantic representation of the visual content including e.g. object and activity labels. To predict the semantic representation we learn a CRF to model the relationships between different components of the visual input. And second, we propose to formulate the generation of natural language as a machine translation problem using the semantic representation as source language and the generated sentences as target language. For this we exploit the power of a parallel corpus of videos and textual descriptions and adapt statistical machine translation to translate between our two languages. We evaluate our video descriptions on the TACoS dataset, which contains video snippets aligned with sentence descriptions. Using automatic evaluation and human judgments we show significant improvements over several baseline approaches, motivated by prior work. Our translation approach also shows improvements over related work on an image description task.",
"",
"Dense video captioning is a newly emerging task that aims at both localizing and describing all events in a video. We identify and tackle two challenges on this task, namely, (1) how to utilize both past and future contexts for accurate event proposal predictions, and (2) how to construct informative input to the decoder for generating natural event descriptions. First, previous works predominantly generate temporal event proposals in the forward direction, which neglects future video context. We propose a bidirectional proposal method that effectively exploits both past and future contexts to make proposal predictions. Second, different events ending at (nearly) the same time are indistinguishable in the previous works, resulting in the same captions. We solve this problem by representing each event with an attentive fusion of hidden states from the proposal module and video contents (e.g., C3D features). We further propose a novel context gating mechanism to balance the contributions from the current event and its surrounding contexts dynamically. We empirically show that our attentively fused event representation is superior to the proposal hidden states or video contents alone. By coupling proposal and captioning modules into one unified framework, our model outperforms the state-of-the-arts on the ActivityNet Captions dataset with a relative gain of over 100% (Meteor score increases from 4.82 to 9.65).",
"Real-world web videos often contain cues to supplement visual information for generating natural language descriptions. In this paper we propose a sequence-to-sequence model which explores such auxiliary information. In particular, audio and the topic of the video are used in addition to the visual information in a multimodal framework to generate coherent descriptions of videos \"in the wild\". In contrast to current encoder-decoder based models which exploit visual information only during the encoding stage, our model fuses multiple sources of information judiciously, showing improvement over using the different modalities separately. We based our multimodal video description network on the state-of-the-art sequence to sequence video to text (S2VT) model and extended it to take advantage of multiple modalities. Extensive experiments on the challenging MSR-VTT dataset are carried out to show the superior performance of the proposed approach on natural videos found in the web.",
"Despite a recent push towards large-scale object recognition, activity recognition remains limited to narrow domains and small vocabularies of actions. In this paper, we tackle the challenge of recognizing and describing activities \"in-the-wild\". We present a solution that takes a short video clip and outputs a brief sentence that sums up the main activity in the video, such as the actor, the action and its object. Unlike previous work, our approach works on out-of-domain actions: it does not require training videos of the exact activity. If it cannot find an accurate prediction for a pre-trained model, it finds a less specific answer that is also plausible from a pragmatic standpoint. We use semantic hierarchies learned from the data to help to choose an appropriate level of generalization, and priors learned from Web-scale natural language corpora to penalize unlikely combinations of actors/actions/objects, we also use a Web-scale language model to \"fill in\" novel verbs, i.e. when the verb does not appear in the training set. We evaluate our method on a large YouTube corpus and demonstrate it is able to generate short sentence descriptions of video clips better than baseline approaches."
]
} |
1906.01452 | 2948749820 | In this paper, the problem of describing visual contents of a video sequence with natural language is addressed. Unlike previous video captioning work mainly exploiting the cues of video contents to make a language description, we propose a reconstruction network (RecNet) in a novel encoder-decoder-reconstructor architecture, which leverages both forward (video to sentence) and backward (sentence to video) flows for video captioning. Specifically, the encoder-decoder component makes use of the forward flow to produce a sentence description based on the encoded video semantic features. Two types of reconstructors are subsequently proposed to employ the backward flow and reproduce the video features from local and global perspectives, respectively, capitalizing on the hidden state sequence generated by the decoder. Moreover, in order to make a comprehensive reconstruction of the video features, we propose to fuse the two types of reconstructors together. The generation loss yielded by the encoder-decoder component and the reconstruction loss introduced by the reconstructor are jointly cast into training the proposed RecNet in an end-to-end fashion. Furthermore, the RecNet is fine-tuned by CIDEr optimization via reinforcement learning, which significantly boosts the captioning performance. Experimental results on benchmark datasets demonstrate that the proposed reconstructor can boost the performance of video captioning consistently. | Template-based methods first define some specific rules for language grammar, and then parse the sentence into several components such as subject, verb, and object. The obtained sentence fragments are associated with words detected from the visual content to produce the final description about an input video with predefined templates. 
For example, a concept hierarchy of actions was introduced to describe human activities in @cite_10 , while a semantic hierarchy was defined in @cite_13 to learn the semantic relationship between different sentence fragments. In @cite_24 , the conditional random field (CRF) was adopted to model the connections between objects and activities of the visual input and generate the semantic features for description. Besides, Xu proposed a unified framework consisting of a semantic language model, a deep video model, and a joint embedding model to learn the association between videos and natural sentences @cite_32 . However, as stated in @cite_53 , the aforementioned approaches highly depend on predefined templates and are thus limited by the fixed syntactical structure, which is inflexible for sentence generation. | {
"cite_N": [
"@cite_10",
"@cite_53",
"@cite_32",
"@cite_24",
"@cite_13"
],
"mid": [
"1601567445",
"2951159095",
"877909479",
"2110933980",
"2142900973"
],
"abstract": [
"We propose a method for describing human activities from video images based on concept hierarchies of actions. Major difficulty in transforming video images into textual descriptions is how to bridge a semantic gap between them, which is also known as inverse Hollywood problem. In general, the concepts of events or actions of human can be classified by semantic primitives. By associating these concepts with the semantic features extracted from video images, appropriate syntactic components such as verbs, objects, etc. are determined and then translated into natural language sentences. We also demonstrate the performance of the proposed method by several experiments.",
"Automatically generating natural language descriptions of videos plays a fundamental challenge for computer vision community. Most recent progress in this problem has been achieved through employing 2-D and or 3-D Convolutional Neural Networks (CNN) to encode video content and Recurrent Neural Networks (RNN) to decode a sentence. In this paper, we present Long Short-Term Memory with Transferred Semantic Attributes (LSTM-TSA)---a novel deep architecture that incorporates the transferred semantic attributes learnt from images and videos into the CNN plus RNN framework, by training them in an end-to-end manner. The design of LSTM-TSA is highly inspired by the facts that 1) semantic attributes play a significant contribution to captioning, and 2) images and videos carry complementary semantics and thus can reinforce each other for captioning. To boost video captioning, we propose a novel transfer unit to model the mutually correlated attributes learnt from images and videos. Extensive experiments are conducted on three public datasets, i.e., MSVD, M-VAD and MPII-MD. Our proposed LSTM-TSA achieves to-date the best published performance in sentence generation on MSVD: 52.8 and 74.0 in terms of BLEU@4 and CIDEr-D. Superior results when compared to state-of-the-art methods are also reported on M-VAD and MPII-MD.",
"Recently, joint video-language modeling has been attracting more and more attention. However, most existing approaches focus on exploring the language model upon on a fixed visual model. In this paper, we propose a unified framework that jointly models video and the corresponding text sentences. The framework consists of three parts: a compositional semantics language model, a deep video model and a joint embedding model. In our language model, we propose a dependency-tree structure model that embeds sentence into a continuous vector space, which preserves visually grounded meanings and word order. In the visual model, we leverage deep neural networks to capture essential semantic information from videos. In the joint embedding model, we minimize the distance of the outputs of the deep video model and compositional language model in the joint space, and update these two models jointly. Based on these three parts, our system is able to accomplish three tasks: 1) natural language generation, and 2) video retrieval and 3) language retrieval. In the experiments, the results show our approach outperforms SVM, CRF and CCA baselines in predicting Subject-Verb-Object triplet and natural sentence generation, and is better than CCA in video retrieval and language retrieval tasks.",
"Humans use rich natural language to describe and communicate visual perceptions. In order to provide natural language descriptions for visual content, this paper combines two important ingredients. First, we generate a rich semantic representation of the visual content including e.g. object and activity labels. To predict the semantic representation we learn a CRF to model the relationships between different components of the visual input. And second, we propose to formulate the generation of natural language as a machine translation problem using the semantic representation as source language and the generated sentences as target language. For this we exploit the power of a parallel corpus of videos and textual descriptions and adapt statistical machine translation to translate between our two languages. We evaluate our video descriptions on the TACoS dataset, which contains video snippets aligned with sentence descriptions. Using automatic evaluation and human judgments we show significant improvements over several baseline approaches, motivated by prior work. Our translation approach also shows improvements over related work on an image description task.",
"Despite a recent push towards large-scale object recognition, activity recognition remains limited to narrow domains and small vocabularies of actions. In this paper, we tackle the challenge of recognizing and describing activities \"in-the-wild\". We present a solution that takes a short video clip and outputs a brief sentence that sums up the main activity in the video, such as the actor, the action and its object. Unlike previous work, our approach works on out-of-domain actions: it does not require training videos of the exact activity. If it cannot find an accurate prediction for a pre-trained model, it finds a less specific answer that is also plausible from a pragmatic standpoint. We use semantic hierarchies learned from the data to help to choose an appropriate level of generalization, and priors learned from Web-scale natural language corpora to penalize unlikely combinations of actors/actions/objects, we also use a Web-scale language model to \"fill in\" novel verbs, i.e. when the verb does not appear in the training set. We evaluate our method on a large YouTube corpus and demonstrate it is able to generate short sentence descriptions of video clips better than baseline approaches."
]
} |
1906.01452 | 2948749820 | In this paper, the problem of describing visual contents of a video sequence with natural language is addressed. Unlike previous video captioning work mainly exploiting the cues of video contents to make a language description, we propose a reconstruction network (RecNet) in a novel encoder-decoder-reconstructor architecture, which leverages both forward (video to sentence) and backward (sentence to video) flows for video captioning. Specifically, the encoder-decoder component makes use of the forward flow to produce a sentence description based on the encoded video semantic features. Two types of reconstructors are subsequently proposed to employ the backward flow and reproduce the video features from local and global perspectives, respectively, capitalizing on the hidden state sequence generated by the decoder. Moreover, in order to make a comprehensive reconstruction of the video features, we propose to fuse the two types of reconstructors together. The generation loss yielded by the encoder-decoder component and the reconstruction loss introduced by the reconstructor are jointly cast into training the proposed RecNet in an end-to-end fashion. Furthermore, the RecNet is fine-tuned by CIDEr optimization via reinforcement learning, which significantly boosts the captioning performance. Experimental results on benchmark datasets demonstrate that the proposed reconstructor can boost the performance of video captioning consistently. | More recently, reinforcement learning has shown benefits on video captioning tasks. Pasunuru and Bansal employed reinforcement learning to directly optimize the CIDEnt scores (an entailment-enhanced variant of CIDEr) and achieved state-of-the-art results on the MSR-VTT dataset @cite_61 . Wang proposed a hierarchical reinforcement learning framework, where a manager guides a worker to generate semantic segments about activities to produce more detailed descriptions. | {
"cite_N": [
"@cite_61"
],
"mid": [
"2742943414"
],
"abstract": [
"Sequence-to-sequence models have shown promising improvements on the temporal task of video captioning, but they optimize word-level cross-entropy loss during training. First, using policy gradient and mixed-loss methods for reinforcement learning, we directly optimize sentence-level task-based metrics (as rewards), achieving significant improvements over the baseline, based on both automatic metrics and human evaluation on multiple datasets. Next, we propose a novel entailment-enhanced reward (CIDEnt) that corrects phrase-matching based metrics (such as CIDEr) to only allow for logically-implied partial matches and avoid contradictions, achieving further significant improvements over the CIDEr-reward model. Overall, our CIDEnt-reward model achieves the new state-of-the-art on the MSR-VTT dataset."
]
} |
1906.01452 | 2948749820 | In this paper, the problem of describing visual contents of a video sequence with natural language is addressed. Unlike previous video captioning work mainly exploiting the cues of video contents to make a language description, we propose a reconstruction network (RecNet) in a novel encoder-decoder-reconstructor architecture, which leverages both forward (video to sentence) and backward (sentence to video) flows for video captioning. Specifically, the encoder-decoder component makes use of the forward flow to produce a sentence description based on the encoded video semantic features. Two types of reconstructors are subsequently proposed to employ the backward flow and reproduce the video features from local and global perspectives, respectively, capitalizing on the hidden state sequence generated by the decoder. Moreover, in order to make a comprehensive reconstruction of the video features, we propose to fuse the two types of reconstructors together. The generation loss yielded by the encoder-decoder component and the reconstruction loss introduced by the reconstructor are jointly cast into training the proposed RecNet in an end-to-end fashion. Furthermore, the RecNet is fine-tuned by CIDEr optimization via reinforcement learning, which significantly boosts the captioning performance. Experimental results on benchmark datasets demonstrate that the proposed reconstructor can boost the performance of video captioning consistently. | As far as we know, the dual learning mechanism has not been employed in video captioning, but it has been widely used in NMT @cite_4 @cite_58 @cite_1 . In @cite_4 , the source sentences are reproduced from the target-side hidden states, and the accuracy of the reconstructed source provides a constraint for the decoder to embed more information of the source language into the target language.
In @cite_58 , dual learning is employed to train models for the inter-translation of English and French, obtaining significant improvements on both the English-to-French and French-to-English tasks. | {
"cite_N": [
"@cite_1",
"@cite_58",
"@cite_4"
],
"mid": [
"2733239165",
"2546938941",
"2963551569"
],
"abstract": [
"Many supervised learning tasks are emerged in dual forms, e.g., English-to-French translation vs. French-to-English translation, speech recognition vs. text to speech, and image classification vs. image generation. Two dual tasks have intrinsic connections with each other due to the probabilistic correlation between their models. This connection is, however, not effectively utilized today, since people usually train the models of two dual tasks separately and independently. In this work, we propose training the models of two dual tasks simultaneously, and explicitly exploiting the probabilistic correlation between them to regularize the training process. For ease of reference, we call the proposed approach . We demonstrate that dual supervised learning can improve the practical performances of both tasks, for various applications including machine translation, image processing, and sentiment analysis.",
"While neural machine translation (NMT) is making good progress in the past two years, tens of millions of bilingual sentence pairs are needed for its training. However, human labeling is very costly. To tackle this training data bottleneck, we develop a dual-learning mechanism, which can enable an NMT system to automatically learn from unlabeled data through a dual-learning game. This mechanism is inspired by the following observation: any machine translation task has a dual task, e.g., English-to-French translation (primal) versus French-to-English translation (dual); the primal and dual tasks can form a closed loop, and generate informative feedback signals to train the translation models, even if without the involvement of a human labeler. In the dual-learning mechanism, we use one agent to represent the model for the primal task and the other agent to represent the model for the dual task, then ask them to teach each other through a reinforcement learning process. Based on the feedback signals generated during this process (e.g., the language-model likelihood of the output of a model, and the reconstruction error of the original sentence after the primal and dual translations), we can iteratively update the two models until convergence (e.g., using the policy gradient methods). We call the corresponding approach to neural machine translation dual-NMT. Experiments show that dual-NMT works very well on English ↔ French translation; especially, by learning from monolingual data (with 10 bilingual data for warm start), it achieves a comparable accuracy to NMT trained from the full bilingual data for the French-to-English translation task.",
""
]
} |
1906.01354 | 2948147533 | We provide a general framework for characterizing the trade-off between accuracy and robustness in supervised learning. We propose a method and define quantities to characterize the trade-off between accuracy and robustness for a given architecture, and provide theoretical insight into the trade-off. Specifically we introduce a simple trade-off curve, define and study an influence function that captures the sensitivity, under adversarial attack, of the optima of a given loss function. We further show how adversarial training regularizes the parameters in an over-parameterized linear model, recovering the LASSO and ridge regression as special cases, which also allows us to theoretically analyze the behavior of the trade-off curve. In experiments, we demonstrate the corresponding trade-off curves of neural networks and how they vary with respect to factors such as number of layers, neurons, and across different network structures. Such information provides a useful guideline to architecture selection. | There has been some other work exploring the trade-off between accuracy and robustness, which offers great insights. @cite_22 provides experimental evidence that accuracy may be at odds with robustness, and uses a simple example as a proof of concept for a theoretical analysis. In @cite_5 , the authors thoroughly benchmark 18 ImageNet models using multiple robustness metrics; they mainly focus on studying the trade-off experimentally. A closely related work by @cite_0 identifies a trade-off between robustness and accuracy, but its aim is mainly to use the trade-off as a guiding principle in the design of defenses against adversarial examples for a fixed architecture, while the trade-off we identify is mainly for characterizing different architectures. | {
"cite_N": [
"@cite_0",
"@cite_5",
"@cite_22"
],
"mid": [
"2913266441",
"2887603965",
"2964116600"
],
"abstract": [
"We identify a trade-off between robustness and accuracy that serves as a guiding principle in the design of defenses against adversarial examples. Although this problem has been widely studied empirically, much remains unknown concerning the theory underlying this trade-off. In this work, we decompose the prediction error for adversarial examples (robust error) as the sum of the natural (classification) error and boundary error, and provide a differentiable upper bound using the theory of classification-calibrated loss, which is shown to be the tightest possible upper bound uniform over all probability distributions and measurable predictors. Inspired by our theoretical analysis, we also design a new defense method, TRADES, to trade adversarial robustness off against accuracy. Our proposed algorithm performs well experimentally in real-world datasets. The methodology is the foundation of our entry to the NeurIPS 2018 Adversarial Vision Challenge in which we won the 1st place out of 2,000 submissions, surpassing the runner-up approach by @math in terms of mean @math perturbation distance.",
"The prediction accuracy has been the long-lasting and sole standard for comparing the performance of different image classification models, including the ImageNet competition. However, recent studies have highlighted the lack of robustness in well-trained deep neural networks to adversarial examples. Visually imperceptible perturbations to natural images can easily be crafted and mislead the image classifiers towards misclassification. To demystify the trade-offs between robustness and accuracy, in this paper we thoroughly benchmark 18 ImageNet models using multiple robustness metrics, including the distortion, success rate and transferability of adversarial examples between 306 pairs of models. Our extensive experimental results reveal several new insights: (1) linear scaling law - the empirical ( _2 ) and ( _ ) distortion metrics scale linearly with the logarithm of classification error; (2) model architecture is a more critical factor to robustness than model size, and the disclosed accuracy-robustness Pareto frontier can be used as an evaluation criterion for ImageNet model designers; (3) for a similar network architecture, increasing network depth slightly improves robustness in ( _ ) distortion; (4) there exist models (in VGG family) that exhibit high adversarial transferability, while most adversarial examples crafted from one model can only be transferred within the same family. Experiment code is publicly available at https: github.com huanzhang12 Adversarial_Survey.",
""
]
} |
1906.01419 | 2948320427 | The validation of design pattern implementations to identify pattern violations has gained more relevance as part of re-engineering processes in order to preserve, extend, reuse software projects in rapid development environments. If design pattern implementations do not conform to their definitions, they are considered a violation. Software aging and the lack of experience of developers are the origins of design pattern violations. It is important to check the correctness of the design pattern implementations against some predefined characteristics to detect and to correct violations, thus, to reduce costs. Currently, several tools have been developed to detect design pattern instances, but there has been little work done in creating an automated tool to identify and validate design pattern violations. In this paper we propose a Design Pattern Violations Identification and Assessment (DPVIA) tool, which has the ability to identify software design pattern violations and report the conformance score of pattern instance implementations towards a set of predefined characteristics for any design pattern definition whether Gang of Four (GoF) design patterns by [1]; or custom pattern by software developer. Moreover, we have verified the validity of the proposed tool using two evaluation experiments and the results were manually checked. Finally, in order to assess the functionality of the proposed tool, it is evaluated with a data-set containing 5,679,964 Lines of Code among 28,669 in 15 open-source projects, with a large and small size of open-source projects that extensively and systematically employing design patterns, to determine design pattern violations and suggest refactoring solutions, thus keeping costs of software evolution. The results can be used by software architects to develop best practices while using design patterns. 
| As the focus of this work lies on detecting design pattern violations and evaluating them, we reviewed the early work of Izurieta and Bieman @cite_19 on a type of design pattern violation called decay. Decay can affect the design patterns used to structure a system, where classes that participate in design pattern realizations accumulate non-pattern-related code. Izurieta and Bieman investigated the evolution of design pattern implementations to understand how patterns decay, and examined the extent to which software designs actually decay by studying the aging of design patterns in three successful object-oriented systems: the entire code base of JRefactory plus two additional open-source systems, ArgoUML and eXist. The results indicate that the pattern grime (non-pattern-related code) that builds up around design patterns is mostly due to increases in coupling, and that it is the main factor in the decay of software design patterns. | {
"cite_N": [
"@cite_19"
],
"mid": [
"2135036872"
],
"abstract": [
"A common belief is that software designs decay as systems evolve. This research examines the extent to which software designs actually decay by studying the aging of design patterns in successful object oriented systems. Aging of design patterns is measured using various types of decay indices developed for this research. Decay indices track the internal structural changes of a design pattern realization and the code that surrounds the realization. Hypotheses for each kind of decay are tested. We found that the original design pattern functionality remains, and pattern decay is due to the \"grime \", non-pattern code, that grows around the pattern realization."
]
} |
1906.01419 | 2948320427 | The validation of design pattern implementations to identify pattern violations has gained more relevance as part of re-engineering processes in order to preserve, extend, reuse software projects in rapid development environments. If design pattern implementations do not conform to their definitions, they are considered a violation. Software aging and the lack of experience of developers are the origins of design pattern violations. It is important to check the correctness of the design pattern implementations against some predefined characteristics to detect and to correct violations, thus, to reduce costs. Currently, several tools have been developed to detect design pattern instances, but there has been little work done in creating an automated tool to identify and validate design pattern violations. In this paper we propose a Design Pattern Violations Identification and Assessment (DPVIA) tool, which has the ability to identify software design pattern violations and report the conformance score of pattern instance implementations towards a set of predefined characteristics for any design pattern definition whether Gang of Four (GoF) design patterns by [1]; or custom pattern by software developer. Moreover, we have verified the validity of the proposed tool using two evaluation experiments and the results were manually checked. Finally, in order to assess the functionality of the proposed tool, it is evaluated with a data-set containing 5,679,964 Lines of Code among 28,669 in 15 open-source projects, with a large and small size of open-source projects that extensively and systematically employing design patterns, to determine design pattern violations and suggest refactoring solutions, thus keeping costs of software evolution. The results can be used by software architects to develop best practices while using design patterns. 
| Furthermore, Naouel @cite_3 defined a taxonomy of potential design pattern defects and conducted an empirical study to investigate their existence. The authors defined design pattern defects as errors occurring in the design of the software that come from the absence or the bad use of design patterns. The taxonomy includes the following four types of defects: a design pattern that does not fully conform to the GoF @cite_7 definition but is not erroneous; a distorted form of a design motif that is harmful to the quality of the code; a design that is missing a needed design pattern (according to GoF @cite_7 , missing patterns generate poor design); and the overuse of design patterns in a software design. Later on, Izurieta cooperated with other researchers to obtain a better understanding of pattern decay. Afterwards, Dale and Izurieta @cite_1 proposed a study on the impacts of design pattern decay on project quality. | {
"cite_N": [
"@cite_1",
"@cite_7",
"@cite_3"
],
"mid": [
"1975335185",
"1649645444",
""
],
"abstract": [
"Context Software systems need to be of high enough quality to enable growth and stability. Goal The purpose of this research is to study the effects of code changes that violate a design pattern's intended role on the quality of a project. Method To investigate this problem, we have developed a grime injector to model grime growth, a form of design pattern decay, on Java projects. We use SonarQube's technical debt software to compare the technical debt scores of six different types of modular grime. These six types can be classified along three major dimensions: strength, scope, and direction. Results We find that the strength dimension is the most important contributor to the quality of a design and that temporary grime results in higher technical debt scores than persistent grime. Conclusion This knowledge helps with design decisions that help manage a project's technical debt.",
"The book is an introduction to the idea of design patterns in software engineering, and a catalog of twenty-three common patterns. The nice thing is, most experienced OOP designers will find out they've known about patterns all along. It's just that they've never considered them as such, or tried to centralize the idea behind a given pattern so that it will be easily reusable.",
""
]
} |
1906.01399 | 2949768491 | For human pose estimation in still images, this paper proposes three semi- and weakly-supervised learning schemes. While recent advances of convolutional neural networks improve human pose estimation using supervised training data, our focus is to explore the semi- and weakly-supervised schemes. Our proposed schemes initially learn conventional model(s) for pose estimation from a small amount of standard training images with human pose annotations. For the first semi-supervised learning scheme, this conventional pose model detects candidate poses in training images with no human annotation. From these candidate poses, only true-positives are selected by a classifier using a pose feature representing the configuration of all body parts. The accuracies of these candidate pose estimation and true-positive pose selection are improved by action labels provided to these images in our second and third learning schemes, which are semi- and weakly-supervised learning. While the first and second learning schemes select only poses that are similar to those in the supervised training data, the third scheme selects more true-positive poses that are significantly different from any supervised poses. This pose selection is achieved by pose clustering using outlier pose detection with Dirichlet process mixtures and the Bayes factor. The proposed schemes are validated with large-scale human pose datasets. | A number of methods for human pose estimation employed (1) deformable part models (e.g., pictorial structure models @cite_48 ) for globally-optimizing an articulated human body and (2) discriminative learning for optimizing the parameters of the models @cite_68 . In general, part connectivity in a deformable part model is defined by image-independent quadratic functions for efficient optimization via distance transform. Image-dependent functions (e.g., @cite_60 @cite_46 ) disable distance transform but improve pose estimation accuracy. 
In @cite_11 , on the other hand, image-dependent but quadratic functions enable distance transform for representing the relative positions between neighboring parts. | {
"cite_N": [
"@cite_60",
"@cite_48",
"@cite_46",
"@cite_68",
"@cite_11"
],
"mid": [
"1540144755",
"2030536784",
"2068533011",
"2168356304",
"2155394491"
],
"abstract": [
"We address the problem of articulated human pose estimation by learning a coarse-to-fine cascade of pictorial structure models. While the fine-level state-space of poses of individual parts is too large to permit the use of rich appearance models, most possibilities can be ruled out by efficient structured models at a coarser scale. We propose to learn a sequence of structured models at different pose resolutions, where coarse models filter the pose space for the next level via their max-marginals. The cascade is trained to prune as much as possible while preserving true poses for the final level pictorial structure model. The final level uses much more expensive segmentation, contour and shape features in the model for the remaining filtered set of candidates. We evaluate our framework on the challenging Buffy and PASCAL human pose datasets, improving the state-of-the-art.",
"In this paper we present a computationally efficient framework for part-based modeling and recognition of objects. Our work is motivated by the pictorial structure models introduced by Fischler and Elschlager. The basic idea is to represent an object by a collection of parts arranged in a deformable configuration. The appearance of each part is modeled separately, and the deformable configuration is represented by spring-like connections between pairs of parts. These models allow for qualitative descriptions of visual appearance, and are suitable for generic recognition problems. We address the problem of using pictorial structure models to find instances of an object in an image as well as the problem of learning an object model from training examples, presenting efficient algorithms in both cases. We demonstrate the techniques by learning models that represent faces and human bodies and using the resulting models to locate the corresponding objects in novel images.",
"This paper proposes contour-based features for articulated pose estimation. Most of recent methods are designed using tree-structured models with appearance evaluation only within the region of each part. While these models allow us to speed up global optimization in localizing the whole parts, useful appearance cues between neighboring parts are missing. Our work focuses on how to evaluate parts connectivity using contour cues. Unlike previous works, we locally evaluate parts connectivity only along the orientation between neighboring parts within where they overlap. This adaptive localization of the features is required for suppressing bad effects due to nuisance edges such as those of background clutter and clothing textures, as well as for reducing computational cost. Discriminative training of the contour features improves estimation accuracy more. Experimental results verify the effectiveness of our contour-based features.",
"We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI--SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function.",
"We present a method for estimating articulated human pose from a single static image based on a graphical model with novel pairwise relations that make adaptive use of local image measurements. More precisely, we specify a graphical model for human pose which exploits the fact the local image measurements can be used both to detect parts (or joints) and also to predict the spatial relationships between them (Image Dependent Pairwise Relations). These spatial relationships are represented by a mixture model. We use Deep Convolutional Neural Networks (DCNNs) to learn conditional probabilities for the presence of parts and their spatial relationships within image patches. Hence our model combines the representational flexibility of graphical models with the efficiency and statistical power of DCNNs. Our method significantly outperforms the state of the art methods on the LSP and FLIC datasets and also performs very well on the Buffy dataset without any training."
]
} |
1906.01399 | 2949768491 | For human pose estimation in still images, this paper proposes three semi- and weakly-supervised learning schemes. While recent advances of convolutional neural networks improve human pose estimation using supervised training data, our focus is to explore the semi- and weakly-supervised schemes. Our proposed schemes initially learn conventional model(s) for pose estimation from a small amount of standard training images with human pose annotations. For the first semi-supervised learning scheme, this conventional pose model detects candidate poses in training images with no human annotation. From these candidate poses, only true-positives are selected by a classifier using a pose feature representing the configuration of all body parts. The accuracies of these candidate pose estimation and true-positive pose selection are improved by action labels provided to these images in our second and third learning schemes, which are semi- and weakly-supervised learning. While the first and second learning schemes select only poses that are similar to those in the supervised training data, the third scheme selects more true-positive poses that are significantly different from any supervised poses. This pose selection is achieved by pose clustering using outlier pose detection with Dirichlet process mixtures and the Bayes factor. The proposed schemes are validated with large-scale human pose datasets. | While the aforementioned advances demonstrably improve pose estimation, all of them require human pose annotations (i.e., skeletons annotated on an image) for supervised learning. The complexity of time-consuming pose annotation work leads to annotation errors in crowdsourcing, as described in @cite_40 . To reduce the time-consuming annotation required by supervised learning, semi- and weakly-supervised learning are widely used. | {
"cite_N": [
"@cite_40"
],
"mid": [
"2103015390"
],
"abstract": [
"The task of 2-D articulated human pose estimation in natural images is extremely challenging due to the high level of variation in human appearance. These variations arise from different clothing, anatomy, imaging conditions and the large number of poses it is possible for a human body to take. Recent work has shown state-of-the-art results by partitioning the pose space and using strong nonlinear classifiers such that the pose dependence and multi-modal nature of body part appearance can be captured. We propose to extend these methods to handle much larger quantities of training data, an order of magnitude larger than current datasets, and show how to utilize Amazon Mechanical Turk and a latent annotation update scheme to achieve high quality annotations at low cost. We demonstrate a significant increase in pose estimation accuracy, while simultaneously reducing computational expense by a factor of 10, and contribute a dataset of 10,000 highly articulated poses."
]
} |
1906.01399 | 2949768491 | For human pose estimation in still images, this paper proposes three semi- and weakly-supervised learning schemes. While recent advances of convolutional neural networks improve human pose estimation using supervised training data, our focus is to explore the semi- and weakly-supervised schemes. Our proposed schemes initially learn conventional model(s) for pose estimation from a small amount of standard training images with human pose annotations. For the first semi-supervised learning scheme, this conventional pose model detects candidate poses in training images with no human annotation. From these candidate poses, only true-positives are selected by a classifier using a pose feature representing the configuration of all body parts. The accuracies of these candidate pose estimation and true-positive pose selection are improved by action labels provided to these images in our second and third learning schemes, which are semi- and weakly-supervised learning. While the first and second learning schemes select only poses that are similar to those in the supervised training data, the third scheme selects more true-positive poses that are significantly different from any supervised poses. This pose selection is achieved by pose clustering using outlier pose detection with Dirichlet process mixtures and the Bayes factor. The proposed schemes are validated with large-scale human pose datasets. | Semi-supervised learning allows us to utilize a huge number of non-annotated images for various recognition problems (e.g., human action recognition @cite_36 , human re-identification @cite_10 , and face and gait recognition @cite_21 ). In general, semi-supervised learning annotates the images automatically by employing several cues associated with the images; for example, temporal consistency in tracking @cite_43 , clustering @cite_4 , multimodal keywords @cite_59 , and domain adaptation @cite_20 . | {
"cite_N": [
"@cite_4",
"@cite_36",
"@cite_21",
"@cite_43",
"@cite_59",
"@cite_10",
"@cite_20"
],
"mid": [
"2068935349",
"2018870022",
"2125095336",
"2171640106",
"1981613567",
"1999478721",
"2141129816"
],
"abstract": [
"We present an image set classification algorithm based on unsupervised clustering of labeled training and unlabeled test data where labels are only used in the stopping criterion. The probability distribution of each class over the set of clusters is used to define a true set based similarity measure. To this end, we propose an iterative sparse spectral clustering algorithm. In each iteration, a proximity matrix is efficiently recomputed to better represent the local subspace structure. Initial clusters capture the global data structure and finer clusters at the later stages capture the subtle class differences not visible at the global scale. Image sets are compactly represented with multiple Grassmannian manifolds which are subsequently embedded in Euclidean space with the proposed spectral clustering algorithm. We also propose an efficient eigenvector solver which not only reduces the computational cost of spectral clustering by many folds but also improves the clustering quality and final classification results. Experiments on five standard datasets and comparison with seven existing techniques show the efficacy of our algorithm.",
"Graph-based methods are a useful class of methods for improving the performance of unsupervised and semi-supervised machine learning tasks, such as clustering or information retrieval. However, the performance of existing graph-based methods is highly dependent on how well the affinity graph reflects the original data structure. We propose that multimedia such as images or videos consist of multiple separate components, and therefore more than one graph is required to fully capture the relationship between them. Accordingly, we present a new spectral method - the Feature Grouped Spectral Multigraph (FGSM) - which comprises the following steps. First, mutually independent subsets of the original feature space are generated through feature clustering. Secondly, a separate graph is generated from each feature subset. Finally, a spectral embedding is calculated on each graph, and the embeddings are scaled and aggregated into a single representation. Using this representation, a variety of experiments are performed on three learning tasks - clustering, retrieval and recognition - on human action datasets, demonstrating considerably better performance than the state-of-the-art.",
"We propose a new semisupervised learning algorithm, referred to as patch distribution compatible semisupervised dimension reduction, for face and human gait recognition. Each image (a face image or an average human silhouette image) is first represented as a set of local patch features and it is further characterized as the corresponding patch distribution feature, which can be expressed as an image-specific Gaussian mixture model (GMM) adapted from the universal background model. Assuming that the individual components of the image-specific GMMs from all the training images reside on a submanifold, we assign a component-level prediction label matrix to each individual GMM component and introduce a new regularizer based on a set of local submanifold smoothness assumptions in our objective function. We also constrain each component-level prediction label matrix to be consistent with the image-level prediction label matrix, as well as enforce it to be close to the given labels for the labeled samples. We further use a linear regression function to provide embeddings for the training data and the unseen test data. Inspired by the recent work flexible manifold embedding, we additionally integrate the regression residue in our objective function to measure the mismatch between them, such that our method can better cope with the data sampled from a nonlinear manifold. Finally, the optimal solutions of the component-level prediction label matrix, the image-level prediction label matrix, the projection matrix, and the bias term b can be simultaneously obtained. Comprehensive experiments on three benchmark face databases CMU PIE, FERET, and AR as well as the USF HumanID gait database clearly demonstrate the effectiveness of our algorithm over other state-of-the-art semisupervised dimension reduction methods.",
"Most feature selection methods for object tracking assume that the labeled samples obtained in the next frames follow a similar distribution to the samples in the previous frame. However, this assumption is not true in some scenarios. As a result, the selected features are not suitable for tracking and the “drift” problem happens. In this paper, we consider data's distribution in tracking from a new perspective. We classify the samples into three categories: auxiliary samples (samples in the previous frames), target samples (collected in the current frame) and unlabeled samples (obtained in the next frame). To make the best use of them for tracking, we propose a novel semi-supervised transfer learning approach. Specifically, we assume only target samples follow the same distribution as the unlabeled samples and develop a novel semi-supervised CovBoost method. It could utilize auxiliary samples and unlabeled samples effectively when training the best strong classifier for tracking. Furthermore, we develop a new online updating algorithm for semi-supervised CovBoost, making our tracker successfully handle significant variations of the tracked target and background. We demonstrate the excellent performance of the proposed tracker on several challenging test videos.",
"In image categorization the goal is to decide if an image belongs to a certain category or not. A binary classifier can be learned from manually labeled images; while using more labeled examples improves performance, obtaining the image labels is a time consuming process. We are interested in how other sources of information can aid the learning process given a fixed amount of labeled images. In particular, we consider a scenario where keywords are associated with the training images, e.g. as found on photo sharing websites. The goal is to learn a classifier for images alone, but we will use the keywords associated with labeled and unlabeled images to improve the classifier using semi-supervised learning. We first learn a strong Multiple Kernel Learning (MKL) classifier using both the image content and keywords, and use it to score unlabeled images. We then learn classifiers on visual features only, either support vector machines (SVM) or least-squares regression (LSR), from the MKL output values on both the labeled and unlabeled images. In our experiments on 20 classes from the PASCAL VOC'07 set and 38 from the MIR Flickr set, we demonstrate the benefit of our semi-supervised approach over only using the labeled images. We also present results for a scenario where we do not use any manual labeling but directly learn classifiers from the image tags. The semi-supervised approach also improves classification accuracy in this case.",
"The desirability of being able to search for specific persons in surveillance videos captured by different cameras has increasingly motivated interest in the problem of person re-identification, which is a critical yet under-addressed challenge in multi-camera tracking systems. The main difficulty of person re-identification arises from the variations in human appearances from different camera views. In this paper, to bridge the human appearance variations across cameras, two coupled dictionaries that relate to the gallery and probe cameras are jointly learned in the training phase from both labeled and unlabeled images. The labeled training images carry the relationship between features from different cameras, and the abundant unlabeled training images are introduced to exploit the geometry of the marginal distribution for obtaining robust sparse representation. In the testing phase, the feature of each target image from the probe camera is first encoded by the sparse representation and then recovered in the feature space spanned by the images from the gallery camera. The features of the same person from different cameras are similar following the above transformation. Experimental results on publicly available datasets demonstrate the superiority of our method.",
"Many classifiers are trained with massive training sets only to be applied at test time on data from a different distribution. How can we rapidly and simply adapt a classifier to a new test distribution, even when we do not have access to the original training data? We present an on-line approach for rapidly adapting a “black box” classifier to a new test data set without retraining the classifier or examining the original optimization criterion. Assuming the original classifier outputs a continuous number for which a threshold gives the class, we reclassify points near the original boundary using a Gaussian process regression scheme. We show how this general procedure can be used in the context of a classifier cascade, demonstrating performance that far exceeds state-of-the-art results in face detection on a standard data set. We also draw connections to work in semi-supervised learning, domain adaptation, and information regularization."
]
} |
1906.01399 | 2949768491 | For human pose estimation in still images, this paper proposes three semi- and weakly-supervised learning schemes. While recent advances of convolutional neural networks improve human pose estimation using supervised training data, our focus is to explore the semi- and weakly-supervised schemes. Our proposed schemes initially learn conventional model(s) for pose estimation from a small amount of standard training images with human pose annotations. For the first semi-supervised learning scheme, this conventional pose model detects candidate poses in training images with no human annotation. From these candidate poses, only true-positives are selected by a classifier using a pose feature representing the configuration of all body parts. The accuracies of these candidate pose estimation and true-positive pose selection are improved by action labels provided to these images in our second and third learning schemes, which are semi- and weakly-supervised learning. While the first and second learning schemes select only poses that are similar to those in the supervised training data, the third scheme selects more true-positive poses that are significantly different from any supervised poses. This pose selection is achieved by pose clustering using outlier pose detection with Dirichlet process mixtures and the Bayes factor. The proposed schemes are validated with large-scale human pose datasets. | Several semi-supervised learning methods have also been proposed for human pose estimation. However, these methods are designed for limited, simpler problems. For example, in @cite_19 @cite_32 , 3D pose models representing a limited variation of human pose sequences (e.g., only walking sequences) are trained by semi-supervised learning; in @cite_19 and @cite_32 , GMM-based clustering and manifold regularization are employed for learning unlabeled data, respectively. 
For semi-supervised learning, not only a small number of annotated images but also a huge amount of synthetic images (e.g., CG images with automatic pose annotations) are useful with transductive learning @cite_30 . | {
"cite_N": [
"@cite_30",
"@cite_19",
"@cite_32"
],
"mid": [
"2110619642",
"2143002705",
"2134717813"
],
"abstract": [
"This paper presents the first semi-supervised transductive algorithm for real-time articulated hand pose estimation. Noisy data and occlusions are the major challenges of articulated hand pose estimation. In addition, the discrepancies among realistic and synthetic pose data undermine the performances of existing approaches that use synthetic data extensively in training. We therefore propose the Semi-supervised Transductive Regression (STR) forest which learns the relationship between a small, sparsely labelled realistic dataset and a large synthetic dataset. We also design a novel data-driven, pseudo-kinematic technique to refine noisy or occluded joints. Our contributions include: (i) capturing the benefits of both realistic and synthetic data via transductive learning, (ii) showing accuracies can be improved by considering unlabelled data, and (iii) introducing a pseudo-kinematic technique to refine articulations efficiently. Experimental results show not only the promising performance of our method with respect to noise and occlusions, but also its superiority over state-of-the-arts in accuracy, robustness and speed.",
"Learning regression models (for example for body pose estimation, or BPE) currently requires large numbers of training examples—pairs of the form (image, pose parameters). These examples are difficult to obtain for many problems, demanding considerable effort in manual labelling. However it is easy to obtain unlabelled examples—in BPE, simply by collecting many images, and by sampling many poses using motion capture. We show how the use of unlabelled examples can improve the performance of such estimators, making better use of the difficult-to-obtain training examples. Because the distribution of parameters conditioned on a given image is often multimodal, conventional regression models must be extended to allow for multiple modes. Such extensions have to date had a pre-set number of modes, independent of the contents of the input image, and amount to fitting several regressors simultaneously. Our framework models instead the joint distribution of images and poses, so the conditional estimates are inherently multimodal, and the number of modes is a function of the joint-space complexity, rather than of the maximum number of output modes. We demonstrate the improvements obtainable by using unlabelled samples on synthetic examples and on a real pose estimation problem, and demonstrate in both cases the additional accuracy provided by the use of unlabelled data.",
"Recent research in visual inference from monocular images has shown that discriminatively trained image-based predictors can provide fast, automatic qualitative 3D reconstructions of human body pose or scene structure in real-world environments. However, the stability of existing image representations tends to be perturbed by deformations and misalignments in the training set, which, in turn, degrade the quality of learning and generalization. In this paper we advocate the semi-supervised learning of hierarchical image descriptions in order to better tolerate variability at multiple levels of detail. We combine multilevel encodings with improved stability to geometric transformations, with metric learning and semi-supervised manifold regularization methods in order to further profile them for task-invariance: resistance to background clutter and to differences within the same human pose class. We quantitatively analyze the effectiveness of both descriptors and learning methods and show that each one can contribute, sometimes substantially, to more reliable 3D human pose estimates in cluttered images."
]
} |
1906.01399 | 2949768491 | For human pose estimation in still images, this paper proposes three semi- and weakly-supervised learning schemes. While recent advances of convolutional neural networks improve human pose estimation using supervised training data, our focus is to explore the semi- and weakly-supervised schemes. Our proposed schemes initially learn conventional model(s) for pose estimation from a small amount of standard training images with human pose annotations. For the first semi-supervised learning scheme, this conventional pose model detects candidate poses in training images with no human annotation. From these candidate poses, only true-positives are selected by a classifier using a pose feature representing the configuration of all body parts. The accuracies of these candidate pose estimation and true-positive pose selection are improved by action labels provided to these images in our second and third learning schemes, which are semi- and weakly-supervised learning. While the first and second learning schemes select only poses that are similar to those in the supervised training data, the third scheme selects more true-positive poses that are significantly different from any supervised poses. This pose selection is achieved by pose clustering using outlier pose detection with Dirichlet process mixtures and the Bayes factor. The proposed schemes are validated with large-scale human pose datasets. | In weakly-supervised learning, only a subset of the full annotations is given manually. In particular, annotations that are easy to provide are given. For human activities, full annotations may include the pose, region, and attributes (e.g., ID, action class) of each person. Since it is easier to provide the attributes than the pose and region, such attributes are often given as weak annotations. For example, only an action label is given to each training sequence where the regions of a person (i.e.
windows enclosing a human body) in frames are found automatically in @cite_56 . Instead of the manually-given action label, scripts are employed as weak annotations in order to find correct action labels of several clips in video sequences in @cite_70 ; action clips are temporally localized. Not only in videos but also in still images, weak annotations can provide highly-contextual information. In @cite_33 , given an action label, a human window is spatially localized with an object used for this action. For human pose estimation, Boolean geometric relationships between body parts are used as weak labels in @cite_39 . | {
"cite_N": [
"@cite_70",
"@cite_39",
"@cite_33",
"@cite_56"
],
"mid": [
"2535977253",
"1983154976",
"2129947832",
"76527192"
],
"abstract": [
"This paper addresses the problem of automatic temporal annotation of realistic human actions in video using minimal manual supervision. To this end we consider two associated problems: (a) weakly-supervised learning of action models from readily available annotations, and (b) temporal localization of human actions in test videos. To avoid the prohibitive cost of manual annotation for training, we use movie scripts as a means of weak supervision. Scripts, however, provide only implicit, noisy, and imprecise information about the type and location of actions in video. We address this problem with a kernel-based discriminative clustering algorithm that locates actions in the weakly-labeled training data. Using the obtained action samples, we train temporal action detectors and apply them to locate actions in the raw video data. Our experiments demonstrate that the proposed method for weakly-supervised learning of action models leads to significant improvement in action detection. We present detection results for three action classes in four feature length movies with challenging and realistic video data.",
"We advocate the inference of qualitative information about 3D human pose, called posebits, from images. Posebits represent boolean geometric relationships between body parts (e.g., left-leg in front of right-leg or hands close to each other). The advantages of posebits as a mid-level representation are 1) for many tasks of interest, such qualitative pose information may be sufficient (e.g., semantic image retrieval), 2) it is relatively easy to annotate large image corpora with posebits, as it simply requires answers to yes/no questions, and 3) they help resolve challenging pose ambiguities and therefore facilitate the difficult task of image-based 3D pose estimation. We introduce posebits, a posebit database, a method for selecting useful posebits for pose estimation and a structural SVM model for posebit inference. Experiments show the use of posebits for semantic image retrieval and for improving 3D pose estimation.",
"We introduce a weakly supervised approach for learning human actions modeled as interactions between humans and objects. Our approach is human-centric: We first localize a human in the image and then determine the object relevant for the action and its spatial relation with the human. The model is learned automatically from a set of still images annotated only with the action label. Our approach relies on a human detector to initialize the model learning. For robustness to various degrees of visibility, we build a detector that learns to combine a set of existing part detectors. Starting from humans detected in a set of images depicting the action, our approach determines the action object and its spatial relation to the human. Its final output is a probabilistic model of the human-object interaction, i.e., the spatial relation between the human and the object. We present an extensive experimental evaluation on the sports action data set from [1], the PASCAL Action 2010 data set [2], and a new human-object interaction data set.",
"We present a novel algorithm for weakly supervised action classification in videos. We assume we are given training videos annotated only with action class labels. We learn a model that can classify unseen test videos, as well as localize a region of interest in the video that captures the discriminative essence of the action class. A novel Similarity Constrained Latent Support Vector Machine model is developed to operationalize this goal. This model specifies that videos should be classified correctly, and that the latent regions of interest chosen should be coherent over videos of an action class. The resulting learning problem is challenging, and we show how dual decomposition can be employed to render it tractable. Experimental results demonstrate the efficacy of the method."
]
} |
1906.01399 | 2949768491 | For human pose estimation in still images, this paper proposes three semi- and weakly-supervised learning schemes. While recent advances of convolutional neural networks improve human pose estimation using supervised training data, our focus is to explore the semi- and weakly-supervised schemes. Our proposed schemes initially learn conventional model(s) for pose estimation from a small amount of standard training images with human pose annotations. For the first semi-supervised learning scheme, this conventional pose model detects candidate poses in training images with no human annotation. From these candidate poses, only true-positives are selected by a classifier using a pose feature representing the configuration of all body parts. The accuracies of these candidate pose estimation and true-positive pose selection are improved by action labels provided to these images in our second and third learning schemes, which are semi- and weakly-supervised learning. While the first and second learning schemes select only poses that are similar to those in the supervised training data, the third scheme selects more true-positive poses that are significantly different from any supervised poses. This pose selection is achieved by pose clustering using outlier pose detection with Dirichlet process mixtures and the Bayes factor. The proposed schemes are validated with large-scale human pose datasets. | Whereas pose estimation using only action labels is more difficult than human window localization described above, it has been demonstrated that the action-specific property of a human pose is useful for pose estimation (e.g. latent modeling of dynamics @cite_71 @cite_52 , switching dynamical models in videos @cite_37 , efficient particle distribution in multiple pose models in videos @cite_35 @cite_73 , and pose model selection in still images @cite_25 ). | {
"cite_N": [
"@cite_35",
"@cite_37",
"@cite_52",
"@cite_71",
"@cite_73",
"@cite_25"
],
"mid": [
"1911783559",
"2096044283",
"2042919965",
"2548924727",
"2044159364",
"2025031526"
],
"abstract": [
"3D human pose estimation in multi-view settings benefits from embeddings of human actions in low-dimensional manifolds, but the complexity of the embeddings increases with the number of actions. Creating separate, action-specific manifolds seems to be a more practical solution. Using multiple manifolds for pose estimation, however, requires a joint optimization over the set of manifolds and the human pose embedded in the manifolds. In order to solve this problem, we propose a particle-based optimization algorithm that can efficiently estimate human pose even in challenging in-house scenarios. In addition, the algorithm can directly integrate the results of a 2D action recognition system as prior distribution for optimization. In our experiments, we demonstrate that the optimization handles an 84D search space and provides already competitive results on HumanEva with as few as 25 particles.",
"Traditional dynamical systems used for motion tracking cannot effectively handle high dimensionality of the motion states and composite dynamics. In this paper, to address both issues simultaneously, we propose the marriage of the switching dynamical system and recent Gaussian Process Dynamic Models (GPDM), yielding a new model called the switching GPDM (SGPDM). The proposed switching variables enable the SGPDM to capture diverse motion dynamics effectively, and also allow us to identify the motion class (e.g. walk or run in the human motion tracking, smile or angry in the facial motion tracking), which naturally leads to the idea of simultaneous motion tracking and classification. Moreover, each of the GPDMs in SGPDM can faithfully model its corresponding primitive motion, while performing tracking in the low-dimensional latent space, therefore significantly improving the tracking efficiency. The proposed SGPDM is then applied to human body motion tracking and classification, and facial motion tracking and recognition. We demonstrate the performance of our model on several composite body motion videos obtained from the CMU database, including exercises and salsa dance. We also demonstrate the robustness of our model in terms of both facial feature tracking and facial expression pose recognition performance on real videos under diverse scenarios including pose change, low frame rate and low quality videos.",
"We propose a unified model for human motion prior with multiple actions. Our model is generated from sample pose sequences of the multiple actions, each of which is recorded from real human motion. The sample sequences are connected to each other by synthesizing a variety of possible transitions among the different actions. For kinematically-realistic transitions, our model integrates nonlinear probabilistic latent modeling of the samples and interpolation-based synthesis of the transition paths. While naive interpolation makes unexpected poses, our model rejects them (1) by searching for smooth and short transition paths by employing the good properties of the observation and latent spaces and (2) by avoiding using samples that unexpectedly synthesize the nonsmooth interpolation. The effectiveness of the model is demonstrated with real data and its application to human pose tracking.",
"We propose a method for estimating the pose of a human body using its approximate 3D volume (visual hull) obtained in real time from synchronized videos. Our method can cope with loose-fitting clothing, which hides the human body and produces non-rigid motions and critical reconstruction errors, as well as tight-fitting clothing. To follow the shape variations robustly against erratic motions and the ambiguity between a reconstructed body shape and its pose, the probabilistic dynamical model of human volumes is learned from training temporal volumes refined by error correction. The dynamical model of a body pose (joint angles) is also learned with its corresponding volume. By comparing the volume model with an input visual hull and regressing its pose from the pose model, pose estimation can be realized. In our method, this is improved by double volume comparison: 1) comparison in a low-dimensional latent space with probabilistic volume models and 2) comparison in an observation volume space using geometric constrains between a real volume and a visual hull. Comparative experiments demonstrate the effectiveness of our method faster than existing methods.",
"This paper proposes human motion models of multiple actions for 3D pose tracking. A training pose sequence of each action, such as walking and jogging, is separately recorded by a motion capture system and modeled independently. This independent modeling of action-specific motions allows us 1) to optimize each model in accordance with only its respective motion and 2) to improve the scalability of the models. Unlike existing approaches with similar motion models (e.g. switching dynamical models), our pose tracking method uses the multiple models simultaneously for coping with ambiguous motions. For robust tracking with the multiple models, particle filtering is employed so that particles are distributed simultaneously in the models. Efficient use of the particles can be achieved by locating many particles in the model corresponding to an action that is currently observed. For transferring the particles among the models in quick response to changes in the action, transition paths are synthesized between the different models in order to virtually prepare inter-action motions. Experimental results demonstrate that the proposed models improve accuracy in pose tracking.",
"This paper proposes an iterative scheme between human action classification and pose estimation in still images. For initial action classification, we employ global image features that represent a scene (e.g. people, background, and other objects), which can be extracted without any difficult human-region segmentation such as pose estimation. This classification gives us the probability estimates of possible actions in a query image. The probability estimates are used to evaluate the results of pose estimation using action-specific models. The estimated pose is then merged with the global features for action re-classification. This iterative scheme can mutually improve action classification and pose estimation. Experimental results with a public dataset demonstrate the effectiveness of global features for initialization, action-specific models for pose estimation, and action classification with global and pose features."
]
} |
1906.01356 | 2948862381 | We consider a setting where a stream of qubits is processed sequentially. We derive fundamental limits on the rate at which classical information can be transmitted using qubits that decohere as they wait to be processed. Specifically, we model the sequential processing of qubits using a single server queue, and derive expressions for the classical capacity of such a quantum 'queue-channel.' Focusing on quantum erasures, we obtain an explicit single-letter capacity formula in terms of the stationary waiting time of qubits in the queue. Our capacity proof also implies that a 'classical' coding/decoding strategy is optimal, i.e., an encoder which uses only orthogonal product states, and a decoder which measures in a fixed product basis, are sufficient to achieve the classical capacity of the quantum erasure queue-channel. More broadly, our work begins to quantitatively address the impact of decoherence on the performance limits of quantum information processing systems. | As an aside, we remark that the erasure queue-channel treated in @cite_8 can be used to model a multimedia-streaming scenario, where information packets become useless (erased) after a certain time. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2964091750"
],
"abstract": [
"We consider a setting where qubits are processed sequentially, and derive fundamental limits on the rate at which classical information can be transmitted using quantum states that decohere in time. Specifically, we model the sequential processing of qubits using a single server queue, and derive explicit expressions for the capacity of such a ‘queue-channel.’ We also demonstrate a sweet-spot phenomenon with respect to the arrival rate to the queue, i.e., we show that there exists a value of the arrival rate of the qubits at which the rate of information transmission (in bits sec) through the queue-channel is maximised. Next, we consider a setting where the average rate of processing qubits is fixed, and show that the capacity of the queue-channel is maximised when the processing time is deterministic. We also discuss design implications of these results on quantum information processing systems."
]
} |
1906.01463 | 2948952554 | Common test generators fall into two categories. Generating test inputs at the unit level is fast, but can lead to false alarms when a function is called with inputs that would not occur in a system context. If a generated input at the system level causes a failure, this is a true alarm, as the input could also have come from the user or a third party; but system testing is much slower. In this paper, we introduce the concept of a test generation bridge, which joins the accuracy of system testing with the speed of unit testing. A Test Generation Bridge allows to combine an arbitrary system test generator with an arbitrary unit test generator. It does so by carving parameterized unit tests from system (test) executions. These unit tests run in a context recorded from the system test, but individual parameters are left free for the unit test generator to systematically explore. This allows symbolic test generators such as KLEE to operate on individual functions in the recorded system context. If the test generator detects a failure, we lift the failure-inducing parameter back to the system input; if the failure can be reproduced at the system level, it is reported as a true alarm. Our BASILISK prototype can extract and test units out of complex systems such as a Web/Python/SQLite/C stack; in its evaluation, it achieves a higher coverage than a state-of-the-art system test generator. | The idea of is an old one: To test a program @math , a producer @math will generate inputs for @math with the intent to cause it to fail. To find bugs, a producer need not be very sophisticated; as shown in the famous "fuzzing" paper of 1989, simple random strings can quickly crash programs @cite_21 . | {
"cite_N": [
"@cite_21"
],
"mid": [
"2002934700"
],
"abstract": [
"The following section describes the tools we built to test the utilities. These tools include the fuzz (random character) generator, ptyjig (to test interactive utilities), and scripts to automate the testing process. Next, we will describe the tests we performed, giving the types of input we presented to the utilities. Results from the tests will follow along with an analysis of the results, including identification and classification of the program bugs that caused the crashes. The final section presents concluding remarks, including suggestions for avoiding the types of problems detected by our study and some commentary on the bugs we found. We include an Appendix with the user manual pages for fuzz and ptyjig."
]
} |
1906.01463 | 2948952554 | Common test generators fall into two categories. Generating test inputs at the unit level is fast, but can lead to false alarms when a function is called with inputs that would not occur in a system context. If a generated input at the system level causes a failure, this is a true alarm, as the input could also have come from the user or a third party; but system testing is much slower. In this paper, we introduce the concept of a test generation bridge, which joins the accuracy of system testing with the speed of unit testing. A Test Generation Bridge allows to combine an arbitrary system test generator with an arbitrary unit test generator. It does so by carving parameterized unit tests from system (test) executions. These unit tests run in a context recorded from the system test, but individual parameters are left free for the unit test generator to systematically explore. This allows symbolic test generators such as KLEE to operate on individual functions in the recorded system context. If the test generator detects a failure, we lift the failure-inducing parameter back to the system input; if the failure can be reproduced at the system level, it is reported as a true alarm. Our BASILISK prototype can extract and test units out of complex systems such as a Web/Python/SQLite/C stack; in its evaluation, it achieves a higher coverage than a state-of-the-art system test generator. | To get deeper than scanning and parsing routines, though, one requires syntactically correct inputs. To this end, one can use formal specifications of the input language to generate inputs---for instance, leveraging as producers @cite_23 . The test generator @cite_22 uses its grammar for existing inputs as well and can thus combine existing with newly generated input fragments. | {
"cite_N": [
"@cite_22",
"@cite_23"
],
"mid": [
"1531203382",
"1983878972"
],
"abstract": [
"Fuzz testing is an automated technique providing random data as input to a software system in the hope to expose a vulnerability. In order to be effective, the fuzzed input must be common enough to pass elementary consistency checks; a JavaScript interpreter, for instance, would only accept a semantically valid program. On the other hand, the fuzzed input must be uncommon enough to trigger exceptional behavior, such as a crash of the interpreter. The LangFuzz approach resolves this conflict by using a grammar to randomly generate valid programs; the code fragments, however, partially stem from programs known to have caused invalid behavior before. LangFuzz is an effective tool for security testing: Applied on the Mozilla JavaScript interpreter, it discovered a total of 105 new severe vulnerabilities within three months of operation (and thus became one of the top security bug bounty collectors within this period); applied on the PHP interpreter, it discovered 18 new defects causing crashes.",
"A fast algorithm is given to produce a small set of short sentences from a context free grammar such that each production of the grammar is used at least once. The sentences are useful for testing parsing programs and for debugging grammars (finding errors in a grammar which causes it to specify some language other than the one intended). Some experimental results from using the sentences to test some automatically generated simpleLR(1) parsers are also given."
]
} |
1906.01463 | 2948952554 | Common test generators fall into two categories. Generating test inputs at the unit level is fast, but can lead to false alarms when a function is called with inputs that would not occur in a system context. If a generated input at the system level causes a failure, this is a true alarm, as the input could also have come from the user or a third party; but system testing is much slower. In this paper, we introduce the concept of a test generation bridge, which joins the accuracy of system testing with the speed of unit testing. A Test Generation Bridge allows to combine an arbitrary system test generator with an arbitrary unit test generator. It does so by carving parameterized unit tests from system (test) executions. These unit tests run in a context recorded from the system test, but individual parameters are left free for the unit test generator to systematically explore. This allows symbolic test generators such as KLEE to operate on individual functions in the recorded system context. If the test generator detects a failure, we lift the failure-inducing parameter back to the system input; if the failure can be reproduced at the system level, it is reported as a true alarm. Our BASILISK prototype can extract and test units out of complex systems such as a Web/Python/SQLite/C stack; in its evaluation, it achieves a higher coverage than a state-of-the-art system test generator. | Today's most popular test generators take which they mutate in various ways to generate further inputs. or AFLFuzz, combines mutation with search-based testing and thus systematically maximizes code coverage @cite_0 . More sophisticated fuzzers rely on to automatically determine inputs that maximize coverage of control or data paths @cite_15 . The tool @cite_5 is a popular symbolic tester for C programs. | {
"cite_N": [
"@cite_0",
"@cite_5",
"@cite_15"
],
"mid": [
"",
"1710734607",
"157156687"
],
"abstract": [
"",
"We present a new symbolic execution tool, KLEE, capable of automatically generating tests that achieve high coverage on a diverse set of complex and environmentally-intensive programs. We used KLEE to thoroughly check all 89 stand-alone programs in the GNU COREUTILS utility suite, which form the core user-level environment installed on millions of Unix systems, and arguably are the single most heavily tested set of open-source programs in existence. KLEE-generated tests achieve high line coverage -- on average over 90 per tool (median: over 94 ) -- and significantly beat the coverage of the developers' own hand-written test suite. When we did the same for 75 equivalent tools in the BUSYBOX embedded system suite, results were even better, including 100 coverage on 31 of them. We also used KLEE as a bug finding tool, applying it to 452 applications (over 430K total lines of code), where it found 56 serious bugs, including three in COREUTILS that had been missed for over 15 years. Finally, we used KLEE to crosscheck purportedly identical BUSYBOX and COREUTILS utilities, finding functional correctness errors and a myriad of inconsistencies.",
"Fuzz testing is an effective technique for finding security vulnerabilities in software. Traditionally, fuzz testing tools apply random mutations to well-formed inputs of a program and test the resulting values. We present an alternative whitebox fuzz testing approach inspired by recent advances in symbolic execution and dynamic test generation. Our approach records an actual run of the program under test on a well-formed input, symbolically evaluates the recorded trace, and gathers constraints on inputs capturing how the program uses these. The collected constraints are then negated one by one and solved with a constraint solver, producing new inputs that exercise different control paths in the program. This process is repeated with the help of a code-coverage maximizing heuristic designed to find defects as fast as possible. We have implemented this algorithm in SAGE (Scalable, Automated, Guided Execution), a new tool employing x86 instruction-level tracing and emulation for whitebox fuzzing of arbitrary file-reading Windows applications. We describe key optimizations needed to make dynamic test generation scale to large input files and long execution traces with hundreds of millions of instructions. We then present detailed experiments with several Windows applications. Notably, without any format-specific knowledge, SAGE detects the MS07-017 ANI vulnerability, which was missed by extensive blackbox fuzzing and static analysis tools. Furthermore, while still in an early stage of development, SAGE has already discovered 30+ new bugs in large shipped Windows applications including image processors, media players, and file decoders. Several of these bugs are potentially exploitable memory access violations."
]
} |
1906.01463 | 2948952554 | Common test generators fall into two categories. Generating test inputs at the unit level is fast, but can lead to false alarms when a function is called with inputs that would not occur in a system context. If a generated input at the system level causes a failure, this is a true alarm, as the input could also have come from the user or a third party; but system testing is much slower. In this paper, we introduce the concept of a test generation bridge, which joins the accuracy of system testing with the speed of unit testing. A Test Generation Bridge allows to combine an arbitrary system test generator with an arbitrary unit test generator. It does so by carving parameterized unit tests from system (test) executions. These unit tests run in a context recorded from the system test, but individual parameters are left free for the unit test generator to systematically explore. This allows symbolic test generators such as KLEE to operate on individual functions in the recorded system context. If the test generator detects a failure, we lift the failure-inducing parameter back to the system input; if the failure can be reproduced at the system level, it is reported as a true alarm. Our BASILISK prototype can extract and test units out of complex systems such as a Web/Python/SQLite/C stack; in its evaluation, it achieves a higher coverage than a state-of-the-art system test generator. | tools operate by generating random function calls, which are then executed. A typical representative of this class is the popular @cite_10 tool. Random calls can be systematically refined towards a given goal: EvoSuite @cite_20 uses a search-based approach to evolve generated call sequences towards maximizing code coverage. | {
"cite_N": [
"@cite_10",
"@cite_20"
],
"mid": [
"1965194038",
"2122205205"
],
"abstract": [
"R ANDOOP for Java generates unit tests for Java code using feedback-directed random test generation. Below we describe R ANDOOP 's input, output, and test generation algorithm. We also give an overview of R ANDOOP 's annotation-based interface for specifying configuration parameters that affect R ANDOOP 's behavior and output.",
"Recent advances in software testing allow automatic derivation of tests that reach almost any desired point in the source code. There is, however, a fundamental problem with the general idea of targeting one distinct test coverage goal at a time: Coverage goals are neither independent of each other, nor is test generation for any particular coverage goal guaranteed to succeed. We present EvoSuite, a search-based approach that optimizes whole test suites towards satisfying a coverage criterion, rather than generating distinct test cases directed towards distinct coverage goals. Evaluated on five open source libraries and an industrial case study, we show that EvoSuite achieves up to 18 times the coverage of a traditional approach targeting single branches, with up to 44 smaller test suites."
]
} |
1906.01463 | 2948952554 | Common test generators fall into two categories. Generating test inputs at the unit level is fast, but can lead to false alarms when a function is called with inputs that would not occur in a system context. If a generated input at the system level causes a failure, this is a true alarm, as the input could also have come from the user or a third party; but system testing is much slower. In this paper, we introduce the concept of a test generation bridge, which joins the accuracy of system testing with the speed of unit testing. A Test Generation Bridge allows to combine an arbitrary system test generator with an arbitrary unit test generator. It does so by carving parameterized unit tests from system (test) executions. These unit tests run in a context recorded from the system test, but individual parameters are left free for the unit test generator to systematically explore. This allows symbolic test generators such as KLEE to operate on individual functions in the recorded system context. If the test generator detects a failure, we lift the failure-inducing parameter back to the system input; if the failure can be reproduced at the system level, it is reported as a true alarm. Our BASILISK prototype can extract and test units out of complex systems such as a Web/Python/SQLite/C stack; in its evaluation, it achieves a higher coverage than a state-of-the-art system test generator. | techniques symbolically solve to generate inputs that reach as much code as possible. PEX @cite_18 fulfills a similar role for .NET programs, working on in which individual function parameters are treated symbolically. | {
"cite_N": [
"@cite_18"
],
"mid": [
"2110311336"
],
"abstract": [
"Pex automatically produces a small test suite with high code coverage for a .NET program. To this end, Pex performs a systematic program analysis (using dynamic symbolic execution, similar to path-bounded model-checking) to determine test inputs for Parameterized Unit Tests. Pex learns the program behavior by monitoring execution traces. Pex uses a constraint solver to produce new test inputs which exercise different program behavior. The result is an automatically generated small test suite which often achieves high code coverage. In one case study, we applied Pex to a core component of the .NET runtime which had already been extensively tested over several years. Pex found errors, including a serious issue."
]
} |
1906.01463 | 2948952554 | Common test generators fall into two categories. Generating test inputs at the unit level is fast, but can lead to false alarms when a function is called with inputs that would not occur in a system context. If a generated input at the system level causes a failure, this is a true alarm, as the input could also have come from the user or a third party; but system testing is much slower. In this paper, we introduce the concept of a test generation bridge, which joins the accuracy of system testing with the speed of unit testing. A Test Generation Bridge allows to combine an arbitrary system test generator with an arbitrary unit test generator. It does so by carving parameterized unit tests from system (test) executions. These unit tests run in a context recorded from the system test, but individual parameters are left free for the unit test generator to systematically explore. This allows symbolic test generators such as KLEE to operate on individual functions in the recorded system context. If the test generator detects a failure, we lift the failure-inducing parameter back to the system input; if the failure can be reproduced at the system level, it is reported as a true alarm. Our BASILISK prototype can extract and test units out of complex systems such as a Web/Python/SQLite/C stack; in its evaluation, it achieves a higher coverage than a state-of-the-art system test generator. | Compared to the system level, test generation at the unit level is very efficient, as a function call takes less time than a system invocation or interaction; furthermore, exhaustive and symbolic techniques are easier to deploy due to the smaller scale. The downside is that generated function calls may lack realistic context, which makes exploration harder; and function failures may be false alarms because of violated implicit preconditions. @cite_11 devised a technique to use the static calling context of a function in unit-testing.
They report high bug detection ability and high precision, but still some false positives. In contrast, we derive the context from execution, that is, with a dynamic analysis. Also, validating all unit-level failures at the system level means that we can recover from false alarms and any remaining failures are true failures. | {
"cite_N": [
"@cite_11"
],
"mid": [
"2885994992"
],
"abstract": [
"Automated unit testing reduces manual effort to write unit test drivers stubs and generate unit test inputs. However, automatically generated unit test drivers stubs raise false alarms because they often over-approximate real contexts of a target function f and allow infeasible executions of f. To solve this problem, we have developed a concolic unit testing technique CONBRIO. To provide realistic context to f, it constructs an extended unit of f that consists of f and closely relevant functions to f. Also, CONBRIO filters out a false alarm by checking feasibility of a corresponding symbolic execution path with regard to f's symbolic calling contexts obtained by combining symbolic execution paths of f's closely related predecessor functions. In the experiments on the crash bugs of 15 real-world C programs, CONBRIO shows both high bug detection ability (i.e. 91.0 of the target bugs detected) and high precision (i.e. a true to false alarm ratio is 1:4.5). Also, CONBRIO detects 14 new bugs in 9 target C programs studied in papers on crash bug detection techniques."
]
} |
1906.01463 | 2948952554 | Common test generators fall into two categories. Generating test inputs at the unit level is fast, but can lead to false alarms when a function is called with inputs that would not occur in a system context. If a generated input at the system level causes a failure, this is a true alarm, as the input could also have come from the user or a third party; but system testing is much slower. In this paper, we introduce the concept of a test generation bridge, which joins the accuracy of system testing with the speed of unit testing. A Test Generation Bridge allows to combine an arbitrary system test generator with an arbitrary unit test generator. It does so by carving parameterized unit tests from system (test) executions. These unit tests run in a context recorded from the system test, but individual parameters are left free for the unit test generator to systematically explore. This allows symbolic test generators such as KLEE to operate on individual functions in the recorded system context. If the test generator detects a failure, we lift the failure-inducing parameter back to the system input; if the failure can be reproduced at the system level, it is reported as a true alarm. Our BASILISK prototype can extract and test units out of complex systems such as a Web/Python/SQLite/C stack; in its evaluation, it achieves a higher coverage than a state-of-the-art system test generator. | A number of related works have focused on obtaining parameterized unit tests by starting from existing or generated unit tests. Retrofitting of unit tests @cite_14 is an approach where existing unit tests are converted to parameterized unit tests, by identifying inputs and converting them to parameters. The technique of Fraser and Zeller @cite_4 starts from concrete inputs and results, using test generation and mutation to systematically generalize the pre- and postconditions of existing unit tests.
The recently presented tool @cite_7 generalizes over a set of related unit tests to extract common procedures and unique parameters to obtain parametrized unit tests. In contrast to all these works, our technique carves parameterized unit tests directly out of a given run, identifying those values as parameters that are present in system input. | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_7"
],
"mid": [
"1529754292",
"2103275664",
"2811006300"
],
"abstract": [
"Recent advances in software testing introduced parameterized unit tests (PUT), which accept parameters, unlike conventional unit tests (CUT), which do not accept parameters. PUTs are more beneficial than CUTs with regards to fault-detection capability, since PUTs help describe the behaviors of methods under test for all test arguments. In general, existing applications often include manually written CUTs. With the existence of these CUTs, natural questions that arise are whether these CUTs can be retrofitted as PUTs to leverage the benefits of PUTs, and what are the cost and benefits involved in retrofitting CUTs as PUTs. To address these questions, in this paper, we conduct an empirical study to investigate whether existing CUTs can be retrofitted as PUTs with feasible effort and achieve the benefits of PUTs in terms of additional fault-detection capability and code coverage. We also propose a methodology, called test generalization, that helps in systematically retrofitting existing CUTs as PUTs. Our results on three real-world open-source applications (≈ 4.6 KLOC) show that the retrofitted PUTs detect 19 new defects that are not detected by existing CUTs, and also increase branch coverage by 4 on average (with maximum increase of 52 for one class under test and 10 for one application under analysis) with feasible effort.",
"State-of-the art techniques for automated test generation focus on generating executions that cover program behavior. As they do not generate oracles, it is up to the developer to figure out what a test does and how to check the correctness of the observed behavior. In this paper, we present an approach to generate parameterized unit tests---unit tests containing symbolic pre- and postconditions characterizing test input and test result. Starting from concrete inputs and results, we use test generation and mutation to systematically generalize pre- and postconditions while simplifying the computation steps. Evaluated on five open source libraries, the generated parameterized unit tests are (a) more expressive, characterizing general rather than concrete behavior; (b) need fewer computation steps, making them easier to understand; and (c) achieve a higher coverage than regular unit tests.",
"Parameterized unit testing is a promising technique for developers to use to facilitate the understanding of test codes. However, as a practical issue, developers might not have sufficient resources to implement parameterized unit tests (PUTs) corresponding to a vast number of closed unit tests (CUTs) in long-term software projects. Although a technique for retrofitting CUTs into PUTs was proposed, it imposes a laborious task on developers to promote parameters in CUTs. In this study, we propose a fully automated CUT-PUT retrofitting technique (called AutoPUT), which detects similar CUTs as PUT candidates by comparing their code structures. It then identifies common procedures and unique parameters to generate PUTs without degradation in terms of code coverage as compared with original CUTs. From the results of our case-study experiments on open-sourced software projects, we found that AutoPUT fully automatically generated 204 PUTs in 8.5 hours. We concluded that AutoPUT can help developers maintain test suites for building reliable software."
]
} |
1906.01392 | 2948908918 | We identify agreement and disagreement between utterances that express stances towards a topic of discussion. Existing methods focus mainly on conversational settings, where dialogic features are used for (dis)agreement inference. We extend this scope and seek to detect stance (dis)agreement in a broader setting, where independent stance-bearing utterances, which prevail in many stance corpora and real-world scenarios, are compared. To cope with such non-dialogic utterances, we find that the reasons uttered to back up a specific stance can help predict stance (dis)agreements. We propose a reason comparing network (RCN) to leverage reason information for stance comparison. Empirical results on a well-known stance corpus show that our method can discover useful reason information, enabling it to outperform several baselines in stance (dis)agreement detection. | Our work is mostly related to the task of detecting agreement and disagreement in online discussions. Recent studies have mainly focused on classifying (dis)agreement in dialogues @cite_17 @cite_19 @cite_4 @cite_14 . In these studies, various features (e.g., structural, linguistic) and/or specialised lexicons are proposed to recognise (dis)agreement in different dialogic scenarios. In contrast, we detect stance (dis)agreement between independent utterances where dialogic features are absent. | {
"cite_N": [
"@cite_19",
"@cite_14",
"@cite_4",
"@cite_17"
],
"mid": [
"2964079262",
"2145747781",
"2963308744",
"1518903672"
],
"abstract": [
"We study the problem of agreement and disagreement detection in online discussions. An isotonic Conditional Random Fields (isotonic CRF) based sequential model is proposed to make predictions on sentence- or segment-level. We automatically construct a socially-tuned lexicon that is bootstrapped from existing general-purpose sentiment lexicons to further improve the performance. We evaluate our agreement and disagreement tagging model on two disparate online discussion corpora ‐ Wikipedia Talk pages and online debates. Our model is shown to outperform the state-of-the-art approaches in both datasets. For example, the isotonic CRF model achieves F1 scores of 0.74 and 0.67 for agreement and disagreement detection, when a linear chain CRF obtains 0.58 and 0.56 for the discussions on Wikipedia Talk pages.",
"Casual online forums such as Reddit, Slashdot and Digg, are continuing to increase in popularity as a means of communication. Detecting disagreement in this domain is a considerable challenge. Many topics are unique to the conversation on the forum, and the appearance of disagreement may be much more subtle than on political blogs or social media sites such as twitter. In this analysis we present a crowd-sourced annotated corpus for topic level disagreement detection in Slashdot, showing that disagreement detection in this domain is difficult even for humans. We then proceed to show that a new set of features determined from the rhetorical structure of the conversation significantly improves the performance on disagreement detection over a baseline consisting of unigram bigram features, discourse markers, structural features and meta-post features.",
"Research on the structure of dialogue has been hampered for years because large dialogue corpora have not been available. This has impacted the dialogue research community’s ability to develop better theories, as well as good off-the-shelf tools for dialogue processing. Happily, an increasing amount of information and opinion exchange occur in natural dialogue in online forums, where people share their opinions about a vast range of topics. In particular we are interested in rejection in dialogue, also called disagreement and denial, where the size of available dialogue corpora, for the first time, offers an opportunity to empirically test theoretical accounts of the expression and inference of rejection in dialogue. In this paper, we test whether topic-independent features motivated by theoretical predictions can be used to recognize rejection in online forums in a topic-independent way. Our results show that our theoretically motivated features achieve 66 accuracy, an improvement over a unigram baseline of an absolute 6 .",
"The recent proliferation of political and social forums has given rise to a wealth of freely accessible naturalistic arguments. People can \"talk\" to anyone they want, at any time, in any location, about any topic. Here we use a Mechanical Turk annotated corpus of forum discussions as a gold standard for the recognition of disagreement in online ideological forums. We analyze the utility of meta-post features, contextual features, dependency features and word-based features for signaling the disagreement relation. We show that using contextual and dialogic features we can achieve accuracies up to 68 as compared to a unigram baseline of 63 ."
]
} |
1906.01392 | 2948908918 | We identify agreement and disagreement between utterances that express stances towards a topic of discussion. Existing methods focus mainly on conversational settings, where dialogic features are used for (dis)agreement inference. We extend this scope and seek to detect stance (dis)agreement in a broader setting, where independent stance-bearing utterances, which prevail in many stance corpora and real-world scenarios, are compared. To cope with such non-dialogic utterances, we find that the reasons uttered to back up a specific stance can help predict stance (dis)agreements. We propose a reason comparing network (RCN) to leverage reason information for stance comparison. Empirical results on a well-known stance corpus show that our method can discover useful reason information, enabling it to outperform several baselines in stance (dis)agreement detection. | Reason information has been found useful in argumentation mining @cite_13 , where studies leverage stance and reason signals for various argumentation tasks @cite_5 @cite_18 @cite_2 . We study how to exploit the reason information to better understand the stance, thus addressing a different task. | {
"cite_N": [
"@cite_5",
"@cite_18",
"@cite_13",
"@cite_2"
],
"mid": [
"2150248423",
"2250397934",
"2327805699",
"2250730878"
],
"abstract": [
"Recent years have seen a surge of interest in stance classification in online debates. Oftentimes, however, it is important to determine not only the stance expressed by an author in her debate posts, but also the reasons behind her supporting or opposing the issue under debate. We therefore examine the new task of reason classification in this paper. Given the close interplay between stance classification and reason classification, we design computational models for examining how automatically computed stance information can be profitably exploited for reason classification. Experiments on our reason-annotated corpus of ideological debate posts from four domains demonstrate that sophisticated models of stances and reasons can indeed yield more accurate reason and stance classification results than their simpler counterparts.",
"In online discussions, users often back up their stance with arguments. Their arguments are often vague, implicit, and poorly worded, yet they provide valuable insights into reasons underpinning users’ opinions. In this paper, we make a first step towards argument-based opinion mining from online discussions and introduce a new task of argument recognition. We match user-created comments to a set of predefined topic-based arguments, which can be either attacked or supported in the comment. We present a manually-annotated corpus for argument recognition in online discussions. We describe a supervised model based on comment-argument similarity and entailment features. Depending on problem formulation, model performance ranges from 70.5 to 81.8 F1-score, and decreases only marginally when applied to an unseen topic.",
"Argumentation mining aims at automatically extracting structured arguments from unstructured textual documents. It has recently become a hot topic also due to its potential in processing information originating from the Web, and in particular from social media, in innovative ways. Recent advances in machine learning methods promise to enable breakthrough applications to social and economic sciences, policy making, and information technology: something that only a few years ago was unthinkable. In this survey article, we introduce argumentation models and methods, review existing systems and applications, and discuss challenges and perspectives of this exciting new research area.",
"Argumentation mining and stance classification were recently introduced as interesting tasks in text mining. In this paper, a novel framework for argument tagging based on topic modeling is proposed. Unlike other machine learning approaches for argument tagging which often require large set of labeled data, the proposed model is minimally supervised and merely a one-to-one mapping between the pre-defined argument set and the extracted topics is required. These extracted arguments are subsequently exploited for stance classification. Additionally, a manually-annotated corpus for stance classification and argument tagging of online news comments is introduced and made available. Experiments on our collected corpus demonstrate the benefits of using topic-modeling for argument tagging. We show that using Non-Negative Matrix Factorization instead of Latent Dirichlet Allocation achieves better results for argument classification, close to the results of a supervised classifier. Furthermore, the statistical model that leverages automatically-extracted arguments as features for stance classification shows promising results."
]
} |
1906.01392 | 2948908918 | We identify agreement and disagreement between utterances that express stances towards a topic of discussion. Existing methods focus mainly on conversational settings, where dialogic features are used for (dis)agreement inference. We extend this scope and seek to detect stance (dis)agreement in a broader setting, where independent stance-bearing utterances, which prevail in many stance corpora and real-world scenarios, are compared. To cope with such non-dialogic utterances, we find that the reasons uttered to back up a specific stance can help predict stance (dis)agreements. We propose a reason comparing network (RCN) to leverage reason information for stance comparison. Empirical results on a well-known stance corpus show that our method can discover useful reason information, enabling it to outperform several baselines in stance (dis)agreement detection. | Our work is also related to the tasks on textual relationship inference, such as textual entailment @cite_9 , paraphrase detection @cite_11 , and question answering @cite_12 . Unlike the textual relationships addressed in those tasks, the relationships between utterances expressing stances do not necessarily contain any rephrasing or entailing semantics, but they do carry discourse signals (e.g., reasons) related to stance expressing. | {
"cite_N": [
"@cite_9",
"@cite_12",
"@cite_11"
],
"mid": [
"1840435438",
"2511929605",
"2294860948"
],
"abstract": [
"Understanding entailment and contradiction is fundamental to understanding natural language, and inference about entailment and contradiction is a valuable testing ground for the development of semantic representations. However, machine learning research in this area has been dramatically limited by the lack of large-scale resources. To address this, we introduce the Stanford Natural Language Inference corpus, a new, freely available collection of labeled sentence pairs, written by humans doing a novel grounded task based on image captioning. At 570K pairs, it is two orders of magnitude larger than all other resources of its type. This increase in scale allows lexicalized classifiers to outperform some sophisticated existing entailment models, and it allows a neural network-based model to perform competitively on natural language inference benchmarks for the first time.",
"",
"We present a new deep learning architecture Bi-CNN-MI for paraphrase identification (PI). Based on the insight that PI requires comparing two sentences on multiple levels of granularity, we learn multigranular sentence representations using convolutional neural network (CNN) and model interaction features at each level. These features are then the input to a logistic classifier for PI. All parameters of the model (for embeddings, convolution and classification) are directly optimized for PI. To address the lack of training data, we pretrain the network in a novel way using a language modeling task. Results on the MSRP corpus surpass that of previous NN competitors."
]
} |
1906.01376 | 2948952637 | Data-driven models are subject to model errors due to limited and noisy training data. Key to the application of such models in safety-critical domains is the quantification of their model error. Gaussian processes provide such a measure and uniform error bounds have been derived, which allow safe control based on these models. However, existing error bounds require restrictive assumptions. In this paper, we employ the Gaussian process distribution and continuity arguments to derive a novel uniform error bound under weaker assumptions. Furthermore, we demonstrate how this distribution can be used to derive probabilistic Lipschitz constants and analyze the asymptotic behavior of our bound. Finally, we derive safety conditions for the control of unknown dynamical systems based on Gaussian process models and evaluate them in simulations of a robotic manipulator. | Regularized kernel regression is a method which extends many ideas from scattered data interpolation to noisy observations and it is highly related to Gaussian process regression as pointed out in @cite_25 . In fact, the GP posterior mean function is identical to kernel ridge regression with squared cost function @cite_30 . Many error bounds such as @cite_12 depend on the empirical @math covering number and the norm of the unknown function in the RKHS attached to the regression kernel. In @cite_34 the effective dimension of the feature space, in which regression is performed, is employed to derive a uniform error bound. The effect of approximations of the kernel, e.g. with the Nyström method, on the regression error is analyzed in @cite_10 . Tight error bounds using empirical @math covering numbers are derived under mild assumptions in @cite_1 . Finally, error bounds for general regularization are developed in @cite_4 , which depend on regularization and the RKHS norm of the function. | {
"cite_N": [
"@cite_30",
"@cite_4",
"@cite_1",
"@cite_34",
"@cite_10",
"@cite_25",
"@cite_12"
],
"mid": [
"1746819321",
"2597846060",
"1971909566",
"2044514896",
"2171792531",
"2880842812",
"2100116519"
],
"abstract": [
"Gaussian processes (GPs) provide a principled, practical, probabilistic approach to learning in kernel machines. GPs have received increased attention in the machine-learning community over the past decade, and this book provides a long-needed systematic and unified treatment of theoretical and practical aspects of GPs in machine learning. The treatment is comprehensive and self-contained, targeted at researchers and students in machine learning and applied statistics.The book deals with the supervised-learning problem for both regression and classification, and includes detailed algorithms. A wide variety of covariance (kernel) functions are presented and their properties discussed. Model selection is discussed both from a Bayesian and a classical perspective. Many connections to other well-known techniques from machine learning and statistics are discussed, including support-vector machines, neural networks, splines, regularization networks, relevance vector machines and others. Theoretical issues including learning curves and the PAC-Bayesian framework are treated, and several approximation methods for learning with large datasets are discussed. The book contains illustrative examples and exercises, and code and datasets are available on the Web. Appendixes provide mathematical background and a discussion of Gaussian Markov processes.",
"",
"We consider a coefficient-based regularized regression in a data dependent hypothesis space. For a given set of samples, functions in this hypothesis space are defined to be linear combinations of basis functions generated by a kernel function and sample data. We do not need the kernel to be symmetric or positive semi-definite, which provides flexibility and adaptivity for the learning algorithm. Another advantage of this algorithm is that it is computationally effective without any optimization processes. In this paper, we apply concentration techniques with l2-empirical covering numbers to present an elaborate capacity dependent analysis for the algorithm, which yields sharper estimates in both confidence estimation and convergence rate. When the kernel is C^∞, under a very mild regularity condition on the regression function, the rate can be arbitrarily close to m^{-1}.",
"Kernel methods can embed finite-dimensional data into infinite-dimensional feature spaces. In spite of the large underlying feature dimensionality, kernel methods can achieve good generalization ability. This observation is often wrongly interpreted, and it has been used to argue that kernel learning can magically avoid the \"curse-of-dimensionality\" phenomenon encountered in statistical estimation problems. This letter shows that although using kernel representation, one can embed data into an infinite-dimensional feature space; the effective dimensionality of this embedding, which determines the learning complexity of the underlying kernel machine, is usually small. In particular, we introduce an algebraic definition of a scale-sensitive effective dimension associated with a kernel representation. Based on this quantity, we derive upper bounds on the generalization performance of some kernel regression methods. Moreover, we show that the resulting convergent rates are optimal under various circumstances.",
"Kernel approximation is commonly used to scale kernel-based algorithms to applications containing as many as several million instances. This paper analyzes the effect of such approximations in the kernel matrix on the hypothesis generated by several widely used learning algorithms. We give stability bounds based on the norm of the kernel approximation for these algorithms, including SVMs, KRR, and graph Laplacian-based regularization algorithms. These bounds help determine the degree of approximation that can be tolerated in the estimation of the kernel matrix. Our analysis is general and applies to arbitrary approximations of the kernel matrix. However, we also give a specific analysis of the Nyström low-rank approximation in this context and report the results of experiments.",
"This paper is an attempt to bridge the conceptual gaps between researchers working on the two widely used approaches based on positive definite kernels: Bayesian learning or inference using Gaussian processes on the one side, and frequentist kernel methods based on reproducing kernel Hilbert spaces on the other. It is widely known in machine learning that these two formalisms are closely related; for instance, the estimator of kernel ridge regression is identical to the posterior mean of Gaussian process regression. However, they have been studied and developed almost independently by two essentially separate communities, and this makes it difficult to seamlessly transfer results between them. Our aim is to overcome this potential difficulty. To this end, we review several old and new results and concepts from either side, and juxtapose algorithmic quantities from each framework to highlight close similarities. We also provide discussions on subtle philosophical and theoretical differences between the two approaches.",
"We study the sample complexity of proper and improper learning problems with respect to different q-loss functions. We improve the known estimates for classes which have relatively small covering numbers in empirical L sub 2 spaces (e.g. log-covering numbers which are polynomial with exponent p<2). We present several examples of relevant classes which have a \"small\" fat-shattering dimension, and hence fit our setup, the most important of which are kernel machines."
]
} |
1906.01376 | 2948952637 | Data-driven models are subject to model errors due to limited and noisy training data. Key to the application of such models in safety-critical domains is the quantification of their model error. Gaussian processes provide such a measure and uniform error bounds have been derived, which allow safe control based on these models. However, existing error bounds require restrictive assumptions. In this paper, we employ the Gaussian process distribution and continuity arguments to derive a novel uniform error bound under weaker assumptions. Furthermore, we demonstrate how this distribution can be used to derive probabilistic Lipschitz constants and analyze the asymptotic behavior of our bound. Finally, we derive safety conditions for the control of unknown dynamical systems based on Gaussian process models and evaluate them in simulations of a robotic manipulator. | Using similar RKHS based methods for Gaussian process regression, uniform error bounds depending on the maximum information gain and the RKHS norm have been developed in @cite_16 . While regularized kernel regression allows a wide range of observation noise distributions, the bound in @cite_16 only holds for bounded sub-Gaussian noise. Based on this work, an improved bound is derived in @cite_31 in order to analyze the regret of an upper confidence bound algorithm in multi-armed bandit problems. Although these bounds are frequently used in safe reinforcement learning and control, they suffer from several issues. On the one hand, they depend on constants which are very difficult to calculate. While this is no problem for theoretical analysis, it prohibits the integration of these bounds into algorithms and often estimates of the constants must be used. On the other hand, they suffer from the general problem of RKHS approaches that the space of functions, for which the bounds hold, becomes smaller the smoother the kernel is @cite_14 . 
In fact, the RKHS attached to a covariance kernel is usually small compared to the support of the prior distribution of a Gaussian process @cite_13 . | {
"cite_N": [
"@cite_14",
"@cite_31",
"@cite_16",
"@cite_13"
],
"mid": [
"2161591817",
"2963271096",
"2166566250",
"1543474109"
],
"abstract": [
"Error estimates for scattered-data interpolation via radial basis functions (RBFs) for target functions in the associated reproducing kernel Hilbert space (RKHS) have been known for a long time. Recently, these estimates have been extended to apply to certain classes of target functions generating the data which are outside the associated RKHS. However, these classes of functions still were not \"large\" enough to be applicable to a number of practical situations. In this paper we obtain Sobolev-type error estimates on compact regions of R^n when the RBFs have Fourier transforms that decay algebraically. In addition, we derive a Bernstein inequality for spaces of finite shifts of an RBF in terms of the minimal separation parameter.",
"",
"Many applications require optimizing an unknown, noisy function that is expensive to evaluate. We formalize this task as a multiarmed bandit problem, where the payoff function is either sampled from a Gaussian process (GP) or has low norm in a reproducing kernel Hilbert space. We resolve the important open problem of deriving regret bounds for this setting, which imply novel convergence rates for GP optimization. We analyze an intuitive Gaussian process upper confidence bound (GP-UCB) algorithm, and bound its cumulative regret in terms of maximal information gain, establishing a novel connection between GP optimization and experimental design. Moreover, by bounding the latter in terms of operator spectra, we obtain explicit sublinear regret bounds for many commonly used covariance functions. In some important cases, our bounds have surprisingly weak dependence on the dimensionality. In our experiments on real sensor data, GP-UCB compares favorably with other heuristical GP optimization approaches.",
"We consider the quality of learning a response function by a nonparametric Bayesian approach using a Gaussian process (GP) prior on the response function. We upper bound the quadratic risk of the learning procedure, which in turn is an upper bound on the Kullback-Leibler information between the predictive and true data distribution. The upper bound is expressed in small ball probabilities and concentration measures of the GP prior. We illustrate the computation of the upper bound for the Matérn and squared exponential kernels. For these priors the risk, and hence the information criterion, tends to zero for all continuous response functions. However, the rate at which this happens depends on the combination of true response function and Gaussian prior, and is expressible in a certain concentration function. In particular, the results show that for good performance, the regularity of the GP prior should match the regularity of the unknown response function."
]
} |