aid: stringlengths 9-15
mid: stringlengths 7-10
abstract: stringlengths 78-2.56k
related_work: stringlengths 92-1.77k
ref_abstract: dict
1905.12534
2946841533
In this preliminary report, we present a simple but very effective technique to stabilize the training of CNN-based GANs. Motivated by recently published methods using frequency decomposition of convolutions (e.g. Octave Convolutions), we propose a novel convolution scheme to stabilize the training and reduce the likelihood of a mode collapse. The basic idea of our approach is to split convolutional filters into additive high- and low-frequency parts, while shifting weight updates from low to high during the training. Intuitively, this method forces GANs to learn low-frequency coarse image structures before descending into fine (high-frequency) details. Our approach is orthogonal and complementary to existing stabilization methods and can simply be plugged into any CNN-based GAN architecture. First experiments on the CelebA dataset show the effectiveness of the proposed method.
In a similar manner, given the input feature tensor of a convolutional layer @math , @cite_22 suggest factorizing it along the channel dimension into two groups, one for low frequencies and one for high frequencies @math , arguing that the subset of the feature maps capturing spatially low-frequency changes contains spatially redundant information. To reduce this spatial redundancy, they introduce the octave feature representation, which corresponds to a division of the spatial dimensions of the low-frequency group by 2.
{ "cite_N": [ "@cite_22" ], "mid": [ "2938458886" ], "abstract": [ "In natural images, information is conveyed at different frequencies where higher frequencies are usually encoded with fine details and lower frequencies are usually encoded with global structures. Similarly, the output feature maps of a convolution layer can also be seen as a mixture of information at different frequencies. In this work, we propose to factorize the mixed feature maps by their frequencies and design a novel Octave Convolution (OctConv) operation to store and process feature maps that vary spatially \"slower\" at a lower spatial resolution, reducing both memory and computation cost. Unlike existing multi-scale methods, OctConv is formulated as a single, generic, plug-and-play convolutional unit that can be used as a direct replacement of (vanilla) convolutions without any adjustments in the network architecture. It is also orthogonal and complementary to methods that suggest better topologies or reduce channel-wise redundancy like group or depth-wise convolutions. We experimentally show that by simply replacing convolutions with OctConv, we can consistently boost accuracy for both image and video recognition tasks, while reducing memory and computational cost. An OctConv-equipped ResNet-152 can achieve 82.9% top-1 classification accuracy on ImageNet with merely 22.2 GFLOPs." ] }
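The octave feature representation described in the row above can be sketched in a few lines of numpy: a fraction of the channels is designated low-frequency and stored at half the spatial resolution via 2x2 average pooling. The split ratio `alpha` and the pooling operator are illustrative assumptions, not the exact OctConv implementation.

```python
import numpy as np

def octave_split(x, alpha=0.5):
    """Split a feature tensor (C, H, W) into high- and low-frequency groups.

    The low-frequency group (a fraction `alpha` of the channels) is stored
    at half the spatial resolution via 2x2 average pooling, mirroring the
    octave representation's division of the spatial dimensions by 2.
    """
    c, h, w = x.shape
    c_low = int(alpha * c)
    x_high = x[c_low:]                    # kept at full resolution
    x_low = x[:c_low]
    # 2x2 average pooling: reshape into pooling windows and average them.
    x_low = x_low.reshape(c_low, h // 2, 2, w // 2, 2).mean(axis=(2, 4))
    return x_high, x_low

feats = np.random.rand(16, 32, 32)
hi, lo = octave_split(feats, alpha=0.25)
print(hi.shape, lo.shape)  # (12, 32, 32) (4, 16, 16)
```

Note that the low-frequency group occupies only a quarter of the original spatial storage, which is the source of OctConv's memory savings.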
1905.12548
2947425643
In this paper, we introduce a system to collect product information from manufacturers and make it available in tools that are used for concurrent design of spacecraft. The planning of a spacecraft needs experts from different disciplines, like propulsion, power, and thermal. Since these different disciplines rely on each other there is a high need for communication between them, which is often realized by a Model-Based Systems Engineering (MBSE) process and corresponding tools. We show by comparison that the product information provided by manufacturers often does not match the information needed by MBSE tools on a syntactic or semantic level. The information from manufacturers is also currently not available in machine-readable formats. Afterwards, we present a prototype of a system that makes product information from manufacturers directly available in MBSE tools, in a machine-readable way.
The specification of information exchange between manufacturers and MBSE tools includes not only technical protocols but also which data is relevant in which context and what its semantic meaning is. Tailored modeling languages and tools are required to describe the semantic model of a spacecraft. The authors of @cite_28 looked into existing languages and tools that can be used to describe Conceptual Data Models (CDMs) and conclude that none of them is ideal for this task. In the same year, a special ontology modeling language for engineering in general (PLIB) was introduced @cite_9 . Following their earlier analysis, the authors of @cite_28 developed a conceptual data modeling language, SCDML @cite_1 , and an ontology to describe space system design data @cite_15 . They also point out the benefits of ontologies and the Web Ontology Language (OWL) together with the Eclipse Modeling Framework (EMF) for the MBSE process @cite_14 @cite_19 @cite_26 , and on top of that built the Semantic Engineering Modeling Framework (SEMF) @cite_7 . MARVL CIP @cite_13 is a platform that aims to support information exchange between agencies and manufacturers across the whole life cycle of a spacecraft.
{ "cite_N": [ "@cite_14", "@cite_26", "@cite_7", "@cite_28", "@cite_9", "@cite_1", "@cite_19", "@cite_15", "@cite_13" ], "mid": [ "2766946944", "2765107838", "2592547903", "2071243256", "1982499277", "2344481184", "2615577556", "2522994157", "" ], "abstract": [ "Engineering phase specific tailored views on data are required to prevent over-engineering and to discover missing elements. This paper presents an approach to enhance a semantic Conceptual Data Model (CDM) to define the specific behavior of engineering phases. Therefore, a CDM gets enhanced by an ontology profile to specify required and non-admissible features for each engineering phase. This is exploited to enhance existing transformations from Web Ontology Language (OWL) to Eclipse Modeling Language (EMF) to generate phase specific decorators representing the corresponding views on data items. The approach also supports engineering phase specific handling of constraints and rules that further augment the guidance of engineers through the system development process by applying knowledge management functions such as reasoning. As a result, the presented approach reveals a notably higher data quality and offers more analysis potential by enabling phase-specific definitions of views on data and specification of semantic checks on CDM level. Several use cases from aerospace engineering have been analyzed and the improvements in areas such as inconsistency detection, knowledge derivation, and guided system modeling are highlighted in this paper.", "The development of increasingly complex systems with improved quality levels becomes more and more challenging. Engineering data frameworks with integrated system models have been developed to manage such systems. This paper presents the experiences that have been made in digital systems engineering in the aerospace domain and focuses on the roadmap that has been taken to establish a knowledge engineering framework.
While working with first versions of these tools, it became obvious that an engineering framework reflecting all aspects of an engineering data object was required. In addition, data analytics and technologies used to check data consistency became increasingly relevant. As a consequence, semantically rich data models expressed by ontologies come into focus of forming the engineering framework baseline in conjunction with related technologies such as reasoning, error avoidance based on data analytics, and knowledge-driven engineering environments.", "", "The engineering of complex systems is more and more supported through computer-based models that rely on a comprehensive specification of their underlying data. This paper reflects on extensive industrial experience with a sophisticated application of conceptual data modeling, addressing requirements as they arise in the context of space systems engineering. For this purpose identified needs for conceptual data modeling in the scope of Model-Based Systems Engineering are formulated. Established and evolving approaches and technologies for building conceptual data models are characterized, analyzed, and discussed regarding their suitability for modeling engineering data. Based on this analysis of the state of the art, recommendations for the future evolution of conceptual data modeling are formulated.", "Ontologies have been increasingly used over the past few decades in a wide range of application domains spanning both academic and industrial communities. As ontologies are the cornerstone of the Semantic Web, the technologies developed in this context, including ontology languages, specialized databases and query languages, have become widely used. However, the expressiveness of the proposed ontology languages does not always cover the needs of specific domains. For instance, engineering is a domain for which the LIAS laboratory has proposed dedicated solutions with a worldwide recognition. 
The underlying assumptions made in the context of the Semantic Web, an open and distributed environment, do not apply to the controlled environments of our projects where the correctness and completeness of modeling can be guaranteed to a certain degree. As a consequence, we have developed over the last decades a specialized standard ontology language named PLIB associated with the OntoDB OntoQL platform to manage ontological engineering data within a database. The goal of this paper is threefold: (1) to share our experience in manipulating ontologies in the engineering domain by describing their specificities and constraints; (2) to define a comprehensive classification of ontologies with respect to three main research communities: Artificial Intelligence, Databases and Natural Language Processing and (3) to present a persistent solution, called OntoDB, for managing extremely large semantic data sets associated with an ontological query language, called OntoQL. These objectives are illustrated by several examples that show the effectiveness and interest of our propositions in several industrial projects in different domains including vehicle manufacturing and CO (_2 ) storage.", "This paper presents the design and usage of a language for Conceptual Data Modeling in Model-based Systems Engineering. Based on an existing analysis of presently employed data modeling languages, a new conceptual data modeling language is defined that brings together characteristic features from software engineering languages, features from languages classically employed for knowledge engineering, as well as entirely newly developed functional aspects. 
This language has been applied to model a spacecraft as an example, demonstrating its utility for developing complex, multidisciplinary systems in the scope of Model-based Space Systems Engineering.", "Model-based Systems Engineering requires an intuitive semantically strong data model to enable precise data specification and provide the foundation for fruitful data analyses during data evolution. This paper presents an approach to use the Web Ontology Language (OWL) for specifying a Conceptual Data Model (CDM) being transformed into a format understandable by the Eclipse Modeling Framework (EMF) to profit from powerful data handling and knowledge management functions during runtime. Coalescing OWL with EMF brings up the strength of both approaches leading to considerably better data models with less failure potential and reveal notably more analysis potential by using a common data model specification. This approach also enables the direct application of reasoning functionality for automatic inference of several pieces of knowledge and automatic checks as illustrated by examples from aerospace industry.", "In model-based systems engineering a model specifying the system’s design is shared across a variety of disciplines and used to ensure the consistency and quality of the overall design. Existing implementations for describing these system models exhibit a number of shortcomings regarding their approach to data management. In this emerging applications paper, we present the application of an ontology for space system design that provides increased semantic soundness of the underlying standardized data specification, enables reasoners to identify problems in the system, and allows the application of operational knowledge collected over past projects to the system to be designed. 
Based on a qualitative evaluation driven by data derived from an actual satellite design project, a reflection on the applicability of ontologies in the overall model-based systems engineering approach is pursued.", "" ] }
1905.12548
2947425643
In this paper, we introduce a system to collect product information from manufacturers and make it available in tools that are used for concurrent design of spacecraft. The planning of a spacecraft needs experts from different disciplines, like propulsion, power, and thermal. Since these different disciplines rely on each other there is a high need for communication between them, which is often realized by a Model-Based Systems Engineering (MBSE) process and corresponding tools. We show by comparison that the product information provided by manufacturers often does not match the information needed by MBSE tools on a syntactic or semantic level. The information from manufacturers is also currently not available in machine-readable formats. Afterwards, we present a prototype of a system that makes product information from manufacturers directly available in MBSE tools, in a machine-readable way.
PDF data sheets are meant to describe a product technically, but there is no standard regarding the syntax or semantics of this description. There are several approaches to extract (semantic) information from data sheets, e.g. @cite_21 , @cite_29 , @cite_12 , and @cite_0 , but we do not know of an accessible tool that performs this task reliably.
{ "cite_N": [ "@cite_0", "@cite_29", "@cite_21", "@cite_12" ], "mid": [ "2896158279", "1577990588", "", "2281682023" ], "abstract": [ "Selection of the right space parts is an essential step during the design of complex engineering systems and requires information that is typically embedded in unstructured documents like datasheets, Interface Control Documents (ICDs) and technical manuals. Satsearch (https: satsearch.co) aims to consolidate global space supply chain information within a single platform, by converting unstructured datasheets into machine-readable, human-readable, electronic datasheets (EDS). After satsearch’s initial, manual efforts at generating EDS from source datasheets for space parts, they realized that the process is not scalable. A possible solution is to employ knowledge base information extraction systems. The Design Engineering Assistant (DEA) team from the University of Strathclyde is currently working on the automation of the extraction of information from unstructured documents (e.g. textbooks, reports, datasheets, research papers, etc.) through the development of an expert system. This paper summarizes the approach and outcomes of a feasibility study for the DEA, assessing benefits and obstacles for the implementation of a fully-automated information extraction process, focusing at this stage only on datasheets for space parts for preliminary mission design.", "In the context of the Pangea project at IBM, we needed to design an information extraction module in order to extract some information from datasheets. Contrary to several information extraction systems based on some machine learning techniques that need some linguistic parsing of the documents, we propose a hybrid approach based on association rules mining and decision tree learning that does not require any linguistic processing. The system may be parameterized in various ways that influence the efficiency of the information extraction rules we discovered.
The experiments show the system does not need a large training set to perform well.", "", "The development of information retrieval and extraction systems is still a challenging task. The occurrence of natural language limits the application of existing approaches. Therefore the approach of a new framework which combines natural language processing and semantic web technology is discussed." ] }
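As a toy illustration of the extraction task discussed in this row, the sketch below pulls numeric parameters out of free-text datasheet lines with fixed regular expressions. The sample text, parameter names, and patterns are hypothetical; the cited systems use rule mining, machine learning, or semantic-web technology rather than hand-written patterns.

```python
import re

# Hypothetical datasheet text; real data sheets vary widely in syntax.
SHEET = """
Mass: 1.2 kg
Peak power consumption 4.5 W
Operating temperature: -20 to +50 degC
"""

# Fixed patterns stand in for the learned or semantic extraction rules
# of the cited systems; names and units are made up for this example.
PATTERNS = {
    "mass_kg": r"Mass:?\s*([\d.]+)\s*kg",
    "power_w": r"power consumption:?\s*([\d.]+)\s*W",
}

def extract(text):
    """Return the parameters whose patterns match, parsed as floats."""
    out = {}
    for name, pat in PATTERNS.items():
        m = re.search(pat, text, flags=re.IGNORECASE)
        if m:
            out[name] = float(m.group(1))
    return out

print(extract(SHEET))  # {'mass_kg': 1.2, 'power_w': 4.5}
```

The brittleness of such fixed patterns against varied phrasing is exactly why the cited work resorts to learned or semantic approaches.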
1905.12260
2947018532
There has been significant interest recently in learning multilingual word embeddings -- in which semantically similar words across languages have similar embeddings. State-of-the-art approaches have relied on expensive labeled data, which is unavailable for low-resource languages, or have involved post-hoc unification of monolingual embeddings. In the present paper, we investigate the efficacy of multilingual embeddings learned from weakly-supervised image-text data. In particular, we propose methods for learning multilingual embeddings using image-text data, by enforcing similarity between the representations of the image and that of the text. Our experiments reveal that even without using any expensive labeled data, a bag-of-words-based embedding model trained on image-text data achieves performance comparable to the state-of-the-art on crosslingual semantic similarity tasks.
Most work on producing multilingual embeddings has relied on crosslingual human-labeled data, such as bilingual lexicons @cite_3 @cite_7 @cite_15 @cite_4 or parallel aligned corpora @cite_1 @cite_7 @cite_16 @cite_26 . These works are also largely bilingual, due either to limitations of the methods or to the requirement for data that exists only for a few language pairs. Bilingual embeddings are less desirable because they do not leverage the relevant resources of other languages. For example, in learning bilingual embeddings for English and French, it may be useful to leverage resources in Spanish, since French and Spanish are closely related. Bilingual embeddings are also limited in application to a single language pair.
{ "cite_N": [ "@cite_26", "@cite_4", "@cite_7", "@cite_1", "@cite_3", "@cite_15", "@cite_16" ], "mid": [ "2252212383", "2294774419", "", "2251033195", "2126725946", "", "2251765408" ], "abstract": [ "We propose a simple yet effective approach to learning bilingual word embeddings (BWEs) from non-parallel document-aligned data (based on the omnipresent skip-gram model), and its application to bilingual lexicon induction (BLI). We demonstrate the utility of the induced BWEs in the BLI task by reporting on benchmarking BLI datasets for three language pairs: (1) We show that our BWE-based BLI models significantly outperform the MuPTM-based and context-counting models in this setting, and obtain the best reported BLI results for all three tested language pairs; (2) We also show that our BWE-based BLI models outperform other BLI models based on recently proposed BWEs that require parallel data for bilingual training.", "Word embedding has been found to be highly powerful to translate words from one language to another by a simple linear transform. However, we found some inconsistence among the objective functions of the embedding and the transform learning, as well as the distance measurement. This paper proposes a solution which normalizes the word vectors on a hypersphere and constrains the linear transform as an orthogonal transform. The experimental results confirmed that the proposed solution can offer better performance on a word similarity task and an English-toSpanish word translation task.", "", "Distributed representations of words have proven extremely useful in numerous natural language processing tasks. Their appeal is that they can help alleviate data sparsity problems common to supervised learning. Methods for inducing these representations require only unlabeled language data, which are plentiful for many natural languages. In this work, we induce distributed representations for a pair of languages jointly. 
We treat it as a multitask learning problem where each task corresponds to a single word, and task relatedness is derived from co-occurrence statistics in bilingual parallel data. These representations can be used for a number of crosslingual learning tasks, where a learner can be trained on annotations present in one language and applied to test data in another. We show that our representations are informative by using them for crosslingual document classification, where classifiers trained on these representations substantially outperform strong baselines (e.g. machine translation) when applied to a new language.", "Dictionaries and phrase tables are the basis of modern statistical machine translation systems. This paper develops a method that can automate the process of generating and extending dictionaries and phrase tables. Our method can translate missing word and phrase entries by learning language structures based on large monolingual data and mapping between languages from small bilingual data. It uses distributed representation of words and learns a linear mapping between vector spaces of languages. Despite its simplicity, our method is surprisingly effective: we can achieve almost 90 precision@5 for translation of words between English and Spanish. This method makes little assumption about the languages, so it can be used to extend and refine dictionaries and translation tables for any language pairs.", "", "Recent work in learning bilingual representations tend to tailor towards achieving good performance on bilingual tasks, most often the crosslingual document classification (CLDC) evaluation, but to the detriment of preserving clustering structures of word representations monolingually. In this work, we propose a joint model to learn word representations from scratch that utilizes both the context coocurrence information through the monolingual component and the meaning equivalent signals from the bilingual constraint. 
Specifically, we extend the recently popular skipgram model to learn high quality bilingual representations efficiently. Our learned embeddings achieve a new state-of-the-art accuracy of 80.3 for the German to English CLDC task and a highly competitive performance of 90.7 for the other classification direction. At the same time, our models outperform best embeddings from past bilingual representation work by a large margin in the monolingual word similarity evaluation. 1" ] }
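The bilingual-lexicon mapping idea in the passage above (e.g. @cite_3) can be sketched as ordinary least squares: given embedding pairs from a bilingual dictionary, fit a linear translation matrix W minimizing ||XW - Z||_F. The dimensions and synthetic data below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 10                       # embedding dimension (toy)
n = 200                      # bilingual dictionary size (toy)
X = rng.normal(size=(n, d))  # source-language word vectors
W_true = rng.normal(size=(d, d))
Z = X @ W_true + 0.01 * rng.normal(size=(n, d))  # target vectors, mild noise

# Least-squares estimate of the linear translation matrix:
#   argmin_W ||X W - Z||_F
W_hat, *_ = np.linalg.lstsq(X, Z, rcond=None)

# A source vector mapped through W_hat should land near its target vector.
err = np.linalg.norm(X @ W_hat - Z) / np.linalg.norm(Z)
print(round(err, 4))
```

In practice translation then amounts to mapping a source vector with W_hat and retrieving its nearest neighbor among target-language vectors.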
1905.12260
2947018532
There has been significant interest recently in learning multilingual word embeddings -- in which semantically similar words across languages have similar embeddings. State-of-the-art approaches have relied on expensive labeled data, which is unavailable for low-resource languages, or have involved post-hoc unification of monolingual embeddings. In the present paper, we investigate the efficacy of multilingual embeddings learned from weakly-supervised image-text data. In particular, we propose methods for learning multilingual embeddings using image-text data, by enforcing similarity between the representations of the image and that of the text. Our experiments reveal that even without using any expensive labeled data, a bag-of-words-based embedding model trained on image-text data achieves performance comparable to the state-of-the-art on crosslingual semantic similarity tasks.
There has been some recent work on reducing the amount of human-labeled data required to learn multilingual embeddings, enabling work on low-resource languages @cite_23 @cite_14 @cite_5 . These methods take monolingual embeddings learned using existing methods and align them post-hoc in a shared embedding space, exploiting the previously observed structural similarity of monolingual embedding spaces. As discussed above, such post-hoc alignment of monolingual embeddings is inherently suboptimal. For example, some of these approaches use human-labeled data, along with shared surface forms across languages, to learn an alignment in the bilingual setting. Later work builds on this for the multilingual setting, using no human-labeled data and instead employing an adversarial approach to maximize alignment between monolingual embedding spaces given their structural similarities. This method (MUSE) outperforms previous approaches and represents the state of the art; we compare it to our methods below.
{ "cite_N": [ "@cite_5", "@cite_14", "@cite_23" ], "mid": [ "2762484717", "2741602058", "2952190837" ], "abstract": [ "State-of-the-art methods for learning cross-lingual word embeddings have relied on bilingual dictionaries or parallel corpora. Recent studies showed that the need for parallel data supervision can be alleviated with character-level information. While these methods showed encouraging results, they are not on par with their supervised counterparts and are limited to pairs of languages sharing a common alphabet. In this work, we show that we can build a bilingual dictionary between two languages without using any parallel corpora, by aligning monolingual word embedding spaces in an unsupervised way. Without using any character information, our model even outperforms existing supervised methods on cross-lingual tasks for some language pairs. Our experiments demonstrate that our method works very well also for distant language pairs, like English-Russian or English-Chinese. We finally describe experiments on the English-Esperanto low-resource language pair, on which there only exists a limited amount of parallel data, to show the potential impact of our method in fully unsupervised machine translation. Our code, embeddings and dictionaries are publicly available.", "", "Usually bilingual word vectors are trained \"online\"; it has been shown that they can also be found \"offline\", whereby two pre-trained embeddings are aligned with a linear transformation, using dictionaries compiled from expert knowledge. In this work, we prove that the linear transformation between two spaces should be orthogonal. This transformation can be obtained using the singular value decomposition. We introduce a novel \"inverted softmax\" for identifying translation pairs, with which we improve the precision @1 of Mikolov's original mapping from 34% to 43%, when translating a test set composed of both common and rare English words into Italian.
Orthogonal transformations are more robust to noise, enabling us to learn the transformation without expert bilingual signal by constructing a \"pseudo-dictionary\" from the identical character strings which appear in both languages, achieving 40% precision on the same test set. Finally, we extend our method to retrieve the true translations of English sentences from a corpus of 200k Italian sentences with a precision @1 of 68%." ] }
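The SVD-based orthogonal mapping described in the abstract above is the classical orthogonal Procrustes solution: the orthogonal Q minimizing ||XQ - Z||_F is Q = UV^T, where USV^T is the SVD of X^T Z. A minimal sketch on synthetic embeddings (dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

d, n = 8, 100
X = rng.normal(size=(n, d))            # source-language embeddings
# Construct a ground-truth orthogonal map via QR decomposition.
Q_true, _ = np.linalg.qr(rng.normal(size=(d, d)))
Z = X @ Q_true                          # target-language embeddings

# Orthogonal Procrustes: the closed-form solution is Q = U V^T
# where U S V^T is the SVD of X^T Z.
U, _, Vt = np.linalg.svd(X.T @ Z)
Q_hat = U @ Vt

assert np.allclose(Q_hat.T @ Q_hat, np.eye(d))  # orthogonal by construction
print(np.allclose(Q_hat, Q_true))  # True: noiseless data recovers the map
```

With noisy real embeddings the recovery is approximate, but orthogonality of Q_hat (and hence robustness to noise, as the abstract notes) is preserved by construction.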
1905.12430
2947058714
Using proof techniques involving @math covering numbers, we show generalisation error bounds for deep learning with two main improvements over the state of the art. First, our bounds have no explicit dependence on the number of classes except for logarithmic factors. This holds even when formulating the bounds in terms of the @math -norm of the weight matrices, while previous bounds exhibit at least a square-root dependence on the number of classes in this case. Second, we adapt the Rademacher analysis of DNNs to incorporate weight sharing---a task of fundamental theoretical importance which was previously attempted only under very restrictive assumptions. In our results, each convolutional filter contributes only once to the bound, regardless of how many times it is applied. Finally we provide a few further technical improvements, including improving the width dependence from before to after pooling. We also examine our bound's behaviour on artificial data.
In @cite_29 , building on earlier work @cite_23 , the authors provide bounds on the Rademacher complexity of the function class consisting of fully connected neural networks of a given fixed width and depth, with fixed constraints on the Frobenius norms of the weight matrices, which scale as the product @math of the Frobenius norms @math of the weight matrices of each layer.
{ "cite_N": [ "@cite_29", "@cite_23" ], "mid": [ "2963038205", "2962857907" ], "abstract": [ "We study the sample complexity of learning neural networks, by providing new bounds on their Rademacher complexity assuming norm constraints on the parameter matrix of each layer. Compared to previous work, these bounds have improved dependence on the network depth, and under some additional assumptions, are fully independent of the network size (both depth and width). These results are derived using some novel techniques, which may be of independent interest.", "We investigate the capacity, convexity and characterization of a general family of norm-constrained feed-forward networks." ] }
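The capacity term in the bound above, the product over layers of the Frobenius norms of the weight matrices, is straightforward to compute for any network; below is a sketch with random toy weights (the layer sizes are arbitrary assumptions).

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy fully connected network: one weight matrix per layer.
layers = [rng.normal(size=(64, 32)),
          rng.normal(size=(32, 32)),
          rng.normal(size=(32, 10))]

# Capacity term appearing in the Rademacher complexity bound:
#   prod_i ||W_i||_F
frob_product = np.prod([np.linalg.norm(W, "fro") for W in layers])
print(float(frob_product))
```

Because the term is a product over layers, it grows multiplicatively with depth, which is exactly the depth dependence the cited work seeks to improve.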
1905.12430
2947058714
Using proof techniques involving @math covering numbers, we show generalisation error bounds for deep learning with two main improvements over the state of the art. First, our bounds have no explicit dependence on the number of classes except for logarithmic factors. This holds even when formulating the bounds in terms of the @math -norm of the weight matrices, while previous bounds exhibit at least a square-root dependence on the number of classes in this case. Second, we adapt the Rademacher analysis of DNNs to incorporate weight sharing---a task of fundamental theoretical importance which was previously attempted only under very restrictive assumptions. In our results, each convolutional filter contributes only once to the bound, regardless of how many times it is applied. Finally we provide a few further technical improvements, including improving the width dependence from before to after pooling. We also examine our bound's behaviour on artificial data.
where @math is the maximum width of the network. Here @math denotes the spectral norm and, for any matrix @math , @math . Around the same time as the above result appeared, the authors of @cite_24 used a PAC-Bayesian approach to prove an analogous result with @math replaced by a related quantity. Note that this result can also be derived by expressing the bound in terms of @math norms and using Jensen's inequality.
{ "cite_N": [ "@cite_24" ], "mid": [ "2963285844" ], "abstract": [ "We present a generalization bound for feedforward neural networks in terms of the product of the spectral norm of the layers and the Frobenius norm of the weights. The generalization bound is derived using a PAC-Bayes analysis." ] }
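As a quick numerical illustration of the norms appearing in these bounds (layer sizes are arbitrary): the spectral norm of a weight matrix is its largest singular value and never exceeds its Frobenius norm, which is why spectral-norm-based capacity terms can be tighter than Frobenius-norm products.

```python
import numpy as np

rng = np.random.default_rng(3)
layers = [rng.normal(size=(64, 32)), rng.normal(size=(32, 10))]

# Spectral norm = largest singular value of the weight matrix.
spectral = [np.linalg.svd(W, compute_uv=False)[0] for W in layers]
# Frobenius norm = sqrt of the sum of squared singular values.
frobenius = [np.linalg.norm(W, "fro") for W in layers]

# sigma_max(W) <= ||W||_F always holds, so the spectral-norm product
# is never larger than the Frobenius-norm product.
for s, f in zip(spectral, frobenius):
    assert s <= f
print(np.prod(spectral) <= np.prod(frobenius))  # True
```

The gap between the two products can be large for wide layers with many comparable singular values, which is where spectral bounds pay off.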
1905.12365
2947648124
In this paper we propose an approach for monocular 3D object detection from a single RGB image, which leverages a novel disentangling transformation for 2D and 3D detection losses and a novel, self-supervised confidence score for 3D bounding boxes. Our proposed loss disentanglement has the twofold advantage of simplifying the training dynamics in the presence of losses with complex interactions of parameters, and sidestepping the issue of balancing independent regression terms. Our solution overcomes these issues by isolating the contribution made by groups of parameters to a given loss, without changing its nature. We further apply loss disentanglement to another novel, signed Intersection-over-Union criterion-driven loss for improving 2D detection results. Besides our methodological innovations, we critically review the AP metric used in KITTI3D, which emerged as the most important dataset for comparing 3D detection results. We identify and resolve a flaw in the 11-point interpolated AP metric, affecting all previously published detection results and particularly biases the results of monocular 3D detection. We provide extensive experimental evaluations and ablation studies on the KITTI3D and nuScenes datasets, setting new state-of-the-art results on object category car by large margins.
3D-RCNN @cite_17 exploits the idea of using inverse graphics for instance-level, amodal 3D shape and pose estimation of all object instances in an image. They propose a differentiable Render-and-Compare loss, exploiting available 2D annotations in existing datasets to guide the optimization of 3D object shape and pose. In @cite_21 , the recognition task is tackled by jointly reasoning about the 3D shape of multiple objects. Deep-MANTA @cite_3 uses 3D CAD models and annotated 3D parts in a coarse-to-fine localization process. The work in @cite_5 encodes shape priors using keypoints for recovering the 3D pose and shape of a query object. In Mono3D++ @cite_26 , the 3D shape and pose of cars are recovered using a morphable wireframe, optimizing projection consistency between generated 3D hypotheses and corresponding 2D pseudo-measurements.
{ "cite_N": [ "@cite_26", "@cite_21", "@cite_3", "@cite_5", "@cite_17" ], "mid": [ "2904215936", "2071042563", "2951087142", "2963667201", "" ], "abstract": [ "We present a method to infer 3D pose and shape of vehicles from a single image. To tackle this ill-posed problem, we optimize two-scale projection consistency between the generated 3D hypotheses and their 2D pseudo-measurements. Specifically, we use a morphable wireframe model to generate a fine-scaled representation of vehicle shape and pose. To reduce its sensitivity to 2D landmarks, we jointly model the 3D bounding box as a coarse representation which improves robustness. We also integrate three task priors, including unsupervised monocular depth, a ground plane constraint as well as vehicle shape priors, with forward projection errors into an overall energy function.", "Current systems for scene understanding typically represent objects as 2D or 3D bounding boxes. While these representations have proven robust in a variety of applications, they provide only coarse approximations to the true 2D and 3D extent of objects. As a result, object-object interactions, such as occlusions or ground-plane contact, can be represented only superficially. In this paper, we approach the problem of scene understanding from the perspective of 3D shape modeling, and design a 3D scene representation that reasons jointly about the 3D shape of multiple objects. This representation allows to express 3D geometry and occlusion on the fine detail level of individual vertices of 3D wireframe models, and makes it possible to treat dependencies between objects, such as occlusion reasoning, in a deterministic way. 
In our experiments, we demonstrate the benefit of jointly estimating the 3D shape of multiple objects in a scene over working with coarse boxes, on the recently proposed KITTI dataset of realistic street scenes.", "In this paper, we present a novel approach, called Deep MANTA (Deep Many-Tasks), for many-task vehicle analysis from a given image. A robust convolutional network is introduced for simultaneous vehicle detection, part localization, visibility characterization and 3D dimension estimation. Its architecture is based on a new coarse-to-fine object proposal that boosts the vehicle detection. Moreover, the Deep MANTA network is able to localize vehicle parts even if these parts are not visible. In the inference, the network's outputs are used by a real time robust pose estimation algorithm for fine orientation estimation and 3D vehicle localization. We show in experiments that our method outperforms monocular state-of-the-art approaches on vehicle detection, orientation and 3D location tasks on the very challenging KITTI benchmark.", "We present an approach for reconstructing vehicles from a single (RGB) image, in the context of autonomous driving. Though the problem appears to be ill-posed, we demonstrate that prior knowledge about how 3D shapes of vehicles project to an image can be used to reason about the reverse process, i.e., how shapes (back-)project from 2D to 3D. We encode this knowledge in shape priors, which are learnt over a small keypoint-annotated dataset. We then formulate a shape-aware adjustment problem that uses the learnt shape priors to recover the 3D pose and shape of a query object from an image. For shape representation and inference, we leverage recent successes of Convolutional Neural Networks (CNNs) for the task of object and keypoint localization, and train a novel cascaded fully-convolutional architecture to localize vehicle keypoints in images. 
The shape-aware adjustment then robustly recovers shape (3D locations of the detected keypoints) while simultaneously filling in occluded keypoints. To tackle estimation errors incurred due to erroneously detected keypoints, we use an Iteratively Re-weighted Least Squares (IRLS) scheme for robust optimization, and as a by-product characterize noise models for each predicted keypoint. We evaluate our approach on autonomous driving benchmarks, and present superior results to existing monocular, as well as stereo approaches.", "" ] }
1905.12365
2947648124
In this paper we propose an approach for monocular 3D object detection from a single RGB image, which leverages a novel disentangling transformation for 2D and 3D detection losses and a novel, self-supervised confidence score for 3D bounding boxes. Our proposed loss disentanglement has the twofold advantage of simplifying the training dynamics in the presence of losses with complex interactions of parameters, and sidestepping the issue of balancing independent regression terms. Our solution overcomes these issues by isolating the contribution made by groups of parameters to a given loss, without changing its nature. We further apply loss disentanglement to another novel, signed Intersection-over-Union criterion-driven loss for improving 2D detection results. Besides our methodological innovations, we critically review the AP metric used in KITTI3D, which emerged as the most important dataset for comparing 3D detection results. We identify and resolve a flaw in the 11-point interpolated AP metric, which affects all previously published detection results and particularly biases the results of monocular 3D detection. We provide extensive experimental evaluations and ablation studies on the KITTI3D and nuScenes datasets, setting new state-of-the-art results on the object category car by large margins.
3DOP @cite_35 exploits stereo images and prior knowledge about the scene to directly reason in 3D. Stereo R-CNN @cite_18 tackles 3D object detection by exploiting stereo imagery and produces stereo boxes, keypoints, dimensions and viewpoint angles, which are summarized in a learned 3D box estimation module. In MV3D @cite_29 , a sensor-fusion approach for LIDAR and RGB images is presented, approaching 3D object proposal generation and multi-view feature fusion via individual sub-networks. Conversely, Frustum PointNet @cite_13 directly operates on LIDAR point clouds and aligns candidate points provided by corresponding 2D detections to estimate the final, amodal 3D bounding boxes. PointRCNN @cite_15 describes a two-stage framework where the first stage provides bottom-up 3D proposals and the second stage refines them in canonical coordinates. RoarNet @cite_7 applies a 2D detector to first estimate the 3D poses of objects from a monocular image before processing the corresponding 3D point clouds to obtain the final 3D bounding boxes.
{ "cite_N": [ "@cite_35", "@cite_18", "@cite_7", "@cite_29", "@cite_15", "@cite_13" ], "mid": [ "2184393491", "2962807143", "2899807236", "2950952351", "2949708697", "2769205412" ], "abstract": [ "The goal of this paper is to generate high-quality 3D object proposals in the context of autonomous driving. Our method exploits stereo imagery to place proposals in the form of 3D bounding boxes. We formulate the problem as minimizing an energy function encoding object size priors, ground plane as well as several depth informed features that reason about free space, point cloud densities and distance to the ground. Our experiments show significant performance gains over existing RGB and RGB-D object proposal methods on the challenging KITTI benchmark. Combined with convolutional neural net (CNN) scoring, our approach outperforms all existing results on all three KITTI object classes.", "", "We present RoarNet, a new approach for 3D object detection from a 2D image and 3D Lidar point clouds. Based on two-stage object detection framework with PointNet as our backbone network, we suggest several novel ideas to improve 3D object detection performance. The first part of our method, RoarNet_2D, estimates the 3D poses of objects from a monocular image, which approximates where to examine further, and derives multiple candidates that are geometrically feasible. This step significantly narrows down feasible 3D regions, which otherwise requires demanding processing of 3D point clouds in a huge search space. Then the second part, RoarNet_3D, takes the candidate regions and conducts in-depth inferences to conclude final poses in a recursive manner. Inspired by PointNet, RoarNet_3D processes 3D point clouds directly without any loss of data, leading to precise detection. We evaluate our method in KITTI, a 3D object detection benchmark. Our result shows that RoarNet has superior performance to state-of-the-art methods that are publicly available. 
Remarkably, RoarNet also outperforms state-of-the-art methods even in settings where Lidar and camera are not time synchronized, which is practically important for actual driving environments. RoarNet is implemented in Tensorflow and publicly available with pre-trained models.", "This paper aims at high-accuracy 3D object detection in autonomous driving scenario. We propose Multi-View 3D networks (MV3D), a sensory-fusion framework that takes both LIDAR point cloud and RGB images as input and predicts oriented 3D bounding boxes. We encode the sparse 3D point cloud with a compact multi-view representation. The network is composed of two subnetworks: one for 3D object proposal generation and another for multi-view feature fusion. The proposal network generates 3D candidate boxes efficiently from the bird's eye view representation of 3D point cloud. We design a deep fusion scheme to combine region-wise features from multiple views and enable interactions between intermediate layers of different paths. Experiments on the challenging KITTI benchmark show that our approach outperforms the state-of-the-art by around 25 and 30 AP on the tasks of 3D localization and 3D detection. In addition, for 2D detection, our approach obtains 10.3 higher AP than the state-of-the-art on the hard data among the LIDAR-based methods.", "", "In this work, we study 3D object detection from RGB-D data in both indoor and outdoor scenes. While previous methods focus on images or 3D voxels, often obscuring natural 3D patterns and invariances of 3D data, we directly operate on raw point clouds by popping up RGB-D scans. However, a key challenge of this approach is how to efficiently localize objects in point clouds of large-scale scenes (region proposal). Instead of solely relying on 3D proposals, our method leverages both mature 2D object detectors and advanced 3D deep learning for object localization, achieving efficiency as well as high recall for even small objects. 
Benefited from learning directly in raw point clouds, our method is also able to precisely estimate 3D bounding boxes even under strong occlusion or with very sparse points. Evaluated on KITTI and SUN RGB-D 3D detection benchmarks, our method outperforms the state of the art by remarkable margins while having real-time capability." ] }
1905.12226
2947631309
Multiple instance learning (MIL) is concerned with learning from bags of instances, where only bag labels are given and instance labels are unknown. Existing approaches in this field were mainly designed for bag-level label prediction (predicting labels for bags) rather than instance-level prediction (predicting labels for instances), with the task loss being defined only at the bag level. This restricts their application in many tasks where the instance-level labels are of greater interest. In this paper, we propose a novel algorithm, whose loss is specifically defined at the instance level, to address instance-level label prediction in MIL. We prove that the loss of this algorithm can be unbiasedly and consistently estimated without using instance labels, under the i.i.d. assumption. An empirical study validates the above statements and shows that the proposed algorithm can achieve superior instance-level and comparable bag-level performance compared to state-of-the-art MIL methods. In addition, it shows that the proposed method can achieve results similar to those of the fully supervised model (trained with instance labels) for label prediction at the instance level.
In addition, kotzias2015group proposed a constraint on instance-level label prediction based on the similarity between instances. They encouraged label predictions to be close for similar instances. In this line of utilizing instance similarity, some methods proposed to directly recognize key (positive) instances @cite_0 @cite_5 . The main difficulty of these similarity-based methods is that it is hard to design an appropriate distance measure between instances, especially for non-vectorized data like images and text.
{ "cite_N": [ "@cite_0", "@cite_5" ], "mid": [ "2163474322", "2115672776" ], "abstract": [ "We present a new multiple-instance (MI) learning technique (EM-DD) that combines EM with the diverse density (DD) algorithm. EM-DD is a general-purpose MI algorithm that can be applied with boolean or real-value labels and makes real-value predictions. On the boolean Musk benchmarks, the EM-DD algorithm without any tuning significantly outperforms all previous algorithms. EM-DD is relatively insensitive to the number of relevant attributes in the data set and scales up well to large bag sizes. Furthermore, EM-DD provides a new framework for MI learning, in which the MI problem is converted to a single-instance setting by using EM to estimate the instance responsible for the label of the bag.", "The goal of traditional multi-instance learning (MIL) is to predict the labels of the bags, whereas in many real applications, it is desirable to get the instance labels, especially the labels of key instances that trigger the bag labels, in addition to getting bag labels. Such a problem has been largely unexplored before. In this paper, we formulate the Key Instance Detection (KID) problem, and propose a voting framework (VF) solution to KID. The key of VF is to exploit the relationship among instances, represented by a citer kNN graph. This graph is dierent from commonly used nearest neighbor graphs, but is suitable for KID. Experiments validate the eectiveness of VF for KID. Additionally, VF also outperforms state-of-the-art MIL approaches on the performance of bag label prediction." ] }
1905.12220
2968510901
In a Web plagued by disappearing resources, Web archive collections provide a valuable means of preserving Web resources important to the study of past events ranging from elections to disease outbreaks. These archived collections start with seed URIs (Uniform Resource Identifiers) hand-selected by curators. Curators produce high-quality seeds by removing non-relevant URIs and adding URIs from credible and authoritative sources, but this ability comes at a cost: it is time-consuming to collect these seeds. Two main strategies adopted by curators for discovering seeds include scraping Web (e.g., Google) Search Engine Result Pages (SERPs) and social media (e.g., Twitter) SERPs. In this work, we studied three social media platforms in order to provide some insight into the characteristics of seeds generated from different sources. First, we developed a simple vocabulary for describing social media posts across different platforms. Second, we introduced a novel source for generating seeds from URIs in the threaded conversations of social media posts created by single or multiple users. Users on social media sites routinely create and share posts about news events consisting of hand-selected URIs of news stories, tweets, videos, etc. In this work, we call these posts micro-collections, whether shared on Reddit or Twitter, and we consider them an important source for seeds. This is because the effort taken to create micro-collections is an indication of editorial activity and a demonstration of domain expertise. Third, we generated 23,112 seed collections with text and hashtag queries from 449,347 social media posts from Reddit, Twitter, and Scoop.it. We collected in total 120,444 URIs from the conventional scraped SERP posts and micro-collections. We characterized the resultant seed collections across multiple dimensions, including the distribution of URIs, precision, ages, diversity of webpages, etc.
We showed that seeds generated by scraping SERPs had a higher median probability (0.63) of producing relevant URIs than micro-collections (0.5). However, for Twitter collections generated with hashtags, micro-collections were more likely to produce seeds with higher precision than conventional SERP collections. Also, micro-collections were more likely to produce older webpages and more non-HTML documents.
Not all collection building uses focused crawling. @cite_16 proposed a methodology for extracting sub-collections from Web archive collections focused on specific topics and events (called the ). The is defined as a collection of documents in a Web archive collected using a . Our research differs from in two major ways. First, Gossen proposed generating collections from within the Web archives, while we propose generating seeds from the live social Web. Second, Gossen proposed running an algorithm over a sub-collection specification on a Web archive to generate a sub-collection. This means the decision of whether a URI belongs in a sub-collection is encoded in the specification of an algorithm. However, in this work, we leverage the judgment of humans on social media.
{ "cite_N": [ "@cite_16" ], "mid": [ "2337316777" ], "abstract": [ "Web archives capture the history of the Web and are therefore an important source to study how societal developments have been reflected on the Web. However, the large size of Web archives and their temporal nature pose many challenges to researchers interested in working with these collections. In this work, we describe the challenges of working with Web archives and propose the research methodology of extracting and studying sub-collections of the archive focused on specific topics and events. We discuss the opportunities and challenges of this approach and suggest a framework for creating sub-collections." ] }
1905.12220
2968510901
In a Web plagued by disappearing resources, Web archive collections provide a valuable means of preserving Web resources important to the study of past events ranging from elections to disease outbreaks. These archived collections start with seed URIs (Uniform Resource Identifiers) hand-selected by curators. Curators produce high-quality seeds by removing non-relevant URIs and adding URIs from credible and authoritative sources, but this ability comes at a cost: it is time-consuming to collect these seeds. Two main strategies adopted by curators for discovering seeds include scraping Web (e.g., Google) Search Engine Result Pages (SERPs) and social media (e.g., Twitter) SERPs. In this work, we studied three social media platforms in order to provide some insight into the characteristics of seeds generated from different sources. First, we developed a simple vocabulary for describing social media posts across different platforms. Second, we introduced a novel source for generating seeds from URIs in the threaded conversations of social media posts created by single or multiple users. Users on social media sites routinely create and share posts about news events consisting of hand-selected URIs of news stories, tweets, videos, etc. In this work, we call these posts micro-collections, whether shared on Reddit or Twitter, and we consider them an important source for seeds. This is because the effort taken to create micro-collections is an indication of editorial activity and a demonstration of domain expertise. Third, we generated 23,112 seed collections with text and hashtag queries from 449,347 social media posts from Reddit, Twitter, and Scoop.it. We collected in total 120,444 URIs from the conventional scraped SERP posts and micro-collections. We characterized the resultant seed collections across multiple dimensions, including the distribution of URIs, precision, ages, diversity of webpages, etc.
We showed that seeds generated by scraping SERPs had a higher median probability (0.63) of producing relevant URIs than micro-collections (0.5). However, for Twitter collections generated with hashtags, micro-collections were more likely to produce seeds with higher precision than conventional SERP collections. Also, micro-collections were more likely to produce older webpages and more non-HTML documents.
In a similar work, @cite_19 adapted some portions of the in a method to extract event-centric documents from Web archives based on a specialized focused extraction algorithm. They defined two broad kinds of events based on time: and . The goal of the event-centric extraction process is, given an event input and a Web archive, to generate an interlinked collection of documents relevant to the input event that meet the . The differences between our research and Gossen's previous work @cite_16 also apply to this work. However, we adapted Gossen's categorization of events as either or , and we renamed to (Table ). Similar to , @cite_28 presented an approach for extracting event-centric sub-collections from Web archives. Their method extracts documents not only related to the input event, but also documents describing related events (e.g., premises and consequences). 's method utilized Wikipedia pages as inputs to generate event-centric collections. In this work, however, we used Wikipedia references to generate our gold standard dataset.
{ "cite_N": [ "@cite_28", "@cite_19", "@cite_16" ], "mid": [ "2809161927", "2963844053", "2337316777" ], "abstract": [ "Web archives, such as the Internet Archive, preserve an unprecedented abundance of materials regarding major events and transformations in our society. In this paper, we present an approach for building event-centric sub-collections from such large archives, which includes not only the core documents related to the event itself but, even more importantly, documents describing related aspects (e.g., premises and consequences). This is achieved by identifying relevant concepts and entities from a knowledge base, and then detecting their mentions in documents, which are interpreted as indicators for relevance. We extensively evaluate our system on two diachronic corpora, the New York Times Corpus and the US Congressional Record; additionally, we test its performance on the TREC KBA Stream Corpus and on the TREC-CAR dataset, two publicly available large-scale web collections.", "Web archives are typically very broad in scope and extremely large in scale. This makes data analysis appear daunting, especially for non-computer scientists. These collections constitute an increasingly important source for researchers in the social sciences, the historical sciences and journalists interested in studying past events. However, there are currently no access methods that help users to efficiently access information, in particular about specific events, beyond the retrieval of individual disconnected documents. Therefore we propose a novel method to extract event-centric document collections from large scale Web archives. This method relies on a specialized focused extraction algorithm. 
Our experiments on the German Web archive (covering a time period of 19 years) demonstrate that our method enables the extraction of event-centric collections for different event types.", "Web archives capture the history of the Web and are therefore an important source to study how societal developments have been reflected on the Web. However, the large size of Web archives and their temporal nature pose many challenges to researchers interested in working with these collections. In this work, we describe the challenges of working with Web archives and propose the research methodology of extracting and studying sub-collections of the archive focused on specific topics and events. We discuss the opportunities and challenges of this approach and suggest a framework for creating sub-collections." ] }
1905.12220
2968510901
In a Web plagued by disappearing resources, Web archive collections provide a valuable means of preserving Web resources important to the study of past events ranging from elections to disease outbreaks. These archived collections start with seed URIs (Uniform Resource Identifiers) hand-selected by curators. Curators produce high-quality seeds by removing non-relevant URIs and adding URIs from credible and authoritative sources, but this ability comes at a cost: it is time-consuming to collect these seeds. Two main strategies adopted by curators for discovering seeds include scraping Web (e.g., Google) Search Engine Result Pages (SERPs) and social media (e.g., Twitter) SERPs. In this work, we studied three social media platforms in order to provide some insight into the characteristics of seeds generated from different sources. First, we developed a simple vocabulary for describing social media posts across different platforms. Second, we introduced a novel source for generating seeds from URIs in the threaded conversations of social media posts created by single or multiple users. Users on social media sites routinely create and share posts about news events consisting of hand-selected URIs of news stories, tweets, videos, etc. In this work, we call these posts micro-collections, whether shared on Reddit or Twitter, and we consider them an important source for seeds. This is because the effort taken to create micro-collections is an indication of editorial activity and a demonstration of domain expertise. Third, we generated 23,112 seed collections with text and hashtag queries from 449,347 social media posts from Reddit, Twitter, and Scoop.it. We collected in total 120,444 URIs from the conventional scraped SERP posts and micro-collections. We characterized the resultant seed collections across multiple dimensions, including the distribution of URIs, precision, ages, diversity of webpages, etc.
We showed that seeds generated by scraping SERPs had a higher median probability (0.63) of producing relevant URIs than micro-collections (0.5). However, for Twitter collections generated with hashtags, micro-collections were more likely to produce seeds with higher precision than conventional SERP collections. Also, micro-collections were more likely to produce older webpages and more non-HTML documents.
Selecting good seeds is challenging and has not been extensively studied. Collection building research often acknowledges the importance of selecting good seeds and its link to the performance of the resulting systems, but typically pays more attention to the mechanisms of building the collection than to seed selection. The challenge of selecting good seeds is embodied in the idea that it is difficult to define ``good.'' This challenge is captured by Bergmark's statement @cite_6 : ``It is unclear what makes a good seed URL, but intuitively it should be rich in links, yet not too broad in scope.'' @cite_23 argued that the seed selection problem for Web crawlers is not trivial, and proposed different seed selection strategies based on PageRank, number of outlinks, and website importance. They also showed that different seeds may result in collections that are considered ``good'' or ``bad.'' While there have been efforts to automatically generate seeds, many of these methods (e.g., Prasath and Öztürk @cite_41 ) target generating seeds for Web crawlers that build indexes for search engines, and not seeds for focused crawlers or Web archive collections.
{ "cite_N": [ "@cite_41", "@cite_23", "@cite_6" ], "mid": [ "156343458", "2133433261", "" ], "abstract": [ "This paper presents a potential seed selection algorithm for web crawlers using a gain - share scoring approach. Initially we consider a set of arbitrarily chosen tourism queries. Each query is given to the selected N commercial Search Engines (SEs); top msearch results for each SE are obtained, and each of these mresults is manually evaluated and assigned a relevance score. For each of m results, a gain - share score is computed using their hyperlinks structure across N ranked lists. Gain score of each link present in each of m results and a portion of the gain score is propagated to the share score of each of m results. This updated share scores of each of m results determine the potential set of seed URLs for web crawling. Experimental results on tourism related web data illustrate the effectiveness of the proposed seed selection algorithm.", "This paper identifies and explores the problem of seed selection in a web-scale crawler. We argue that seed selection is not a trivial but very important problem. Selecting proper seeds can increase the number of pages a crawler will discover, and can result in a collection with more good\" and less \"bad\" pages. Based on the analysis of the graph structure of the web, we propose several seed selection algorithms. Effectiveness of these algorithms is proved by our experimental results on real web data.", "" ] }
1905.12220
2968510901
In a Web plagued by disappearing resources, Web archive collections provide a valuable means of preserving Web resources important to the study of past events ranging from elections to disease outbreaks. These archived collections start with seed URIs (Uniform Resource Identifiers) hand-selected by curators. Curators produce high-quality seeds by removing non-relevant URIs and adding URIs from credible and authoritative sources, but this ability comes at a cost: it is time-consuming to collect these seeds. Two main strategies adopted by curators for discovering seeds include scraping Web (e.g., Google) Search Engine Result Pages (SERPs) and social media (e.g., Twitter) SERPs. In this work, we studied three social media platforms in order to provide some insight into the characteristics of seeds generated from different sources. First, we developed a simple vocabulary for describing social media posts across different platforms. Second, we introduced a novel source for generating seeds from URIs in the threaded conversations of social media posts created by single or multiple users. Users on social media sites routinely create and share posts about news events consisting of hand-selected URIs of news stories, tweets, videos, etc. In this work, we call these posts micro-collections, whether shared on Reddit or Twitter, and we consider them an important source for seeds. This is because the effort taken to create micro-collections is an indication of editorial activity and a demonstration of domain expertise. Third, we generated 23,112 seed collections with text and hashtag queries from 449,347 social media posts from Reddit, Twitter, and Scoop.it. We collected in total 120,444 URIs from the conventional scraped SERP posts and micro-collections. We characterized the resultant seed collections across multiple dimensions, including the distribution of URIs, precision, ages, diversity of webpages, etc.
We showed that seeds generated by scraping SERPs had a higher median probability (0.63) of producing relevant URIs than micro-collections (0.5). However, for Twitter collections generated with hashtags, micro-collections were more likely to produce seeds with higher precision than conventional SERP collections. Also, micro-collections were more likely to produce older webpages and more non-HTML documents.
@cite_42 proposed a customized method of generating seeds for focused crawlers based on a user's past Web usage information, which captures the interests of the user. Since this method depends on historical usage information, its performance is tied to the availability of such historical data, which might be lacking due to the absence of domain knowledge or privacy concerns. As part of the Crisis, Tragedy, and Recovery Network project, @cite_35 proposed using URIs found in tweet collections (generated with hashtags and keywords) as seeds to quickly bootstrap Web archiving tasks for sudden emergencies and disasters. Similarly, we consider extracting seeds from tweets, but expand the areas for extracting seeds beyond scraping Twitter SERPs. Additionally, we identify post classes of tweets as part of an effort to characterize the nature of seeds generated from different post classes (Table ). @cite_17 proposed extracting diverse seeds from tweets in a Twitter URI graph for the Web crawlers of digital libraries such as CiteSeerX. Even though their work does not target the generation of seeds for collections of stories and events, which is a focus of our work, the notion of seed diversity is adopted in our work (Section ).
{ "cite_N": [ "@cite_35", "@cite_42", "@cite_17" ], "mid": [ "1999544021", "2068410751", "2188130994" ], "abstract": [ "In the event of emergencies and disasters, massive amounts of web resources are generated and shared. Due to the rapidly changing nature of those resources, it is important to start archiving them as soon as a disaster occurs. This led us to develop a prototype system for constructing archives with minimum human intervention using the seed URLs extracted from tweet collections. We present the details of our prototype system. We applied it to five tweet collections that had been developed in advance, for evaluation. We also identify five categories of non- relevant files and conclude with a discussion of findings from the evaluation.", "Seed URLs selection for focused Web crawler intends to guide related and valuable information that meets a user's personal information requirement and provide more effective information retrieval. In this paper, we propose a seed URLs selection approach based on user-interest ontology. In order to enrich semantic query, we first intend to apply Formal Concept Analysis to construct user-interest concept lattice with user log profile. By using concept lattice merger, we construct the user-interest ontology which can describe the implicit concepts and relationships between them more appropriately for semantic representation and query match. On the other hand, we make full use of the user-interest ontology for extracting the user interest topic area and expanding user queries to receive the most related pages as seed URLs, which is an entrance of the focused crawler. In particular, we focus on how to refine the user topic area using the bipartite directed graph. The experiment proves that the user-interest ontology can be achieved effectively by merging concept lattices and that our proposed approach can select high quality seed URLs collection and improve the average precision of focused Web crawler.", "The last two decades have witnessed an exponential rise in web content from a plethora of domains, which has necessitated the use of domain-specific search engines. Diversity of crawled content is one of the crucial aspects of a domain-specific search engine. To a large extent, diversity is governed by the initial set of seed URLs. Most of the existing approaches rely on manual effort for seed selection. In this work we automate this process using URLs posted on Twitter. We propose an algorithm to get a set of diverse seed URLs from a Twitter URL graph. We compare the performance of our approach against the baseline zero similarity seed selection method and find that our approach beats the baseline by a significant margin." ] }
1905.12220
2968510901
In a Web plagued by disappearing resources, Web archive collections provide a valuable means of preserving Web resources important to the study of past events ranging from elections to disease outbreaks. These archived collections start with seed URIs (Uniform Resource Identifiers) hand-selected by curators. Curators produce high-quality seeds by removing non-relevant URIs and adding URIs from credible and authoritative sources, but this ability comes at a cost: it is time-consuming to collect these seeds. Two main strategies adopted by curators for discovering seeds include scraping Web (e.g., Google) Search Engine Result Pages (SERPs) and social media (e.g., Twitter) SERPs. In this work, we studied three social media platforms in order to provide some insight into the characteristics of seeds generated from different sources. First, we developed a simple vocabulary for describing social media posts across different platforms. Second, we introduced a novel source for generating seeds from URIs in the threaded conversations of social media posts created by single or multiple users. Users on social media sites routinely create and share posts about news events consisting of hand-selected URIs of news stories, tweets, videos, etc. In this work, we call these posts micro-collections, whether shared on Reddit or Twitter, and we consider them an important source for seeds. This is because the effort taken to create micro-collections is an indication of editorial activity and a demonstration of domain expertise. Third, we generated 23,112 seed collections with text and hashtag queries from 449,347 social media posts from Reddit, Twitter, and Scoop.it. We collected in total 120,444 URIs from the conventional scraped SERP posts and micro-collections. We characterized the resultant seed collections across multiple dimensions including the distribution of URIs, precision, ages, diversity of webpages, etc.
We showed that seeds generated by scraping SERPs had a higher median probability (0.63) of producing relevant URIs than micro-collections (0.5). However, for Twitter collections generated with hashtags, micro-collections were more likely than conventional SERP collections to produce seeds with a higher precision. Also, micro-collections were more likely to produce older webpages and more non-HTML documents.
In previous work @cite_9 , we showed that collections generated from social media sources such as Reddit, Storify, Twitter, and Wikipedia are similar to Archive-It collections across multiple dimensions such as the distribution of sources and topics, content and URI diversity, etc. These findings suggest that curators may consider extracting URIs from these sources in order to begin or augment collections about various news topics. Here, we adopt a subset of the dimensions for comparing collections. Similarly, in another previous work @cite_24 , as part of an effort to understand the behavior of SERPs, a popular source for generating seeds, we investigated ``refinding'' news stories on the Google SERP by tracking the URIs returned from Google every day for over seven months. We discovered that the probability of finding the same URI of a news story diminished drastically after a week (0.01 -- 0.11). These findings suggest that, as time progresses, it becomes more difficult to find the same news story with the same query on the Google SERP. Therefore, collection building efforts that scrape SERPs are highly sensitive to the query issue dates.
{ "cite_N": [ "@cite_24", "@cite_9" ], "mid": [ "2803559548", "2820220568" ], "abstract": [ "Event-based collections are often started with a web search, but the search results you find on Day 1 may not be the same as those you find on Day 7. In this paper, we consider collections that originate from extracting URIs (Uniform Resource Identifiers) from Search Engine Result Pages (SERPs). Specifically, we seek to provide insight about the retrievability of URIs of news stories found on Google, and to answer two main questions: first, can one \"refind\" the same URI of a news story (for the same query) from Google after a given time? Second, what is the probability of finding a story on Google over a given period of time? To answer these questions, we issued seven queries to Google every day for over seven months (2017-05-25 to 2018-01-12) and collected links from the first five SERPs to generate seven collections for each query. The queries represent public interest stories: \"healthcare bill,\" \"manchester bombing,\" \"london terrorism,\" \"trump russia,\" \"travel ban,\" \"hurricane harvey,\" and \"hurricane irma.\" We tracked each URI in all collections over time to estimate the discoverability of URIs from the first five SERPs. Our results showed that the daily average rate at which stories were replaced on the default Google SERP ranged from 0.21 - 0.54, and a weekly rate of 0.39 - 0.79, suggesting the fast replacement of older stories by newer stories. The probability of finding the same URI of a news story after one day from the initial appearance on the SERP ranged from 0.34 - 0.44. After a week, the probability of finding the same news stories diminishes rapidly to 0.01 - 0.11. In addition to the reporting of these probabilities, we also provide two predictive models for estimating the probability of finding the URI of an arbitrary news story on SERPs as a function of time. The web archiving community considers link rot and content drift important reasons for collection building. Similarly, our findings suggest that due to the difficulty in retrieving the URIs of news stories from Google, collection building that originates from search engines should begin as soon as possible in order to capture the first stages of events, and should persist in order to capture the evolution of the events, because it becomes more difficult to find the same news stories with the same queries on Google, as time progresses.", "Human-generated collections of archived web pages are expensive to create, but provide a critical source of information for researchers studying historical events. Hand-selected collections of web pages about events shared by users on social media offer the opportunity for bootstrapping archived collections. We investigated if collections generated automatically and semi-automatically from social media sources such as Storify, Reddit, Twitter, and Wikipedia are similar to Archive-It human-generated collections. This is a challenging task because it requires comparing collections that may cater to different needs. It is also challenging to compare collections since there are many possible measures to use as a baseline for collection comparison: how does one narrow down this list to metrics that reflect if two collections are similar or dissimilar? We identified social media sources that may provide similar collections to Archive-It human-generated collections in two main steps. First, we explored the state of the art in collection comparison and defined a suite of seven measures (Collection Characterizing Suite - CCS) to describe the individual collections. Second, we calculated the distances between the CCS vectors of Archive-It collections and the CCS vectors of collections generated automatically and semi-automatically from social media sources, to identify social media collections most similar to Archive-It collections. The CCS distance comparison was done for three topics: \"Ebola Virus,\" \"Hurricane Harvey,\" and \"2016 Pulse Nightclub Shooting.\" Our results showed that social media sources such as Reddit, Storify, Twitter, and Wikipedia produce collections that are similar to Archive-It collections. Consequently, curators may consider extracting URIs from these sources in order to begin or augment collections about various news topics." ] }
1905.12498
2947527348
Image translation across different domains has attracted much attention in both machine learning and computer vision communities. Taking the translation from source domain @math to target domain @math as an example, existing algorithms mainly rely on two kinds of loss for training: One is the discrimination loss, which is used to differentiate images generated by the models and natural images; the other is the reconstruction loss, which measures the difference between an original image and the reconstructed version through @math translation. In this work, we introduce a new kind of loss, multi-path consistency loss, which evaluates the differences between direct translation @math and indirect translation @math with @math as an auxiliary domain, to regularize training. For multi-domain translation (at least, three) which focuses on building translation models between any two domains, at each training iteration, we randomly select three domains, set them respectively as the source, auxiliary and target domains, build the multi-path consistency loss and optimize the network. For two-domain translation, we need to introduce an additional auxiliary domain and construct the multi-path consistency loss. We conduct various experiments to demonstrate the effectiveness of our proposed methods, including face-to-face translation, paint-to-photo translation, and de-raining de-noising translation.
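As a hedged sketch of the multi-path consistency idea described above (the notation is illustrative and not taken from the abstract: @math placeholders hide the actual domain symbols, so here @math -style translators are written as @math -free symbols @math ... concretely, let S, A, and T denote the source, auxiliary, and target domains, and let the translator from S to T be written G_{S -> T}; the L1 distance is one typical choice):

```latex
% Sketch of a multi-path consistency term; notation is illustrative,
% not quoted from the paper. S = source, A = auxiliary, T = target.
\mathcal{L}_{\mathrm{mpc}}
  = \mathbb{E}_{x \sim p_S}
    \big\| \, G_{S \to T}(x) \;-\; G_{A \to T}\big( G_{S \to A}(x) \big) \, \big\|_1
```

The direct path and the two-hop path through the auxiliary domain should produce the same target-domain image; this term regularizes training on top of the usual discrimination and reconstruction losses.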
GAN @cite_4 was first proposed to generate images in an unsupervised manner. A GAN is made up of a generator and a discriminator: the generator maps a random noise vector to an image, and the discriminator judges whether an image is a natural one or a fake one. The training of a GAN is formulated as a two-player minimax game. Various versions of GANs have been proposed to exploit this capability for different image generation tasks @cite_19 @cite_11 @cite_6 . InfoGAN @cite_25 learns to disentangle latent representations by maximizing the mutual information between a small subset of the latent variables and the observation. @cite_16 presented a series of deep convolutional generative adversarial networks (DCGANs) for high-quality image generation and unsupervised image classification tasks, bridging convolutional neural networks and unsupervised image generation. SRGAN @cite_23 maps low-resolution images to high-resolution images. @cite_21 proposed a general conditional GAN for image-to-image translation tasks, which can be used to solve label-to-street-scene and aerial-to-map translation problems.
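The two-player minimax game mentioned above is commonly written as follows (this is the standard formulation from the GAN literature, not quoted from the cited abstracts):

```latex
% Standard GAN objective: G is the generator, D the discriminator,
% p_data the data distribution, and p_z the noise prior.
\min_G \max_D \; V(D, G)
  = \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]
```

The discriminator maximizes this value to separate natural images from generated ones, while the generator minimizes it to make its outputs indistinguishable from natural images.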
{ "cite_N": [ "@cite_4", "@cite_21", "@cite_6", "@cite_19", "@cite_23", "@cite_16", "@cite_25", "@cite_11" ], "mid": [ "", "2964287360", "2788901923", "2411541852", "2523714292", "2173520492", "2963226019", "2952010110" ], "abstract": [ "", "Image-to-image translation tasks have been widely investigated with Generative Adversarial Networks (GANs) and dual learning. However, existing models lack the ability to control the translated results in the target domain and their results usually lack of diversity in the sense that a fixed image usually leads to (almost) deterministic translation result. In this paper, we study a new problem, conditional image-to-image translation, which is to translate an image from the source domain to the target domain conditioned on a given image in the target domain. It requires that the generated image should inherit some domain-specific features of the conditional image from the target domain. Therefore, changing the conditional image in the target domain will lead to diverse translation results for a fixed input image from the source domain, and therefore the conditional input image helps to control the translation results. We tackle this problem with unpaired data based on GANs and dual learning. We twist two conditional translation models (one translation from A domain to B domain, and the other one from B domain to A domain) together for inputs combination and reconstruction while preserving domain independent features. We carry out experiments on men's faces from-to women's faces translation and edges to shoes&bags translations. The results demonstrate the effectiveness of our proposed method.", "Restoring face images from distortions is important in face recognition applications and is challenged by multiple scale issues, which is still not well-solved in research area. In this paper, we present a Sequential Gating Ensemble Network (SGEN) for multi-scale face restoration issue. We first employ the principle of ensemble learning into SGEN architecture design to reinforce predictive performance of the network. The SGEN aggregates multi-level base-encoders and base-decoders into the network, which enables the network to contain multiple scales of receptive field. Instead of combining these base-en decoders directly with non-sequential operations, the SGEN takes base-en decoders from different levels as sequential data. Specifically, the SGEN learns to sequentially extract high level information from base-encoders in bottom-up manner and restore low level information from base-decoders in top-down manner. Besides, we propose to realize bottom-up and top-down information combination and selection with Sequential Gating Unit (SGU). The SGU sequentially takes two inputs from different levels and decides the output based on one active input. Experiment results demonstrate that our SGEN is more effective at multi-scale human face restoration with more image details and less noise than state-of-the-art image restoration models. By using adversarial training, SGEN also produces more visually preferred results than other models through subjective evaluation.", "We introduce the adversarially learned inference (ALI) model, which jointly learns a generation network and an inference network using an adversarial process. The generation network maps samples from stochastic latent variables to the data space while the inference network maps training examples in data space to the space of latent variables. An adversarial game is cast between these two networks and a discriminative network is trained to distinguish between joint latent data-space samples from the generative network and joint samples from the inference network. We illustrate the ability of the model to learn mutually coherent inference and generation networks through the inspections of model samples and reconstructions and confirm the usefulness of the learned representations by obtaining a performance competitive with state-of-the-art on the semi-supervised SVHN and CIFAR10 tasks.", "Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.", "In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.", "This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner. InfoGAN is a generative adversarial network that also maximizes the mutual information between a small subset of the latent variables and the observation. We derive a lower bound of the mutual information objective that can be optimized efficiently. Specifically, InfoGAN successfully disentangles writing styles from digit shapes on the MNIST dataset, pose from lighting of 3D rendered images, and background digits from the central digit on the SVHN dataset. It also discovers visual concepts that include hair styles, presence absence of eyeglasses, and emotions on the CelebA face dataset. Experiments show that InfoGAN learns interpretable representations that are competitive with representations learned by existing supervised methods. For an up-to-date version of this paper, please see https: arxiv.org abs 1606.03657.", "In this paper, we propose a novel generative model named Stacked Generative Adversarial Networks (SGAN), which is trained to invert the hierarchical representations of a bottom-up discriminative network. Our model consists of a top-down stack of GANs, each learned to generate lower-level representations conditioned on higher-level representations. A representation discriminator is introduced at each feature hierarchy to encourage the representation manifold of the generator to align with that of the bottom-up discriminative network, leveraging the powerful discriminative representations to guide the generative model. In addition, we introduce a conditional loss that encourages the use of conditional information from the layer above, and a novel entropy loss that maximizes a variational lower bound on the conditional entropy of generator outputs. We first train each stack independently, and then train the whole model end-to-end. Unlike the original GAN that uses a single noise vector to represent all the variations, our SGAN decomposes variations into multiple levels and gradually resolves uncertainties in the top-down generative process. Based on visual inspection, Inception scores and visual Turing test, we demonstrate that SGAN is able to generate images of much higher quality than GANs without stacking." ] }
1905.12207
2947576584
We study deep neural networks with polynomial activations, particularly their expressive power. For a fixed architecture and activation degree, a polynomial neural network defines an algebraic map from weights to polynomials. The image of this map is the functional space associated to the network, and it is an irreducible algebraic variety upon taking closure. This paper proposes the dimension of this variety as a precise measure of the expressive power of polynomial neural networks. We obtain several theoretical results regarding this dimension as a function of architecture, including an exact formula for high activation degrees, as well as upper and lower bounds on layer widths in order for deep polynomial networks to fill the ambient functional space. We also present computational evidence that it is profitable in terms of expressiveness for layer widths to increase monotonically and then decrease monotonically. Finally, we link our study to favorable optimization properties when training weights, and we draw intriguing connections with tensor and polynomial decompositions.
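As a minimal illustration of the algebraic map from weights to polynomials (a sketch with arbitrarily chosen sizes, not code from the paper): a one-hidden-layer network with coordinate-wise activation rho(t) = t^2 computes a quadratic form in its input, and the coefficients of that polynomial can be read off directly from the weights.

```python
import numpy as np

# Sketch: a one-hidden-layer polynomial network with activation rho(t) = t^2.
# Its output is f(x) = w2 . (W1 x)^2, which equals the quadratic form x^T M x
# with M = sum_i w2[i] * W1[i] W1[i]^T -- the "map from weights to polynomials".
rng = np.random.default_rng(0)
d, h = 4, 3                      # input width and hidden width (arbitrary)
W1 = rng.normal(size=(h, d))     # first-layer weights
w2 = rng.normal(size=h)          # second-layer weights

def network(x):
    """Polynomial network: weighted sum of squared hidden pre-activations."""
    return w2 @ (W1 @ x) ** 2

# Coefficient matrix of the degree-2 polynomial computed by the network.
M = sum(w2[i] * np.outer(W1[i], W1[i]) for i in range(h))

x = rng.normal(size=d)
assert np.isclose(network(x), x @ M @ x)   # same quadratic form
```

The functional space of this architecture is thus the set of symmetric matrices expressible as a signed combination of h rank-one terms, and the dimension of (the closure of) that set is the expressiveness measure the abstract refers to.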
The study of the expressive power of neural networks dates back to seminal work on the universality of networks as function approximators @cite_15 @cite_4 . More recently, there has been research supporting the hypothesis of ``depth efficiency'', i.e., the fact that deep networks can approximate functions more efficiently than shallow networks @cite_27 @cite_10 @cite_9 @cite_25 . Our paper differs from this line of work, in that we do not emphasize approximation properties, but rather study the functions that can be expressed using a network.
{ "cite_N": [ "@cite_4", "@cite_9", "@cite_27", "@cite_15", "@cite_10", "@cite_25" ], "mid": [ "2137983211", "2962845550", "2141473882", "2103496339", "1544557312", "2964290344" ], "abstract": [ "Abstract This paper rigorously establishes that standard multilayer feedforward networks with as few as one hidden layer using arbitrary squashing functions are capable of approximating any Borel measurable function from one finite dimensional space to another to any desired degree of accuracy, provided sufficiently many hidden units are available. In this sense, multilayer feedforward networks are a class of universal approximators.", "", "We investigate the representational power of sum-product networks (computation networks analogous to neural networks, but whose individual units compute either products or weighted sums), through a theoretical analysis that compares deep (multiple hidden layers) vs. shallow (one hidden layer) architectures. We prove there exist families of functions that can be represented much more efficiently with a deep network than with a shallow one, i.e. with substantially fewer hidden units. Such results were not available until now, and contribute to motivate recent research involving learning of deep sum-product networks, and more generally motivate research in Deep Learning.", "In this paper we demonstrate that finite linear combinations of compositions of a fixed, univariate function and a set of affine functionals can uniformly approximate any continuous function ofn real variables with support in the unit hypercube; only mild conditions are imposed on the univariate function. Our results settle an open question about representability in the class of single hidden layer neural networks. In particular, we show that arbitrary decision regions can be arbitrarily well approximated by continuous feedforward neural networks with only a single internal, hidden layer and any continuous sigmoidal nonlinearity. The paper discusses approximation properties of other possible types of nonlinearities that might be implemented by artificial neural networks.", "Sum Product Networks (SPNs) are a recently developed class of deep generative models which compute their associated unnormalized density functions using a special type of arithmetic circuit. When certain sufficient conditions, called the decomposability and completeness conditions (or \"D&C\" conditions), are imposed on the structure of these circuits, marginal densities and other useful quantities, which are typically intractable for other deep generative models, can be computed by what amounts to a single evaluation of the network (which is a property known as \"validity\"). However, the effect that the D&C conditions have on the capabilities of D&C SPNs is not well understood. In this work we analyze the D&C conditions, expose the various connections that D&C SPNs have with multilinear arithmetic circuits, and consider the question of how well they can capture various distributions as a function of their size and depth. Among our various contributions is a result which establishes the existence of a relatively simple distribution with fully tractable marginal densities which cannot be efficiently captured by D&C SPNs of any depth, but which can be efficiently captured by various other deep generative models. We also show that with each additional layer of depth permitted, the set of distributions which can be efficiently captured by D&C SPNs grows in size. This kind of \"depth hierarchy\" property has been widely conjectured to hold for various deep models, but has never been proven for any of them. Some of our other contributions include a new characterization of the D&C conditions as sufficient and necessary ones for a slightly strengthened notion of validity, and various state-machine characterizations of the types of computations that can be performed efficiently by D&C SPNs.", "Convolutional rectifier networks, i.e. convolutional neural networks with rectified linear activation and max or average pooling, are the cornerstone of modern deep learning. However, despite their wide use and success, our theoretical understanding of the expressive properties that drive these networks is partial at best. On the other hand, we have a much firmer grasp of these issues in the world of arithmetic circuits. Specifically, it is known that convolutional arithmetic circuits possess the property of \"complete depth efficiency\", meaning that besides a negligible set, all functions realizable by a deep network of polynomial size, require exponential size in order to be realized (or approximated) by a shallow network. In this paper we describe a construction based on generalized tensor decompositions, that transforms convolutional arithmetic circuits into convolutional rectifier networks. We then use mathematical tools available from the world of arithmetic circuits to prove new results. First, we show that convolutional rectifier networks are universal with max pooling but not with average pooling. Second, and more importantly, we show that depth efficiency is weaker with convolutional rectifier networks than it is with convolutional arithmetic circuits. This leads us to believe that developing effective methods for training convolutional arithmetic circuits, thereby fulfilling their expressive potential, may give rise to a deep learning architecture that is provably superior to convolutional rectifier networks but has so far been overlooked by practitioners." ] }
1905.12207
2947576584
We study deep neural networks with polynomial activations, particularly their expressive power. For a fixed architecture and activation degree, a polynomial neural network defines an algebraic map from weights to polynomials. The image of this map is the functional space associated to the network, and it is an irreducible algebraic variety upon taking closure. This paper proposes the dimension of this variety as a precise measure of the expressive power of polynomial neural networks. We obtain several theoretical results regarding this dimension as a function of architecture, including an exact formula for high activation degrees, as well as upper and lower bounds on layer widths in order for deep polynomial networks to fill the ambient functional space. We also present computational evidence that it is profitable in terms of expressiveness for layer widths to increase monotonically and then decrease monotonically. Finally, we link our study to favorable optimization properties when training weights, and we draw intriguing connections with tensor and polynomial decompositions.
Most of the aforementioned studies make strong hypotheses on the network architecture. In particular, @cite_27 @cite_10 focus on arithmetic circuits, or sum-product networks @cite_5 . These are networks composed of units that compute either the product or a weighted sum of their inputs. In @cite_9 , the authors introduce a model of convolutional arithmetic circuits. This is a particular class of arithmetic circuits that includes networks with layers of 1D convolutions and product pooling. This model does not allow for non-linear activations (besides the product pooling), although the follow-up paper @cite_25 extends some results to ReLU activations with sum pooling. Interestingly, these networks are related to the Hierarchical Tucker (HT) decomposition of tensors.
{ "cite_N": [ "@cite_9", "@cite_27", "@cite_5", "@cite_10", "@cite_25" ], "mid": [ "2962845550", "2141473882", "2949869425", "1544557312", "2964290344" ], "abstract": [ "", "We investigate the representational power of sum-product networks (computation networks analogous to neural networks, but whose individual units compute either products or weighted sums), through a theoretical analysis that compares deep (multiple hidden layers) vs. shallow (one hidden layer) architectures. We prove there exist families of functions that can be represented much more efficiently with a deep network than with a shallow one, i.e. with substantially fewer hidden units. Such results were not available until now, and contribute to motivate recent research involving learning of deep sum-product networks, and more generally motivate research in Deep Learning.", "The key limiting factor in graphical model inference and learning is the complexity of the partition function. We thus ask the question: what are general conditions under which the partition function is tractable? The answer leads to a new kind of deep architecture, which we call sum-product networks (SPNs). SPNs are directed acyclic graphs with variables as leaves, sums and products as internal nodes, and weighted edges. We show that if an SPN is complete and consistent it represents the partition function and all marginals of some graphical model, and give semantics to its nodes. Essentially all tractable graphical models can be cast as SPNs, but SPNs are also strictly more general. We then propose learning algorithms for SPNs, based on backpropagation and EM. Experiments show that inference and learning with SPNs can be both faster and more accurate than with standard deep networks. For example, SPNs perform image completion better than state-of-the-art deep networks for this task. SPNs also have intriguing potential connections to the architecture of the cortex.", "Sum Product Networks (SPNs) are a recently developed class of deep generative models which compute their associated unnormalized density functions using a special type of arithmetic circuit. When certain sufficient conditions, called the decomposability and completeness conditions (or \"D&C\" conditions), are imposed on the structure of these circuits, marginal densities and other useful quantities, which are typically intractable for other deep generative models, can be computed by what amounts to a single evaluation of the network (which is a property known as \"validity\"). However, the effect that the D&C conditions have on the capabilities of D&C SPNs is not well understood. In this work we analyze the D&C conditions, expose the various connections that D&C SPNs have with multilinear arithmetic circuits, and consider the question of how well they can capture various distributions as a function of their size and depth. Among our various contributions is a result which establishes the existence of a relatively simple distribution with fully tractable marginal densities which cannot be efficiently captured by D&C SPNs of any depth, but which can be efficiently captured by various other deep generative models. We also show that with each additional layer of depth permitted, the set of distributions which can be efficiently captured by D&C SPNs grows in size. This kind of \"depth hierarchy\" property has been widely conjectured to hold for various deep models, but has never been proven for any of them. Some of our other contributions include a new characterization of the D&C conditions as sufficient and necessary ones for a slightly strengthened notion of validity, and various state-machine characterizations of the types of computations that can be performed efficiently by D&C SPNs.", "Convolutional rectifier networks, i.e. convolutional neural networks with rectified linear activation and max or average pooling, are the cornerstone of modern deep learning. However, despite their wide use and success, our theoretical understanding of the expressive properties that drive these networks is partial at best. On the other hand, we have a much firmer grasp of these issues in the world of arithmetic circuits. Specifically, it is known that convolutional arithmetic circuits possess the property of \"complete depth efficiency\", meaning that besides a negligible set, all functions realizable by a deep network of polynomial size, require exponential size in order to be realized (or approximated) by a shallow network. In this paper we describe a construction based on generalized tensor decompositions, that transforms convolutional arithmetic circuits into convolutional rectifier networks. We then use mathematical tools available from the world of arithmetic circuits to prove new results. First, we show that convolutional rectifier networks are universal with max pooling but not with average pooling. Second, and more importantly, we show that depth efficiency is weaker with convolutional rectifier networks than it is with convolutional arithmetic circuits. This leads us to believe that developing effective methods for training convolutional arithmetic circuits, thereby fulfilling their expressive potential, may give rise to a deep learning architecture that is provably superior to convolutional rectifier networks but has so far been overlooked by practitioners." ] }
1905.12207
2947576584
We study deep neural networks with polynomial activations, particularly their expressive power. For a fixed architecture and activation degree, a polynomial neural network defines an algebraic map from weights to polynomials. The image of this map is the functional space associated to the network, and it is an irreducible algebraic variety upon taking closure. This paper proposes the dimension of this variety as a precise measure of the expressive power of polynomial neural networks. We obtain several theoretical results regarding this dimension as a function of architecture, including an exact formula for high activation degrees, as well as upper and lower bounds on layer widths in order for deep polynomials networks to fill the ambient functional space. We also present computational evidence that it is profitable in terms of expressiveness for layer widths to increase monotonically and then decrease monotonically. Finally, we link our study to favorable optimization properties when training weights, and we draw intriguing connections with tensor and polynomial decompositions.
The polynomial networks studied in this paper are not arithmetic circuits, but feedforward deep networks with polynomial @math -th power activations. This is a vast generalization of a setting considered in several recent papers @cite_7 @cite_23 @cite_12 , which study shallow (two-layer) networks with quadratic activations ( @math ). These papers show that if the width of the intermediate layer is at least twice the input dimension, then the quadratic loss has no ``bad'' local minima. This result is in line with our Proposition , which explains that in this case the functional space is convex and coincides with the ambient space. We also point out that polynomial activations are required for the functional space of the network to span a finite dimensional vector space @cite_26 @cite_7 .
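The quadratic-activation setting can be made concrete with a short numerical sketch (the names `W`, `v` and the dimensions are illustrative, not from the cited papers): a two-layer network with squared activations computes a quadratic form, so its functional space lives in the finite-dimensional vector space of symmetric matrices.

```python
import numpy as np

# Shallow network with quadratic activations:
#   f(x) = sum_i v_i * (w_i . x)^2
rng = np.random.default_rng(0)
d, k = 3, 6                       # input dimension, hidden width (k >= 2d)

W = rng.standard_normal((k, d))   # first-layer weights
v = rng.standard_normal(k)        # second-layer weights

def net(x):
    return v @ (W @ x) ** 2

# f is the homogeneous quadratic x^T A x with A = W^T diag(v) W, so the
# functional space is (a subset of) the symmetric d x d matrices.
A = W.T @ (v[:, None] * W)

x = rng.standard_normal(d)
assert np.isclose(net(x), x @ A @ x)
```

When the hidden width is large enough, every symmetric matrix `A` is reachable this way, which is the sense in which the functional space fills the ambient space.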
{ "cite_N": [ "@cite_26", "@cite_23", "@cite_12", "@cite_7" ], "mid": [ "2043005456", "2962930448", "2963417959", "2788997738" ], "abstract": [ "Several researchers characterized the activation function under which multilayer feedforward networks can act as universal approximators. We show that most of all the characterizations that were reported thus far in the literature are special cases of the following general result: A standard multilayer feedforward network with a locally bounded piecewise continuous activation function can approximate any continuous function to any degree of accuracy if and only if the network's activation function is not a polynomial. We also emphasize the important role of the threshold, asserting that without it the last theorem does not hold.", "", "In this paper, we study the problem of learning a shallow artificial neural network that best fits a training data set. We study this problem in the over-parameterized regime where the numbers of observations are fewer than the number of parameters in the model. We show that with the quadratic activations, the optimization landscape of training, such shallow neural networks, has certain favorable characteristics that allow globally optimal models to be found efficiently using a variety of local search heuristics. This result holds for an arbitrary training data of input output pairs. For differentiable activation functions, we also show that gradient descent, when suitably initialized, converges at a linear rate to a globally optimal model. This result focuses on a realizable model where the inputs are chosen i.i.d. from a Gaussian distribution and the labels are generated according to planted weight coefficients.", "Neural networks provide a rich class of high-dimensional, non-convex optimization problems. Despite their non-convexity, gradient-descent methods often successfully optimize these models. This has motivated a recent spur in research attempting to characterize properties of their loss surface that may be responsible for such success. In particular, several authors have noted that appears to act as a remedy against non-convexity. In this paper, we address this phenomenon by studying key topological properties of the loss, such as the presence or absence of \"spurious valleys\", defined as connected components of sub-level sets that do not include a global minimum. Focusing on a class of two-layer neural networks defined by smooth (but generally non-linear) activation functions, our main contribution is to prove that as soon as the hidden layer size matches the dimension of the reproducing space, defined as the linear functional space generated by the activations, no spurious valleys exist, thus allowing the existence of descent directions. Our setup includes smooth activations such as polynomials, both in the empirical and population risk, and generic activations in the empirical risk case." ] }
1905.12470
2947092077
Adaptive learning, also known as adaptive teaching, relies on learning path recommendation, which sequentially recommends personalized learning items (e.g., lectures, exercises) to satisfy the unique needs of each learner. Although it is well known that modeling the cognitive structure including knowledge level of learners and knowledge structure (e.g., the prerequisite relations) of learning items is important for learning path recommendation, existing methods for adaptive learning often separately focus on either knowledge levels of learners or knowledge structure of learning items. To fully exploit the multifaceted cognitive structure for learning path recommendation, we propose a Cognitive Structure Enhanced framework for Adaptive Learning, named CSEAL. By viewing path recommendation as a Markov Decision Process and applying an actor-critic algorithm, CSEAL can sequentially identify the right learning items to different learners. Specifically, we first utilize a recurrent neural network to trace the evolving knowledge levels of learners at each learning step. Then, we design a navigation algorithm on the knowledge structure to ensure the logicality of learning paths, which reduces the search space in the decision process. Finally, the actor-critic algorithm is used to determine what to learn next and whose parameters are dynamically updated along the learning path. Extensive experiments on real-world data demonstrate the effectiveness and robustness of CSEAL.
The simplest way to generate learning paths is to apply methods designed for the sequence recommendation problem, e.g., collaborative filtering methods (e.g., KNN @cite_6 , MPR @cite_7 ) and deep learning methods (e.g., GRU4Rec @cite_29 ). For example, @cite_36 introduced a Recurrent Neural Network (RNN) to predict the expectation of the whole path for learner groups. Some studies proposed to enhance the recommendation strategy by explicitly using cognitive structure. One branch models the evolution of knowledge level: @cite_31 and @cite_19 used the transition matrix in an MDP to model the evolution of knowledge level and used reinforcement learning algorithms to evaluate the impact of learning items on it. Meanwhile, works in the other branch focused on employing the knowledge structure to make recommendations; for example, @cite_30 defined several expert-crafted path generation rules on the knowledge structure, and @cite_18 proposed a method using semantic inference on an ontology to generate learning paths. Previous methods consider either the knowledge level or the knowledge structure, but not their combination. To the best of our knowledge, few existing works have well established the cognitive structure for learning path recommendation.
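As a minimal illustration of the collaborative filtering (item-based KNN) baseline mentioned above (the toy interaction matrix and function names are hypothetical, not the cited systems), unseen items can be scored by their similarity to the items a learner has already studied:

```python
import numpy as np

# Toy learner-item interaction matrix: rows are learners,
# columns are learning items (1 = interacted).
interactions = np.array([
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 1, 1],
], dtype=float)

# Cosine similarity between item columns.
norms = np.linalg.norm(interactions, axis=0)
sim = (interactions.T @ interactions) / np.outer(norms, norms)

def recommend(learner):
    seen = interactions[learner] > 0
    scores = sim[:, seen].sum(axis=1)   # aggregate similarity to seen items
    scores[seen] = -np.inf              # do not re-recommend seen items
    return int(np.argmax(scores))
```

For example, `recommend(0)` suggests item 2, the unseen item most similar to the items learner 0 has already interacted with. Note that such a recommender is purely correlational; it captures neither the learner's knowledge level nor the prerequisite structure.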
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_7", "@cite_36", "@cite_29", "@cite_6", "@cite_19", "@cite_31" ], "mid": [ "2775901116", "1576030471", "2896141483", "2791327891", "2964316331", "2122111042", "2891447705", "2602925161" ], "abstract": [ "Abstract It is difficult for e-learners to make decisions on how to learn when they are facing with a large amount of learning resources, especially when they have to balance available limited learning time and multiple learning objectives in various learning scenarios. This research presented in this paper addresses this challenge by proposing a new multi-constraint learning path recommendation algorithm based on knowledge map. The main contributions of the paper are as follows. Firstly, two hypotheses on e-learners’ different learning path preferences for four different learning scenarios (initial learning, usual review, pre-exam learning and pre-exam review) are verified through questionnaire-based statistical analysis. Secondly, according to learning behavior characteristics of four types of the learning scenarios, a multi-constraint learning path recommendation model is proposed, in which the variables and their weighted coefficients considers different learning path preferences of the learners in different learning scenarios as well as learning resource organization and fragmented time. Thirdly, based on the proposed model and knowledge map, the design and implementation of a multi-constraint learning path recommendation algorithm is described. Finally, it is shown that the questionnaire results from over 110 e-learners verify the effectiveness of the proposed algorithm and show the similarity between the learners’ self-organized learning paths and the recommended learning paths.", "Nowadays, e-learning systems are widely used for education and training in universities and companies because of their electronic course content access and virtual classroom participation. However, with the rapid increase of learning content on the Web, it will be time-consuming for learners to find contents they really want to and need to study. Aiming at enhancing the efficiency and effectiveness of learning, we propose an ontology-based approach for semantic content recommendation towards context-aware e-learning. The recommender takes knowledge about the learner (user context), knowledge about content, and knowledge about the domain being learned into consideration. Ontology is utilized to model and represent such kinds of knowledge. The recommendation consists of four steps: semantic relevance calculation, recommendation refining, learning path generation, and recommendation augmentation. As a result, a personalized, complete, and augmented learning program is suggested for the learner.", "As users implicitly express their preferences to items on many real-world applications, the implicit feedback based collaborative filtering has attracted much attention in recent years. Pairwise methods have shown state-of-the-art solutions for dealing with the implicit feedback, with the assumption that users prefer the observed items to the unobserved items. However, for each user, the huge unobserved items are not equal to represent her preference. In this paper, we propose a Multiple Pairwise Ranking (MPR) approach, which relaxes the simple pairwise preference assumption in previous works by further tapping the connections among items with multiple pairwise ranking criteria. Specifically, we exploit the preference difference among multiple pairs of items by dividing the unobserved items into different parts. Empirical studies show that our algorithms outperform the state-of-the-art methods on real-world datasets.", "Abstract Discovering useful hidden patterns from learner data for online learning systems is valuable in education technology. Studies on personalized learning full-path recommendation are particularly important for the development of advanced E-learning systems. In this paper, we present a novel model of full-path learning recommendation. This model relies on clustering and machine learning techniques. Based on a feature similarity metric on learners, we first cluster a collection of learners and train a long short-term memory (LSTM) model in order to predict their learning paths and performance. Personalized learning full-paths are then selected from the results of path prediction. Finally, a suitable learning full-path is recommended specifically to a test learner. In this study, a series of experiments have been carried out against learning resource datasets. By comparisons, experimental results indicate that our proposed methods are able to make sound recommendations on appropriate learning paths with significantly improved learning results in terms of accuracy and efficiency.", "Abstract: We apply recurrent neural networks (RNN) on a new domain, namely recommender systems. Real-life recommender systems often face the problem of having to base recommendations only on short session-based data (e.g. a small sportsware website) instead of long user histories (as in the case of Netflix). In this situation the frequently praised matrix factorization approaches are not accurate. This problem is usually overcome in practice by resorting to item-to-item recommendations, i.e. recommending similar items. We argue that by modeling the whole session, more accurate recommendations can be provided. We therefore propose an RNN-based approach for session-based recommendations. Our approach also considers practical aspects of the task and introduces several modifications to classic RNNs such as a ranking loss function that make it more viable for this specific problem. Experimental results on two data-sets show marked improvements over widely used approaches.", "The nearest neighbor decision rule assigns to an unclassified sample point the classification of the nearest of a set of previously classified points. This rule is independent of the underlying joint distribution on the sample points and their classifications, and hence the probability of error R of such a rule must be at least as great as the Bayes probability of error R^ --the minimum probability of error over all decision rules taking underlying probability structure into account. However, in a large sample analysis, we will show in the M -category case that R^ R R^ (2 --MR^ (M-1)) , where these bounds are the tightest possible, for all suitably smooth underlying distributions. Thus for any number of categories, the probability of error of the nearest neighbor rule is bounded above by twice the Bayes probability of error. In this sense, it may be said that half the classification information in an infinite sample set is contained in the nearest neighbor.", "", "An adaptive learning system aims at providing instruction tailored to the current status of a learner, differing from the traditional classroom experience. The latest advances in technology make adaptive learning possible, which has the potential to provide students with high-quality learning benefit at a low cost. A key component of an adaptive learning system is a recommendation system, which recommends the next material (video lectures, practices, and so on, on different skills) to the learner, based on the psychometric assessment results and possibly other individual characteristics. An important question then follows: How should recommendations be made? To answer this question, a mathematical framework is proposed that characterizes the recommendation process as a Markov decision problem, for which decisions are made based on the current knowledge of the learner and that of the learning materials. In particular, two plain vanilla systems are introduced, for which the optimal recommendation at each st..." ] }
1905.12470
2947092077
Adaptive learning, also known as adaptive teaching, relies on learning path recommendation, which sequentially recommends personalized learning items (e.g., lectures, exercises) to satisfy the unique needs of each learner. Although it is well known that modeling the cognitive structure including knowledge level of learners and knowledge structure (e.g., the prerequisite relations) of learning items is important for learning path recommendation, existing methods for adaptive learning often separately focus on either knowledge levels of learners or knowledge structure of learning items. To fully exploit the multifaceted cognitive structure for learning path recommendation, we propose a Cognitive Structure Enhanced framework for Adaptive Learning, named CSEAL. By viewing path recommendation as a Markov Decision Process and applying an actor-critic algorithm, CSEAL can sequentially identify the right learning items to different learners. Specifically, we first utilize a recurrent neural network to trace the evolving knowledge levels of learners at each learning step. Then, we design a navigation algorithm on the knowledge structure to ensure the logicality of learning paths, which reduces the search space in the decision process. Finally, the actor-critic algorithm is used to determine what to learn next and whose parameters are dynamically updated along the learning path. Extensive experiments on real-world data demonstrate the effectiveness and robustness of CSEAL.
Deep reinforcement learning, as one of the state-of-the-art techniques @cite_10 , has shown superior abilities in many fields @cite_16 . The main idea is to learn and refine model parameters according to task-specific reward signals. For example, @cite_40 introduced reinforcement learning to train an efficient dialogue agent on existing transcripts from clinical trials, which improves mild cognitive impairment prediction; @cite_23 utilized the actor-critic algorithm for treatment recommendation, helping to handle complex relations among multiple medications, diseases and individual characteristics. However, due to three key challenges, traditional reinforcement learning is difficult to apply to learning path recommendation: (1) how to represent the state; (2) how to prevent recommendations from violating the sequential logicality during exploration; (3) how to reduce the large search space of learning item paths.
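The actor-critic idea referenced above can be sketched on a toy two-action bandit (everything here — the reward values, learning rates, and update rule — is an illustrative assumption, not the cited systems): the actor holds policy logits, the critic holds a value baseline, and both are refined from the reward signal.

```python
import numpy as np

rng = np.random.default_rng(0)
logits = np.zeros(2)                  # actor parameters (policy logits)
value = 0.0                           # critic: scalar value baseline
true_reward = np.array([0.2, 0.8])    # hypothetical rewards; action 1 is better

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for _ in range(2000):
    probs = softmax(logits)
    a = rng.choice(2, p=probs)                      # sample an action
    r = true_reward[a] + 0.1 * rng.standard_normal()  # noisy reward
    advantage = r - value                           # critic-estimated advantage
    grad = -probs                                   # grad of log pi(a) wrt logits
    grad[a] += 1.0
    logits += 0.1 * advantage * grad                # actor: policy-gradient step
    value += 0.05 * (r - value)                     # critic: track mean reward
```

After training, the policy concentrates on the higher-reward action, illustrating how reward signals alone shape both the policy ("actor") and its baseline ("critic").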
{ "cite_N": [ "@cite_40", "@cite_16", "@cite_10", "@cite_23" ], "mid": [ "2788965295", "2809263292", "2745868649", "2963561234" ], "abstract": [ "Mild cognitive impairment (MCI) is a prodromal phase in the progression from normal aging to dementia, especially Alzheimers disease. Even though there is mild cognitive decline in MCI patients, they have normal overall cognition and thus is challenging to distinguish from normal aging. Using transcribed data obtained from recorded conversational interactions between participants and trained interviewers, and applying supervised learning models to these data, a recent clinical trial has shown a promising result in differentiating MCI from normal aging. However, the substantial amount of interactions with medical staff can still incur significant medical care expenses in practice. In this paper, we propose a novel reinforcement learning (RL) framework to train an efficient dialogue agent on existing transcripts from clinical trials. Specifically, the agent is trained to sketch disease-specific lexical probability distribution, and thus to converse in a way that maximizes the diagnosis accuracy and minimizes the number of conversation turns. We evaluate the performance of the proposed reinforcement learning framework on the MCI diagnosis from a real clinical trial. The results show that while using only a few turns of conversation, our framework can significantly outperform state-of-the-art supervised learning approaches.", "Transcribing content from structural images, e.g., writing notes from music scores, is a challenging task as not only the content objects should be recognized, but the internal structure should also be preserved. Existing image recognition methods mainly work on images with simple content (e.g., text lines with characters), but are not capable to identify ones with more complex content (e.g., structured code), which often follow a fine-grained grammar. To this end, in this paper, we propose a hierarchical Spotlight Transcribing Network (STN) framework followed by a two-stage \"where-to-what'' solution. Specifically, we first decide \"where-to-look'' through a novel spotlight mechanism to focus on different areas of the original image following its structure. Then, we decide \"what-to-write'' by developing a GRU based network with the spotlight areas for transcribing the content accordingly. Moreover, we propose two implementations on the basis of STN, i.e., STNM and STNR, where the spotlight movement follows the Markov property and Recurrent modeling, respectively. We also design a reinforcement method to refine our STN framework by self-improving the spotlight mechanism. We conduct extensive experiments on many structural image datasets, where the results clearly demonstrate the effectiveness of STN framework.", "Deep reinforcement learning (DRL) is poised to revolutionize the field of artificial intelligence (AI) and represents a step toward building autonomous systems with a higherlevel understanding of the visual world. Currently, deep learning is enabling reinforcement learning (RL) to scale to problems that were previously intractable, such as learning to play video games directly from pixels. DRL algorithms are also applied to robotics, allowing control policies for robots to be learned directly from camera inputs in the real world. In this survey, we begin with an introduction to the general field of RL, then progress to the main streams of value-based and policy-based methods. Our survey will cover central algorithms in deep RL, including the deep Q-network (DQN), trust region policy optimization (TRPO), and asynchronous advantage actor critic. In parallel, we highlight the unique advantages of deep neural networks, focusing on visual understanding via RL. To conclude, we describe several current areas of research within the field.", "Dynamic treatment recommendation systems based on large-scale electronic health records (EHRs) become a key to successfully improve practical clinical outcomes. Prior relevant studies recommend treatments either use supervised learning (e.g. matching the indicator signal which denotes doctor prescriptions), or reinforcement learning (e.g. maximizing evaluation signal which indicates cumulative reward from survival rates). However, none of these studies have considered to combine the benefits of supervised learning and reinforcement learning. In this paper, we propose Supervised Reinforcement Learning with Recurrent Neural Network (SRL-RNN), which fuses them into a synergistic learning framework. Specifically, SRL-RNN applies an off-policy actor-critic framework to handle complex relations among multiple medications, diseases and individual characteristics. The \"actor'' in the framework is adjusted by both the indicator signal and evaluation signal to ensure effective prescription and low mortality. RNN is further utilized to solve the Partially-Observed Markov Decision Process (POMDP) problem due to lack of fully observed states in real world applications. Experiments on the publicly real-world dataset, i.e., MIMIC-3, illustrate that our model can reduce the estimated mortality, while providing promising accuracy in matching doctors' prescriptions." ] }
1905.12345
2947238749
Temporal point process is an expressive tool for modeling event sequences over time. In this paper, we take a reinforcement learning view whereby the observed sequences are assumed to be generated from a mixture of latent policies. The purpose is to cluster the sequences with different temporal patterns into the underlying policies while learning each of the policy model. The flexibility of our model lies in: i) all the components are networks including the policy network for modeling the intensity function of temporal point process; ii) to handle varying-length event sequences, we resort to inverse reinforcement learning by decomposing the observed sequence into states (RNN hidden embedding of history) and actions (time interval to next event) in order to learn the reward function, thus achieving better performance or increasing efficiency compared to existing methods using rewards over the entire sequence such as log-likelihood or Wasserstein distance. We adopt an expectation-maximization framework with the E-step estimating the cluster labels for each sequence, and the M-step aiming to learn the respective policy. Extensive experiments show the efficacy of our method against state-of-the-arts.
Traditional TPP models are mostly developed around the design of the intensity function @math , which measures the instantaneous event occurrence rate at time @math , such as reinforced Poisson processes @cite_13 , self-exciting (Hawkes) processes @cite_8 , reactive point processes @cite_16 , etc. An obvious limitation of these traditional models is that they assume all samples obey a single parametric form, which is too idealistic for real-world data. This also suggests the need for learning clustered behaviors beyond single-model based methods.
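As a concrete instance of such a parametric intensity, a univariate Hawkes process with an exponential kernel can be sketched as follows (the parameter values `mu`, `alpha`, `beta` and the event times are illustrative assumptions): the intensity is the base rate plus a decaying excitation contributed by every past event.

```python
import numpy as np

# Hawkes (self-exciting) intensity with exponential kernel:
#   lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta * (t - t_i))
def hawkes_intensity(t, history, mu=0.5, alpha=0.8, beta=1.0):
    past = np.asarray([ti for ti in history if ti < t])
    return mu + alpha * np.exp(-beta * (t - past)).sum()

events = [1.0, 2.5, 2.7]
# Each past event temporarily raises the rate above the base level mu.
rate = hawkes_intensity(3.0, events)
```

Here `rate` exceeds the base rate 0.5 because of the excitation from the three past events; with no history the intensity reduces to `mu`, i.e. a homogeneous Poisson process.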
{ "cite_N": [ "@cite_16", "@cite_13", "@cite_8" ], "mid": [ "2090320383", "2145037371", "138372711" ], "abstract": [ "Massachusetts Institute of Technology and the University of Washington Reactive point processes (RPPs) are a new statistical model designed for predicting discrete events in time, based on past history. RPPs were developed to handle an important problem within the domain of electrical grid reliability: short term prediction of electrical grid failures (“manhole events”), including outages, fires, explosions, and smoking manholes, which can cause threats to public safety and reliability of electrical service in cities. RPPs incorporate self-exciting, self-regulating, and saturating components. The self-excitement occurs as a result of a past event, which causes a temporary rise in vulnerability to future events. The self-regulation occurs as a result of an external inspection which temporarily lowers vulnerability to future events. RPPs can saturate when too many events or inspections occur close together, which ensures that the probability of an event stays within a realistic range. Two of the operational challenges for power companies are i) making continuous-time failure predictions, and ii) cost benefit analysis for decision making and proactive maintenance. RPPs are naturally suited for handling both of these challenges. We use the model to predict power-grid failures in Manhattan over a short term horizon, and use to provide a cost benefit analysis of different proactive maintenance programs.", "The models surveyed include generalized Polya urns, reinforced random walks, interacting urn models, and continuous reinforced processes. Emphasis is on methods and results, with sketches provided of some proofs. Applications are discussed in statistics, biology, economics and a number of other areas.", "IN contagious processes (e.g. measles, hijacking, etc.) the occurrence of events increases the probability of further events occurring in the near future. Also several series of events may interact with each other, for example one might consider notifications of some disease in a number of adjacent regions which would interact through infectives or carriers moving between the regions. In this paper we postulate a model for such processes and derive a general expression for the point spectral matrices. These theoretical spectra are useful for comparison with spectra estimated from data and thus provide a means of evaluating the fit of such a model in the manner of Bartlett (1963). The model studied was put forward in an earlier paper (Hawkes, 1971) but the solution was obtained only in special cases. In this paper an elegant solution is obtained for the. general case. Consider a stationary k-variate point process N(t), where Ni(t) represents the cumulative number of events in the ith process up to time t, with intensity vector X = of dN(t) dt and covariance density matrix" ] }
1905.12345
2947238749
Temporal point process is an expressive tool for modeling event sequences over time. In this paper, we take a reinforcement learning view whereby the observed sequences are assumed to be generated from a mixture of latent policies. The purpose is to cluster the sequences with different temporal patterns into the underlying policies while learning each of the policy model. The flexibility of our model lies in: i) all the components are networks including the policy network for modeling the intensity function of temporal point process; ii) to handle varying-length event sequences, we resort to inverse reinforcement learning by decomposing the observed sequence into states (RNN hidden embedding of history) and actions (time interval to next event) in order to learn the reward function, thus achieving better performance or increasing efficiency compared to existing methods using rewards over the entire sequence such as log-likelihood or Wasserstein distance. We adopt an expectation-maximization framework with the E-step estimating the cluster labels for each sequence, and the M-step aiming to learn the respective policy. Extensive experiments show the efficacy of our method against state-of-the-arts.
There are alternative objectives for TPP learning for both parametric and neural models. Traditional methods mostly follow the maximum likelihood estimation (MLE) procedure under the probabilistic framework @cite_17 . However, the MLE objective is not the only choice, since we are often given only a limited number of sequences which may further contain arbitrary noise. Recent efforts have been made to devise adversarial learning based objectives inspired by generative adversarial networks (GANs) @cite_11 and especially the Wasserstein GAN @cite_27 . In @cite_12 , an adversarial objective is developed in addition to the MLE loss by approximating the continuous-domain predictions with a discrete time series. In @cite_32 , a Wasserstein distance over temporal event sequences is explicitly defined to learn a deep generative point process model for temporal event generation. Another line of work considers the challenge of learning high-dimensional TPP models, whereby the so-called infectivity matrix to be learned is of size quadratic in the dimensionality. One popular technique is imposing a low-rank regularizer @cite_31 or a factorization model @cite_29 on the infectivity matrix. However, these methods do not explicitly deal with the sequence clustering problem; in fact, the observed dimension marker does not correspond to the underlying cluster.
{ "cite_N": [ "@cite_29", "@cite_32", "@cite_17", "@cite_27", "@cite_31", "@cite_12", "@cite_11" ], "mid": [ "2962718388", "2963148415", "1999735946", "2739748921", "2101645017", "2807751666", "2099471712" ], "abstract": [ "This paper presents a factorial marked temporal point process model and presents efficient learning methods. In conventional (multi-dimensional) marked temporal point process models, an event is often encoded by a single discrete variable (marker). We describe the factorial marked point processes whereby time-stamped event is factored into multiple markers. Accordingly the size of the infectivity matrix modeling the effect between pairwise markers is in exponential order regarding the number of discrete markers. We propose a decoupled learning method with two learning procedures: i) directly solving the model based on two techniques: Alternating Direction Method of Multipliers and Fast Iterative Shrinkage-Thresholding Algorithm; ii) involving a reformulation that transforms the original problem into a Logistic Regression model for more efficient learning. Moreover, a sparse group regularizer is added to identify the key profile features and event labels. Empirical results on real world datasets demonstrate the efficiency of our decoupled and reformulated method.", "Point processes are becoming very popular in modeling asynchronous sequential data due to their sound mathematical foundation and strength in modeling a variety of real-world phenomena. Currently, they are often characterized via intensity function which limits model's expressiveness due to unrealistic assumptions on its parametric form used in practice. Furthermore, they are learned via maximum likelihood approach which is prone to failure in multi-modal distributions of sequences. In this paper, we propose an intensity-free approach for point processes modeling that transforms nuisance processes to a target one. 
Furthermore, we train the model using a likelihood-free leveraging Wasserstein distance between point processes. Experiments on various synthetic and real-world data substantiate the superiority of the proposed point process model over conventional ones.", "A maximum likelihood estimation procedure of Hawkes' self-exciting point process model is proposed with explicit presentations of the log-likelihood of the model and its gradient and Hessian. A simulation method of the process is also presented. Some numerical results are given.", "", "How will the behaviors of individuals in a social network be influenced by their neighbors, the authorities and the communities in a quantitative way? Such critical and valuable knowledge is unfortunately not readily accessible and we tend to only observe its manifestation in the form of recurrent and time-stamped events occurring at the individuals involved in the social network. It is an important yet challenging problem to infer the underlying network of social inference based on the temporal patterns of those historical events that we can observe. In this paper, we propose a convex optimization approach to discover the hidden network of social influence by modeling the recurrent events at different individuals as multidimensional Hawkes processes, emphasizing the mutual-excitation nature of the dynamics of event occurrence. Furthermore, our estimation procedure, using nuclear and l1 norm regularization simultaneously on the parameters, is able to take into account the prior knowledge of the presence of neighbor interaction, authority influence, and community coordination in the social network. To efficiently solve the resulting optimization problem, we also design an algorithm ADM4 which combines techniques of alternating direction method of multipliers and majorization minimization. 
We experimented with both synthetic and real world data sets, and showed that the proposed method can discover the hidden network more accurately and produce a better predictive model than several baselines.", "", "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples." ] }
1905.12345
2947238749
Temporal point process is an expressive tool for modeling event sequences over time. In this paper, we take a reinforcement learning view whereby the observed sequences are assumed to be generated from a mixture of latent policies. The purpose is to cluster the sequences with different temporal patterns into the underlying policies while learning each policy model. The flexibility of our model lies in: i) all the components are networks, including the policy network for modeling the intensity function of the temporal point process; ii) to handle varying-length event sequences, we resort to inverse reinforcement learning by decomposing the observed sequence into states (RNN hidden embedding of history) and actions (time interval to next event) in order to learn the reward function, thus achieving better performance or increasing efficiency compared to existing methods using rewards over the entire sequence such as log-likelihood or Wasserstein distance. We adopt an expectation-maximization framework with the E-step estimating the cluster labels for each sequence, and the M-step aiming to learn the respective policy. Extensive experiments show the efficacy of our method against state-of-the-art methods.
The work most closely related to ours appears in @cite_19 , as it deals with a similar problem setting: grouping event sequences into different clusters and learning the respective TPP model parameters for each cluster. However, the technical approaches are completely different. First, the parametric model in @cite_19 is tailored to the Hawkes process, while our network-based model is more general. Second, the work in @cite_19 operates under the Bayesian probabilistic framework, while our method is likelihood-free and incorporates both adversarial learning and inverse reinforcement learning @cite_20 for more effective objective design beyond MLE. We show in the experiments that our method significantly outperforms @cite_19 on real-world data. The source code will be made publicly available for reproducible research.
{ "cite_N": [ "@cite_19", "@cite_20" ], "mid": [ "2963857076", "2061562262" ], "abstract": [ "How to cluster event sequences generated via different point processes is an interesting and important problem in statistical machine learning. To solve this problem, we propose and discuss an effective model-based clustering method based on a novel Dirichlet mixture model of a special but significant type of point processes --- Hawkes process. The proposed model generates the event sequences with different clusters from the Hawkes processes with different parameters, and uses a Dirichlet process as the prior distribution of the clusters. We prove the identifiability of our mixture model and propose an effective variational Bayesian inference algorithm to learn our model. An adaptive inner iteration allocation strategy is designed to accelerate the convergence of our algorithm. Moreover, we investigate the sample complexity and the computational complexity of our learning algorithm in depth. Experiments on both synthetic and real-world data show that the clustering method based on our model can learn structural triggering patterns hidden in asynchronous event sequences robustly and achieve superior performance on clustering purity and consistency compared to existing methods.", "Objective—To evaluate the pharmacokinetics of a novel commercial formulation of ivermectin after administration to goats. Animals—6 healthy adult goats. Procedure—Ivermectin (200 μg kg) was initially administered IV to each goat, and plasma samples were obtained for 36 days. After a washout period of 3 weeks, each goat received a novel commercial formulation of ivermectin (200 μg kg) by SC injection. Plasma samples were then obtained for 42 days. Drug concentrations were quantified by use of high-performance liquid chromatography with fluorescence detection. 
Results—Pharmacokinetics of ivermectin after IV administration were best described by a 2-compartment open model; values for main compartmental variables included volume of distribution at a steady state (9.94 L kg), clearance (1.54 L kg d), and area under the plasma concentration-time curve (AUC; 143 [ng•d] mL). Values for the noncompartmental variables included mean residence time (7.37 days), AUC (153 [ng•d] mL), and clearance (1.43 L kg d). After ..." ] }
1905.12345
2947238749
Temporal point process is an expressive tool for modeling event sequences over time. In this paper, we take a reinforcement learning view whereby the observed sequences are assumed to be generated from a mixture of latent policies. The purpose is to cluster the sequences with different temporal patterns into the underlying policies while learning each policy model. The flexibility of our model lies in: i) all the components are networks, including the policy network for modeling the intensity function of the temporal point process; ii) to handle varying-length event sequences, we resort to inverse reinforcement learning by decomposing the observed sequence into states (RNN hidden embedding of history) and actions (time interval to next event) in order to learn the reward function, thus achieving better performance or increasing efficiency compared to existing methods using rewards over the entire sequence such as log-likelihood or Wasserstein distance. We adopt an expectation-maximization framework with the E-step estimating the cluster labels for each sequence, and the M-step aiming to learn the respective policy. Extensive experiments show the efficacy of our method against state-of-the-art methods.
As shown in Fig. , this paper takes a reinforcement learning (RL) perspective on the modeling and clustering of temporal point processes, owing to their dynamic sequential nature. Though there exist works @cite_34 @cite_1 @cite_2 on (deep) RL and intervention for TPPs, little effort ( @cite_19 involves neither a deep model nor RL) has been devoted to TPP clustering, which calls for additional careful treatment to disentangle the mixture of policies. Using the language of RL, suppose a number of event sequences are generated (with noise) by @math underlying expert policies, which are reflected in the form of @math clusters. In this sense, we formulate the event sequence clustering task as a reinforcement learning problem whereby the purpose is to discover the unknown event generation policies, while the cost function for fitting event sequences is also automatically learned from the data using IRL.
{ "cite_N": [ "@cite_19", "@cite_34", "@cite_1", "@cite_2" ], "mid": [ "2963857076", "2963043442", "2890009182", "2804072087" ], "abstract": [ "How to cluster event sequences generated via different point processes is an interesting and important problem in statistical machine learning. To solve this problem, we propose and discuss an effective model-based clustering method based on a novel Dirichlet mixture model of a special but significant type of point processes --- Hawkes process. The proposed model generates the event sequences with different clusters from the Hawkes processes with different parameters, and uses a Dirichlet process as the prior distribution of the clusters. We prove the identifiability of our mixture model and propose an effective variational Bayesian inference algorithm to learn our model. An adaptive inner iteration allocation strategy is designed to accelerate the convergence of our algorithm. Moreover, we investigate the sample complexity and the computational complexity of our learning algorithm in depth. Experiments on both synthetic and real-world data show that the clustering method based on our model can learn structural triggering patterns hidden in asynchronous event sequences robustly and achieve superior performance on clustering purity and consistency compared to existing methods.", "", "Many real world problems from sustainability, healthcare and Internet generate discrete events in continuous time. The generative processes of these data can be very complex, requiring flexible models to capture their dynamics. Temporal point processes offer an elegant framework for modeling such event data. However, sophisticated point process models typically leads to intractable likelihood functions, making model fitting difficult in practice. We address this challenge from the perspective of reinforcement learning (RL), and relate the intensity function of a point process to a stochastic policy in reinforcement learning. 
We parameterize the policy as a flexible recurrent neural network, and reward models which can mimic the observed event distribution. Since the reward function is unknown in practice, we also uncover an analytic form of the reward function using an inverse reinforcement learning formulation and functions from a reproducing kernel Hilbert space. This new RL framework allows us to derive an efficient policy gradient algorithm for learning flexible point process models, and we show that it performs well in both synthetic and real data.", "In a wide variety of applications, humans interact with a complex environment by means of asynchronous stochastic discrete events in continuous time. Can we design online interventions that will help humans achieve certain goals in such asynchronous setting? In this paper, we address the above problem from the perspective of deep reinforcement learning of marked temporal point processes, where both the actions taken by an agent and the feedback it receives from the environment are asynchronous stochastic discrete events characterized using marked temporal point processes. In doing so, we define the agent's policy using the intensity and mark distribution of the corresponding process and then derive a flexible policy gradient method, which embeds the agent's actions and the feedback it receives into real-valued vectors using deep recurrent neural networks. Our method does not make any assumptions on the functional form of the intensity and mark distribution of the feedback and it allows for arbitrarily complex reward functions. We apply our methodology to two different applications in personalized teaching and viral marketing and, using data gathered from Duolingo and Twitter, we show that it may be able to find interventions to help learners and marketers achieve their goals more effectively than alternatives." ] }
1905.12294
2947984106
In many high-dimensional estimation problems the main task consists in minimizing a cost function, which is often strongly non-convex when scanned in the space of parameters to be estimated. A standard solution to flatten the corresponding rough landscape consists in summing the losses associated to different data points and obtain a smoother empirical risk. Here we propose a complementary method that works for a single data point. The main idea is that a large amount of the roughness is uncorrelated in different parts of the landscape. One can then substantially reduce the noise by evaluating an empirical average of the gradient obtained as a sum over many random independent positions in the space of parameters to be optimized. We present an algorithm, called Replicated Gradient Descent, based on this idea and we apply it to tensor PCA, which is a very hard estimation problem. We show that Replicated Gradient Descent over-performs physical algorithms such as gradient descent and approximate message passing and matches the best algorithmic thresholds known so far, obtained by tensor unfolding and methods based on sum-of-squares.
Different procedures have been devised to regularize a rough landscape and improve optimization performance. One approach is based on the convolution of a rough energy function with a smoothing kernel @cite_20 . Another procedure is based on the introduction of different copies of the system which are coupled together @cite_6 . In both cases, the idea is to reduce the roughness by smoothing the landscape locally. Our method, instead, aims at reducing the roughness by a global average over uncorrelated copies. Among the many algorithms devised for tensor PCA, the one based on homotopy @cite_13 is the closest to RGD, although it was introduced from a very different perspective. We will comment in more detail later on the similarities between the two methods. Finally, we point out that the optimality gap between "non-local" and statistical physics methods was very recently bridged by an extension of approximate message passing based on the Kikuchi approximation @cite_18 . Our results show that this gap can also be closed by using an extension of gradient descent. In this way a full redemption @cite_18 of the landscape-dominated statistical physics approach against sophisticated "non-local" algorithms is reached.
{ "cite_N": [ "@cite_18", "@cite_13", "@cite_6", "@cite_20" ], "mid": [ "2933703783", "2548496902", "2400359904", "2074738960" ], "abstract": [ "For the tensor PCA (principal component analysis) problem, we propose a new hierarchy of algorithms that are increasingly powerful yet require increasing runtime. Our hierarchy is analogous to the sum-of-squares (SOS) hierarchy but is instead inspired by statistical physics and related algorithms such as belief propagation and AMP (approximate message passing). Our level- @math algorithm can be thought of as a (linearized) message-passing algorithm that keeps track of @math -wise dependencies among the hidden variables. Specifically, our algorithms are spectral methods based on the Kikuchi Hessian matrix, which generalizes the well-studied Bethe Hessian matrix to the higher-order Kikuchi free energies. It is known that AMP, the flagship algorithm of statistical physics, has substantially worse performance than SOS for tensor PCA. In this work we redeem' the statistical physics approach by showing that our hierarchy gives a polynomial-time algorithm matching the performance of SOS. Our hierarchy also yields a continuum of subexponential-time algorithms, and we prove that these achieve the same (conjecturally optimal) tradeoff between runtime and statistical power as SOS. Our results hold for even-order tensors, and we conjecture that they also hold for odd-order tensors. Our methods suggest a new avenue for systematically obtaining optimal algorithms for Bayesian inference problems, and our results constitute a step toward unifying the statistical physics and sum-of-squares approaches to algorithm design.", "Developing efficient and guaranteed nonconvex algorithms has been an important challenge in modern machine learning. Algorithms with good empirical performance such as stochastic gradient descent often lack theoretical guarantees. 
In this paper, we analyze the class of homotopy or continuation methods for global optimization of nonconvex functions. These methods start from an objective function that is efficient to optimize (e.g. convex), and progressively modify it to obtain the required objective, and the solutions are passed along the homotopy path. For the challenging problem of tensor PCA, we prove global convergence of the homotopy method in the \"high noise\" regime. The signal-to-noise requirement for our algorithm is tight in the sense that it matches the recovery guarantee for the best degree-4 sum-of-squares algorithm. In addition, we prove a phase transition along the homotopy path for tensor PCA. This allows to simplify the homotopy method to a local search algorithm, viz., tensor power iterations, with a specific initialization and a noise injection procedure, while retaining the theoretical guarantees.", "In artificial neural networks, learning from data is a computationally demanding task in which a large number of connection weights are iteratively tuned through stochastic-gradient-based heuristic processes over a cost-function. It is not well understood how learning occurs in these systems, in particular how they avoid getting trapped in configurations with poor computational performance. Here we study the difficult case of networks with discrete weights, where the optimization landscape is very rough even for simple architectures, and provide theoretical and numerical evidence of the existence of rare---but extremely dense and accessible---regions of configurations in the network weight space. We define a novel measure, which we call the (RE), which suppresses trapping by isolated configurations and amplifies the role of these dense regions. 
We analytically compute the RE in some exactly solvable models, and also provide a general algorithmic scheme which is straightforward to implement: define a cost-function given by a sum of a finite number of replicas of the original cost-function, with a constraint centering the replicas around a driving assignment. To illustrate this, we derive several powerful new algorithms, ranging from Markov Chains to message passing to gradient descent processes, where the algorithms target the robust dense states, resulting in substantial improvements in performance. The weak dependence on the number of precision bits of the weights leads us to conjecture that very similar reasoning applies to more conventional neural networks. Analogous algorithmic schemes can also be applied to other optimization problems.", "This paper discusses a generalization of the function transformation scheme used in Coleman, Shalloway, and Wu [Comput. Optim. Appl., 2 (1993), pp. 145--170; J. Global Optim., 4 (1994), pp. 171--185] and Shalloway [Global Optimization, C. Floudas and P. Pardalos, eds., Princeton University Press, 1992, pp. 433--477; Global Optim., 2 (1992), pp. 281--311] for global energy minimization applied to the molecular conformation problem. A mathematical theory for the method as a special continuation approach to global optimization is established. We show that the method can transform a nonlinear objective function into a class of gradually deformed, but smoother'' or easier'' functions. An optimization procedure can then be successively applied to the new functions to trace their solutions back to the original function. Two types of transformation are defined: isotropic and anisotropic. We show that both transformations can be applied to a large class of nonlinear partially separable functions, including energy functions for molecular conformation. Methods to compute the transformation for these functions are given." ] }
1905.12002
2974804679
In this paper, using stochastic geometry tools, we develop a systematic framework to characterize the meta distributions of the downlink signal-to-interference-ratio (SIR) signal-to-noise-ratio (SNR) and data rate of a typical device in an Internet of Things (IoT) cellular network with coexisting sub-6GHz and millimeter wave (mm-wave) spectrums. Macro base-stations (MBSs) transmit on sub-6GHz channels (which we term "microwave" channels), whereas small base-stations (SBSs) communicate with devices on mm-wave channels. The SBSs are connected to MBSs via a microwave ( @math wave) wireless backhaul. The @math wave channels are interference limited and mm-wave channels are noise limited; therefore, we have the meta-distribution of SIR and SNR in microwave and mm-wave channels, respectively. To model the line-of-sight (LOS) nature of mm-wave channels, we use the Nakagami-m fading model. To derive the meta-distribution of SIR SNR, we characterize the conditional success probability (CSP) (or equivalently reliability) and its @math moment for a typical device (a) when it associates to a @math wave MBS for direct transmission, and (b) when it associates to a mm-wave SBS for dual-hop transmission (backhaul and access transmission). Performance metrics such as the mean and variance of the local delay (network jitter), mean of the CSP (coverage probability), and variance of the CSP are derived. Closed-form expressions are presented for special scenarios. The extensions of the developed framework to the @math wave-only IoT network or IoT networks where SBSs have mm-wave backhauls are discussed. Numerical results validate the analytical results. Insights are extracted related to the reliability, coverage probability, and latency of the considered network.
Some recent studies have analyzed the success probability of coexisting @math wave and mm-wave cellular networks. A hybrid cellular network was considered in @cite_18 to estimate the uplink-downlink coverage and rate distribution of self-backhauled mm-wave networks. @cite_5 developed an analytical model to characterize decoupled uplink and downlink cell association strategies. The authors showed the superiority of this technique over the traditional coupled association in a network where traditional MBSs coexist with denser mm-wave SBSs. Both @cite_18 and @cite_5 modeled the fading power as Rayleigh fading for better tractability.
{ "cite_N": [ "@cite_5", "@cite_18" ], "mid": [ "2287503382", "1953553238" ], "abstract": [ "Millimeter wave (mmWave) links will offer high capacity but are poor at penetrating into or diffracting around solid objects. Thus, we consider a hybrid cellular network with traditional sub-6 GHz macrocells coexisting with denser mmWave small cells, where a mobile user can connect to either opportunistically. We develop a general analytical model to characterize and derive the uplink and downlink cell association in the view of the signal-to-interference-and-noise-ratio and rate coverage probabilities in such a mixed deployment. We offer extensive validation of these analytical results (which rely on several simplifying assumptions) with simulation results. Using the analytical results, different decoupled uplink and downlink cell association strategies are investigated and their superiority is shown compared with the traditional coupled approach. Finally, small cell biasing in mmWave is studied, and we show that unprecedented biasing values are desirable due to the wide bandwidth.", "Millimeter wave (mmWave) cellular systems will require high-gain directional antennas and dense base station (BS) deployments to overcome a high near-field path loss and poor diffraction. As a desirable side effect, high-gain antennas offer interference isolation, providing an opportunity to incorporate self-backhauling , i.e., BSs backhauling among themselves in a mesh architecture without significant loss in the throughput, to enable the requisite large BS densities. The use of directional antennas and resource sharing between access and backhaul links leads to coverage and rate trends that significantly differ from conventional UHF cellular systems. In this paper, we propose a general and tractable mmWave cellular model capturing these key trends and characterize the associated rate distribution. 
The developed model and analysis are validated using actual building locations from dense urban settings and empirically derived path loss models. The analysis shows that, in sharp contrast to the interference-limited nature of UHF cellular networks, the spectral efficiency of mmWave networks (besides the total rate) also increases with the BS density, particularly at the cell edge. Increasing the system bandwidth does not significantly influence the cell edge rate, although it boosts the median and peak rates. With self-backhauling, different combinations of the wired backhaul fraction (i.e., the fraction of BSs with a wired connection) and the BS density are shown to guarantee the same median rate (QoS)." ] }
1905.12131
2947341830
Deep kernel learning provides an elegant and principled framework for combining the structural properties of deep learning algorithms with the flexibility of kernel methods. By means of a deep neural network, it consists of learning a kernel operator which is combined with a differentiable kernel algorithm for inference. While previous work within this framework has mostly explored learning a single kernel for large datasets, we focus herein on learning a kernel family for a variety of tasks in few-shot regression settings. Compared to single deep kernel learning, our novel algorithm permits finding the appropriate kernel for each task during inference, rather than using the same for all tasks. As such, our algorithm performs more effectively with complex task distributions in few-shot learning, which we demonstrate by benchmarking against existing state-of-the-art algorithms using real-world, few-shot regression tasks related to drug discovery.
Across the spectrum of learning approaches, DKL methods lie between neural networks and kernel methods. While neural networks can learn from very large amounts of data without much prior knowledge, kernel methods learn from fewer data when given an appropriate covariance function that accounts for prior knowledge of the relevant task. The first DKL attempt combined GPs with CNNs to learn a covariance function adapted to a task from large amounts of data, though the large time and space complexity of kernel methods forced the approximation of the exact kernel using KISS-GP @cite_7 . It has since been demonstrated that such approximation is not necessary when using finite-rank kernels. Here, we also show that learning from a collection of tasks (FSR mode) does not require any approximation when the covariance function is shared across tasks. This is an important distinction between our study and other existing studies in DKL, which learn their kernel from single tasks instead of task collections.
{ "cite_N": [ "@cite_7" ], "mid": [ "1702387805" ], "abstract": [ "We introduce a new structured kernel interpolation (SKI) framework, which generalises and unifies inducing point methods for scalable Gaussian processes (GPs). SKI methods produce kernel approximations for fast computations through kernel interpolation. The SKI framework clarifies how the quality of an inducing point approach depends on the number of inducing (aka interpolation) points, interpolation strategy, and GP covariance kernel. SKI also provides a mechanism to create new scalable kernel methods, through choosing different kernel interpolation strategies. Using SKI, with local cubic kernel interpolation, we introduce KISSGP, which is 1) more scalable than inducing point alternatives, 2) naturally enables Kronecker and Toeplitz algebra for substantial additional gains in scalability, without requiring any grid data, and 3) can be used for fast and expressive kernel learning. KISS-GP costs O(n) time and storage for GP inference. We evaluate KISS-GP for kernel matrix approximation, kernel learning, and natural sound modelling." ] }
1905.12156
2947023800
Most existing super-resolution methods do not perform well in real scenarios due to lack of realistic training data and information loss of the model input. To solve the first problem, we propose a new pipeline to generate realistic training data by simulating the imaging process of digital cameras. And to remedy the information loss of the input, we develop a dual convolutional neural network to exploit the originally captured radiance information in raw images. In addition, we propose to learn a spatially-variant color transformation which helps more effective color corrections. Extensive experiments demonstrate that super-resolution with raw data helps recover fine details and clear structures, and more importantly, the proposed network and data generation pipeline achieve superior results for single image super-resolution in real scenarios.
Super-resolution. Most state-of-the-art super-resolution methods @cite_30 @cite_37 @cite_18 @cite_7 @cite_3 @cite_29 @cite_4 @cite_12 learn CNNs to reconstruct high-resolution images from low-resolution color inputs. Dong et al. @cite_7 propose a three-layer CNN for mapping low-resolution patches to high-resolution space, but find that simply using deeper networks does not yield better results @cite_31 . To solve this problem, Kim et al. @cite_37 introduce residual learning to accelerate training and achieve better results. Tong et al. @cite_18 use dense skip connections to further speed up the reconstruction process. While these methods are effective in interpolating pixels, they operate on preprocessed color images and thus have limitations in producing realistic details. By contrast, we propose to exploit both raw data and the color image in a unified framework for better super-resolution.
{ "cite_N": [ "@cite_30", "@cite_37", "@cite_18", "@cite_4", "@cite_7", "@cite_29", "@cite_3", "@cite_31", "@cite_12" ], "mid": [ "1919542679", "", "2780544323", "", "54257720", "", "2964101377", "", "2780624730" ], "abstract": [ "Deep learning techniques have been successfully applied in many areas of computer vision, including low-level image restoration problems. For image super-resolution, several models based on deep neural networks have been recently proposed and attained superior performance that overshadows all previous handcrafted models. The question then arises whether large-capacity and data-driven models have become the dominant solution to the ill-posed super-resolution problem. In this paper, we argue that domain expertise represented by the conventional sparse coding model is still valuable, and it can be combined with the key ingredients of deep learning to achieve further improved results. We show that a sparse coding model particularly designed for super-resolution can be incarnated as a neural network, and trained in a cascaded structure from end to end. The interpretation of the network based on sparse coding leads to much more efficient and effective training, as well as a reduced model size. Our model is evaluated on a wide range of images, and shows clear advantage over existing state-of-the-art methods in terms of both restoration accuracy and human subjective quality.", "", "Recent studies have shown that the performance of single-image super-resolution methods can be significantly boosted by using deep convolutional neural networks. In this study, we present a novel single-image super-resolution method by introducing dense skip connections in a very deep network. In the proposed network, the feature maps of each layer are propagated into all subsequent layers, providing an effective way to combine the low-level features and high-level features to boost the reconstruction performance. 
In addition, the dense skip connections in the network enable short paths to be built directly from the output to each layer, alleviating the vanishing-gradient problem of very deep networks. Moreover, deconvolution layers are integrated into the network to learn the upsampling filters and to speedup the reconstruction process. Further, the proposed method substantially reduces the number of parameters, enhancing the computational efficiency. We evaluate the proposed method using images from four benchmark datasets and set a new state of the art.", "", "We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) [15] that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage.", "", "A very deep convolutional neural network (CNN) has recently achieved great success for image super-resolution (SR) and offered hierarchical features as well. However, most deep CNN based SR models do not make full use of the hierarchical features from the original low-resolution (LR) images, thereby achieving relatively-low performance. In this paper, we propose a novel residual dense network (RDN) to address this problem in image SR. We fully exploit the hierarchical features from all the convolutional layers. Specifically, we propose residual dense block (RDB) to extract abundant local features via dense connected convolutional layers. 
RDB further allows direct connections from the state of preceding RDB to all the layers of current RDB, leading to a contiguous memory (CM) mechanism. Local feature fusion in RDB is then used to adaptively learn more effective features from preceding and current local features and stabilizes the training of wider network. After fully obtaining dense local features, we use global feature fusion to jointly and adaptively learn global hierarchical features in a holistic way. Experiments on benchmark datasets with different degradation models show that our RDN achieves favorable performance against state-of-the-art methods.", "", "We present an algorithm to directly restore a clear highresolution image from a blurry low-resolution input. This problem is highly ill-posed and the basic assumptions for existing super-resolution methods (requiring clear input) and deblurring methods (requiring high-resolution input) no longer hold. We focus on face and text images and adopt a generative adversarial network (GAN) to learn a category-specific prior to solve this problem. However, the basic GAN formulation does not generate realistic highresolution images. In this work, we introduce novel training losses that help recover fine details. We also present a multi-class GAN that can process multi-class image restoration tasks, i.e., face and text images, using a single generator network. Extensive experiments demonstrate that our method performs favorably against the state-of-the-art methods on both synthetic and real-world images at a lower computational cost." ] }
1905.12156
2947023800
Most existing super-resolution methods do not perform well in real scenarios due to lack of realistic training data and information loss of the model input. To solve the first problem, we propose a new pipeline to generate realistic training data by simulating the imaging process of digital cameras. And to remedy the information loss of the input, we develop a dual convolutional neural network to exploit the originally captured radiance information in raw images. In addition, we propose to learn a spatially-variant color transformation which helps more effective color corrections. Extensive experiments demonstrate that super-resolution with raw data helps recover fine details and clear structures, and more importantly, the proposed network and data generation pipeline achieve superior results for single image super-resolution in real scenarios.
Joint super-resolution and demosaicing. Many existing methods for this problem estimate a high-resolution color image from multiple low-resolution frames @cite_26 @cite_6 . More closely related to our task, Zhou et al. @cite_10 propose a deep residual network for single image super-resolution with mosaiced images. However, this model is trained on gamma-corrected image pairs, which may not work well for real linear data. More importantly, these works do not consider the complex color correction steps applied by camera ISPs, and thus cannot recover high-fidelity color appearances. Different from them, the proposed algorithm solves the problems of image restoration and color correction simultaneously, making it more suitable for real applications.
{ "cite_N": [ "@cite_26", "@cite_10", "@cite_6" ], "mid": [ "2119302101", "2788066301", "1988739356" ], "abstract": [ "In the last two decades, two related categories of problems have been studied independently in image restoration literature: super-resolution and demosaicing. A closer look at these problems reveals the relation between them, and, as conventional color digital cameras suffer from both low-spatial resolution and color-filtering, it is reasonable to address them in a unified context. In this paper, we propose a fast and robust hybrid method of super-resolution and demosaicing, based on a maximum a posteriori estimation technique by minimizing a multiterm cost function. The L sub 1 norm is used for measuring the difference between the projected estimate of the high-resolution image and each low-resolution image, removing outliers in the data and errors due to possibly inaccurate motion estimation. Bilateral regularization is used for spatially regularizing the luminance component, resulting in sharp edges and forcing interpolation along the edges and not across them. Simultaneously, Tikhonov regularization is used to smooth the chrominance components. Finally, an additional regularization term is used to force similar edge location and orientation in different color channels. We show that the minimization of the total cost function is relatively easy and fast. Experimental results on synthetic and real data sets confirm the effectiveness of our method.", "In digital photography, two image restoration tasks have been studied extensively and resolved independently: demosaicing and super-resolution. Both these tasks are related to resolution limitations of the camera. Performing super-resolution on a demosaiced images simply exacerbates the artifacts introduced by demosaicing. In this paper, we show that such accumulation of errors can be easily averted by jointly performing demosaicing and super-resolution. 
To this end, we propose a deep residual network for learning an end-to-end mapping between Bayer images and high-resolution images. By training on high-quality samples, our deep residual demosaicing and super-resolution network is able to recover high-quality super-resolved images from low-resolution Bayer mosaics in a single step without producing the artifacts common to such processing when the two operations are done separately. We perform extensive experiments to show that our deep residual network achieves demosaiced and super-resolved images that are superior to the state-of-the-art both qualitatively and in terms of PSNR and SSIM metrics.", "We present a new algorithm that performs demosaicing and super-resolution jointly from a set of raw images sampled with a color filter array. Such a combined approach allows us to compute the alignment parameters between the images on the raw camera data before interpolation artifacts are introduced. After image registration, a high resolution color image is reconstructed at once using the full set of images. For this, we use normalized convolution, an image interpolation method from a nonuniform set of samples. Our algorithm is tested and compared to other approaches in simulations and practical experiments." ] }
1905.12156
2947023800
Most existing super-resolution methods do not perform well in real scenarios due to lack of realistic training data and information loss of the model input. To solve the first problem, we propose a new pipeline to generate realistic training data by simulating the imaging process of digital cameras. And to remedy the information loss of the input, we develop a dual convolutional neural network to exploit the originally captured radiance information in raw images. In addition, we propose to learn a spatially-variant color transformation which helps more effective color corrections. Extensive experiments demonstrate that super-resolution with raw data helps recover fine details and clear structures, and more importantly, the proposed network and data generation pipeline achieve superior results for single image super-resolution in real scenarios.
Learning-based raw image processing. In recent years, learning-based methods have been proposed for raw image processing @cite_35 @cite_23 @cite_19 . Jiang et al. @cite_35 propose to learn a large collection of local linear filters to approximate the complex nonlinear ISP pipelines. Following their work, Schwartz et al. @cite_19 use deep CNNs to learn the color correction operations of specific digital cameras. Chen et al. @cite_23 train a neural network with raw data as input for fast low-light imaging. In this work, we learn color correction in the context of raw image super-resolution. Instead of learning a color correction pipeline for one specific camera, we use a low-resolution color image as reference for handling images from more diverse ISP systems.
{ "cite_N": [ "@cite_19", "@cite_35", "@cite_23" ], "mid": [ "2949775781", "2327935541", "2799265886" ], "abstract": [ "We present DeepISP, a full end-to-end deep neural model of the camera image signal processing pipeline. Our model learns a mapping from the raw low-light mosaiced image to the final visually compelling image and encompasses low-level tasks, such as demosaicing and denoising, as well as higher-level tasks, such as color correction and image adjustment. The training and evaluation of the pipeline were performed on a dedicated data set containing pairs of low-light and well-lit images captured by a Samsung S7 smartphone camera in both raw and processed JPEG formats. The proposed solution achieves the state-of-the-art performance in objective evaluation of peak signal-to-noise ratio on the subtask of joint denoising and demosaicing. For the full end-to-end pipeline, it achieves better visual quality compared to the manufacturer ISP, in both a subjective human assessment and when rated by a deep model trained for assessing image quality.", "Many creative ideas are being proposed for image sensor designs, and these may be useful in applications ranging from consumer photography to computer vision. To understand and evaluate each new design, we must create a corresponding image processing pipeline that transforms the sensor data into a form, that is appropriate for the application. The need to design and optimize these pipelines is time-consuming and costly. We explain a method that combines machine learning and image systems simulation that automates the pipeline design. The approach is based on a new way of thinking of the image processing pipeline as a large collection of local linear filters. We illustrate how the method has been used to design pipelines for novel sensor architectures in consumer photography applications.", "Imaging in low light is challenging due to low photon count and low SNR. 
Short-exposure images suffer from noise, while long exposure can induce blur and is often impractical. A variety of denoising, deblurring, and enhancement techniques have been proposed, but their effectiveness is limited in extreme conditions, such as video-rate imaging at night. To support the development of learning-based pipelines for low-light image processing, we introduce a dataset of raw short-exposure low-light images, with corresponding long-exposure reference images. Using the presented dataset, we develop a pipeline for processing low-light images, based on end-to-end training of a fully-convolutional network. The network operates directly on raw sensor data and replaces much of the traditional image processing pipeline, which tends to perform poorly on such data. We report promising results on the new dataset, analyze factors that affect performance, and highlight opportunities for future work." ] }
1905.12104
2947331922
In many real world situations, collective decisions are made using voting. Moreover, scenarios such as committee or board elections require voting rules that return multiple winners. In multi-winner approval voting (AV), an agent may vote for as many candidates as they wish. Winners are chosen by tallying up the votes and choosing the top- @math candidates receiving the most votes. An agent may manipulate the vote to achieve a better outcome by voting in a way that does not reflect their true preferences. In complex and uncertain situations, agents may use heuristics to strategize, instead of incurring the additional effort required to compute the manipulation which most favors them. In this paper, we examine voting behavior in multi-winner approval voting scenarios with complete information. We show that people generally manipulate their vote to obtain a better outcome, but often do not identify the optimal manipulation. Instead, voters tend to prioritize the candidates with the highest utilities. Using simulations, we demonstrate the effectiveness of these heuristics in situations where agents only have access to partial information.
Approval voting is a set of methods for aggregating group preferences that are particularly popular among economists, computer scientists, psychologists, and beyond @cite_23 @cite_6 . There are even multiple political action committees (PACs) in the United States, e.g., The Center for Election Science (https://www.electionscience.org), that are committed to seeing the United States change voting procedures from plurality to approval voting. Perhaps one reason for this popularity is that participants are allowed to express preferences over a set of candidates and not just a single one. In France, a large study run parallel to the 2002 election showed that many voters would have preferred approval ballots to traditional plurality ballots @cite_28 .
{ "cite_N": [ "@cite_28", "@cite_6", "@cite_23" ], "mid": [ "1979145342", "", "587373156" ], "abstract": [ "This paper presents a large-scale experiment on the Approval Voting rule that took place during the 2002 French presidential election. We describe the experiment and its main results. The findings are as follows: (i) Such an experiment is feasible, and very well accepted by voters. (ii) The principle of approval voting is easily understood and accepted. (iii) Within the observed political context, compared to the official first-round vote, approval voting modifies the overall ranking of candidates. (iv) The candidates Le Pen and Chirac, more than the others, were able to convert approval votes into official first-round votes.", "", "to the Handbook on Approval Voting.- History of Approval Voting.- Acclamation Voting in Sparta: An Early Use of Approval Voting.- Going from Theory to Practice: The Mixed Success of Approval Voting.- Axiomatic Theory.- Collective Choice for Simple Preferences.- Axiomatizations of Approval Voting.- Committees.- Approval Balloting for Multi-winner Elections.- Does Choosing Committees from Approval Balloting Fulfill the Electorate's Will?.- Strategic Voting.- The Basic Approval Voting Game.- Approval Voting in Large Electorates.- Computational Aspects of Approval Voting.- Probabilistic Exercises.- On the Condorcet Efficiency of Approval Voting and Extended Scoring Rules for Three Alternatives.- Behavioral Heterogeneity Under Approval and Plurality Voting.- In Silico Voting Experiments.- Experiments.- Laboratory Experiments on Approval Voting.- Framed Field Experiments on Approval Voting: Lessons from the 2002 and 2007 French Presidential Elections.- Approval Voting in Germany: Description of a Field Experiment.- Electoral Competition.- Classical Electoral Competition Under Approval Voting.- Policy Moderation and Endogenous Candidacy in Approval Voting Elections.- Meaning for Individual and Society.- Describing Society Through 
Approval Data.- Approval as an Intrinsic Part of Preference." ] }
1905.12104
2947331922
In many real world situations, collective decisions are made using voting. Moreover, scenarios such as committee or board elections require voting rules that return multiple winners. In multi-winner approval voting (AV), an agent may vote for as many candidates as they wish. Winners are chosen by tallying up the votes and choosing the top- @math candidates receiving the most votes. An agent may manipulate the vote to achieve a better outcome by voting in a way that does not reflect their true preferences. In complex and uncertain situations, agents may use heuristics to strategize, instead of incurring the additional effort required to compute the manipulation which most favors them. In this paper, we examine voting behavior in multi-winner approval voting scenarios with complete information. We show that people generally manipulate their vote to obtain a better outcome, but often do not identify the optimal manipulation. Instead, voters tend to prioritize the candidates with the highest utilities. Using simulations, we demonstrate the effectiveness of these heuristics in situations where agents only have access to partial information.
The complexity of manipulation for various types of approval voting (AV) has received considerable attention in the Computational Social Choice (COMSOC) literature @cite_35 . COMSOC is an area of research that investigates computational issues surrounding the aggregation of individual preferences. To start, assuming that agents act rationally and have full information about the votes of other agents, AV is strategy-proof when agents have dichotomous preferences, i.e., when all agents have utility 1 or 0 for candidates they approve or disapprove of, respectively. When agents have general utilities, a vote that maximizes the agent's utility can be computed in polynomial time @cite_3 @cite_4 . For variants of AV including Proportional Approval Voting, Satisfaction Approval Voting, and Reweighted Approval Voting, the complexity of finding utility-maximizing votes, even if they are manipulations, ranges from easy to coNP-complete @cite_29 .
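The multi-winner AV rule discussed here (tally approvals, take the top- @math candidates) is simple to sketch. The snippet below is a minimal illustration, not the polynomial-time algorithms of the cited works: the best response is found by brute force over all ballots, which is exponential in the number of candidates and only practical for small illustrative instances. All function names and tie-breaking choices are assumptions made for the sketch.

```python
from itertools import combinations

def av_winners(ballots, k):
    """Multi-winner approval voting: tally approvals, return the top-k candidates.
    ballots: list of sets of approved candidates.
    For this sketch, ties are broken alphabetically; candidates with no
    approvals are never elected."""
    tally = {}
    for ballot in ballots:
        for c in ballot:
            tally[c] = tally.get(c, 0) + 1
    # Sort by (approvals descending, name) for a deterministic outcome.
    return set(sorted(tally, key=lambda c: (-tally[c], c))[:k])

def best_response(others, utilities, k):
    """Brute-force best response for one agent, given the other agents' ballots
    and the agent's (general, not necessarily dichotomous) utilities.
    Exponential in the number of candidates; for illustration only."""
    candidates = list(utilities)
    best_ballot, best_util = set(), float("-inf")
    for r in range(len(candidates) + 1):
        for ballot in combinations(candidates, r):
            winners = av_winners(others + [set(ballot)], k)
            u = sum(utilities[c] for c in winners)
            if u > best_util:
                best_util, best_ballot = u, set(ballot)
    return best_ballot, best_util
```

Note that the brute-force search ranges over all subsets of candidates, so a sincere ballot and its utility-maximizing manipulations are compared on equal footing; the heuristics studied in the paper can be seen as cheap approximations of this search.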
{ "cite_N": [ "@cite_35", "@cite_29", "@cite_4", "@cite_3" ], "mid": [ "2484891765", "1575960470", "2126893170", "2110172168" ], "abstract": [ "The rapidly growing field of computational social choice, at the intersection of computer science and economics, deals with the computational aspects of collective decision making. This handbook, written by thirty-six prominent members of the computational social choice community, covers the field comprehensively. Chapters devoted to each of the field's major themes offer detailed introductions. Topics include voting theory (such as the computational complexity of winner determination and manipulation in elections), fair allocation (such as algorithms for dividing divisible and indivisible goods), coalition formation (such as matching and hedonic games), and many more. Graduate students, researchers, and professionals in computer science, economics, mathematics, political science, and philosophy will benefit from this accessible and self-contained book.", "We study computational aspects of three prominent voting rules that use approval ballots to select multiple winners. These rules are proportional approval voting, reweighted approval voting, and satisfaction approval voting. Each rule is designed with the intention to compute a representative winning set. We show that computing the winner for proportional approval voting is NP-hard, closing an open problem (Kilgour, 2010). As none of the rules we examine are strategy-proof, we study various strategic aspects of the rules. In particular, we examine the computational complexity of computing a best response for both a single agent and a group of agents. 
In many settings, we show that it is NP-hard for an agent or agents to compute how best to vote given a fixed set of approval ballots of the other agents.", "Although recent years have seen a surge of interest in the computational aspects of social choice, no specific attention has previously been devoted to elections with multiple winners, e.g., elections of an assembly or committee. In this paper, we characterize the worst-case complexity of manipulation and control in the context of four prominent multiwinner voting systems, under different formulations of the strategic agent's goal.", "Recent work by Procaccia, Rosenschein and Zohar [14] established some results regarding the complexity of manipulation and control in elections with multiple winners, such as elections of an assembly or committee; that work provided an initial understanding of the topic. In this paper, we paint a more complete picture of the topic, investigating four prominent multi-winner voting rules. First, we characterize the complexity of manipulation and control in these voting rules under various kinds of formalizations of the manipulator's goal. Second, we extend the results about complexity of control to various well-known types of control. This work enhances our comprehension of which multi-winner voting rules should be employed in various settings." ] }
1905.12104
2947331922
In many real world situations, collective decisions are made using voting. Moreover, scenarios such as committee or board elections require voting rules that return multiple winners. In multi-winner approval voting (AV), an agent may vote for as many candidates as they wish. Winners are chosen by tallying up the votes and choosing the top- @math candidates receiving the most votes. An agent may manipulate the vote to achieve a better outcome by voting in a way that does not reflect their true preferences. In complex and uncertain situations, agents may use heuristics to strategize, instead of incurring the additional effort required to compute the manipulation which most favors them. In this paper, we examine voting behavior in multi-winner approval voting scenarios with complete information. We show that people generally manipulate their vote to obtain a better outcome, but often do not identify the optimal manipulation. Instead, voters tend to prioritize the candidates with the highest utilities. Using simulations, we demonstrate the effectiveness of these heuristics in situations where agents only have access to partial information.
In this paper we move beyond simulations and re-purposed data to studying the preferences and decision-making behavior of actual decision makers in realistic settings @cite_14 . This goal of collecting focused data is perhaps best known from the fields of experimental economics @cite_25 and behavioral game theory @cite_5 . Essentially, we can use payments to individuals to study how they would act in real-world settings. There are a number of best practices for providing appropriate incentives to individuals, including over crowd-working platforms, when studying decision making @cite_18 @cite_19 .
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_19", "@cite_5", "@cite_25" ], "mid": [ "1984022436", "2268401097", "2170493558", "1713503745", "1626837244" ], "abstract": [ "Amazon’s Mechanical Turk is an online labor market where requesters post jobs and workers choose which jobs to do for pay. The central purpose of this article is to demonstrate how to use this Web site for conducting behavioral research and to lower the barrier to entry for researchers who could benefit from this platform. We describe general techniques that apply to a variety of types of research and experiments across disciplines. We begin by discussing some of the advantages of doing experiments on Mechanical Turk, such as easy access to a large, stable, and diverse subject pool, the low cost of doing experiments, and faster iteration between developing theory and executing experiments. While other methods of conducting behavioral research may be comparable to or even better than Mechanical Turk on one or more of the axes outlined above, we will show that when taken as a whole Mechanical Turk can be a useful tool for many researchers. We will discuss how the behavior of workers compares with that of experts and laboratory subjects. Then we will illustrate the mechanics of putting a task on Mechanical Turk, including recruiting subjects, executing the task, and reviewing the work that was submitted. We also provide solutions to common problems that a researcher might face when executing their research on this platform, including techniques for conducting synchronous experiments, methods for ensuring high-quality work, how to keep data private, and how to maintain code security.", "Many mathematical frameworks aim at modeling human preferences, employing a number of methods including utility functions, qualitative preference statements, constraint optimization, and logic formalisms. 
The choice of one model over another is usually based on the assumption that it can accurately describe the preferences of humans or other subjects processes in the considered setting and is computationally tractable. Verification of these preference models often leverages some form of real life or domain specific data; demonstrating the models can predict the series of choices observed in the past. We argue that this is not enough: to evaluate a preference model, humans must be brought into the loop. Human experiments in controlled environments are needed to avoid common pitfalls associated with exclusively using prior data including introducing bias in the attempt to clean the data, mistaking correlation for causality, or testing data in a context that is different from the one where the data were produced. Human experiments need to be done carefully and we advocate a multi-disciplinary research environment that includes experimental psychologists and AI researchers. We argue that experiments should be used to validate models. We detail the design of an experiment in order to highlight some of the significant computational, conceptual, ethical, mathematical, psychological, and statistical hurdles to testing whether decision makers' preferences are consistent with a particular mathematical model of preferences.", "We study the causal effects of financial incentives on the quality of crowdwork. We focus on performance-based payments (PBPs), bonus payments awarded to workers for producing high quality work. We design and run randomized behavioral experiments on the popular crowdsourcing platform Amazon Mechanical Turk with the goal of understanding when, where, and why PBPs help, identifying properties of the payment, payment structure, and the task itself that make them most effective. We provide examples of tasks for which PBPs do improve quality. 
For such tasks, the effectiveness of PBPs is not too sensitive to the threshold for quality required to receive the bonus, while the magnitude of the bonus must be large enough to make the reward salient. We also present examples of tasks for which PBPs do not improve quality. Our results suggest that for PBPs to improve quality, the task must be effort-responsive: the task must allow workers to produce higher quality work by exerting more effort. We also give a simple method to determine if a task is effort-responsive a priori. Furthermore, our experiments suggest that all payments on Mechanical Turk are, to some degree, implicitly performance-based in that workers believe their work may be rejected if their performance is sufficiently poor. Finally, we propose a new model of worker behavior that extends the standard principal-agent model from economics to include a worker's subjective beliefs about his likelihood of being paid, and show that the predictions of this model are in line with our experimental findings. This model may be useful as a foundation for theoretical studies of incentives in crowdsourcing markets.", "Game theory, the formalized study of strategy, began in the 1940s by asking how emotionless geniuses should play games, but ignored until recently how average people with emotions and limited foresight actually play games. This book marks the first substantial and authoritative effort to close this gap. Colin Camerer, one of the field's leading figures, uses psychological principles and hundreds of experiments to develop mathematical theories of reciprocity, limited strategizing, and learning, which help predict what real people and companies do in strategic situations. Unifying a wealth of information from ongoing studies in strategic behavior, he takes the experimental science of behavioral economics a major step forward. He does so in lucid, friendly prose. 
Behavioral game theory has three ingredients that come clearly into focus in this book: mathematical theories of how moral obligation and vengeance affect the way people bargain and trust each other; a theory of how limits in the brain constrain the number of steps of \"I think he thinks . . .\" reasoning people naturally do; and a theory of how people learn from experience to make better strategic decisions. Strategic interactions that can be explained by behavioral game theory include bargaining, games of bluffing as in sports and poker, strikes, how conventions help coordinate a joint activity, price competition and patent races, and building up reputations for trustworthiness or ruthlessness in business or life. While there are many books on standard game theory that address the way ideally rational actors operate, Behavioral Game Theory stands alone in blending experimental evidence and psychology in a mathematical theory of normal strategic behavior. It is must reading for anyone who seeks a more complete understanding of strategic thinking, from professional economists to scholars and students of economics, management studies, psychology, political science, anthropology, and biology.", "This book, which comprises eight chapters, presents a comprehensive critical survey of the results and methods of laboratory experiments in economics. The first chapter provides an introduction to experimental economics as a whole, with the remaining chapters providing surveys by leading practitioners in areas of economics that have seen a concentration of experiments: public goods, coordination problems, bargaining, industrial organization, asset markets, auctions, and individual decision making." ] }
1905.12104
2947331922
In many real world situations, collective decisions are made using voting. Moreover, scenarios such as committee or board elections require voting rules that return multiple winners. In multi-winner approval voting (AV), an agent may vote for as many candidates as they wish. Winners are chosen by tallying up the votes and choosing the top- @math candidates receiving the most votes. An agent may manipulate the vote to achieve a better outcome by voting in a way that does not reflect their true preferences. In complex and uncertain situations, agents may use heuristics to strategize, instead of incurring the additional effort required to compute the manipulation which most favors them. In this paper, we examine voting behavior in multi-winner approval voting scenarios with complete information. We show that people generally manipulate their vote to obtain a better outcome, but often do not identify the optimal manipulation. Instead, voters tend to prioritize the candidates with the highest utilities. Using simulations, we demonstrate the effectiveness of these heuristics in situations where agents only have access to partial information.
Three recent papers address strategic voting under the plurality rule, where agents make decisions in uncertain environments. First, @cite_36 study the voting behavior of agents under the plurality rule with three options. They find that the amount of information available to the voters affects the decision on whether or not to vote strategically, and that in many cases the strategic decisions do not affect the outcome of the plurality vote.
{ "cite_N": [ "@cite_36" ], "mid": [ "1533108901" ], "abstract": [ "We theoretically and experimentally study voter behavior in a setting characterized by plurality rule and mandatory voting, where voters choose from three options. We are interested in the occurrence of strategic voting in an environment where Condorcet cycles may occur. In particular, we focus on how information about the distribution of preferences affects strategic behavior. We also vary the relative importance of the second preferred option to investigate how this affects the strategic vote. Quantal response equilibrium analysis is used to analyze the game and proves to be a good predictor for the experimental data. Our results indeed show that strategic voting arises, the extent of which depends on (i) the availability of information; (ii) the relative importance of the intermediate candidate; (iii) the electorate's relative support for one's preferred candidate; and (iv) the relative position of the plurality-supported candidate in a voter's preference ordering. Our results show that information serves as a coordination device where strategic voting does not harm the plurality-preferred candidate's chances of winning." ] }
1905.12104
2947331922
In many real world situations, collective decisions are made using voting. Moreover, scenarios such as committee or board elections require voting rules that return multiple winners. In multi-winner approval voting (AV), an agent may vote for as many candidates as they wish. Winners are chosen by tallying up the votes and choosing the top- @math candidates receiving the most votes. An agent may manipulate the vote to achieve a better outcome by voting in a way that does not reflect their true preferences. In complex and uncertain situations, agents may use heuristics to strategize, instead of incurring the additional effort required to compute the manipulation which most favors them. In this paper, we examine voting behavior in multi-winner approval voting scenarios with complete information. We show that people generally manipulate their vote to obtain a better outcome, but often do not identify the optimal manipulation. Instead, voters tend to prioritize the candidates with the highest utilities. Using simulations, we demonstrate the effectiveness of these heuristics in situations where agents only have access to partial information.
There are many other closely related studies of strategic voting and preference reporting in cooperative and noncooperative settings. Recently, a large system was developed to look at how users report preferences to an allocation mechanism and to study the strategic behavior of those users, finding that most users were myopic and that a limited search space accounted for most strategic voting. There is also a series of papers looking at predicting and modeling user behavior in simple games, including voting games @cite_22 @cite_1 @cite_37 . In each of these papers, large corpora of humans playing simple games are collected, and models are then proposed to fit the observed behavior. Generally speaking, most participants are strategic only in a limited sense and do not look ahead as far into the future as one would expect based on formal game-theoretic models @cite_33 . Other recent work on human decision making in cooperative game settings includes weighted voting games @cite_9 and coalition allocation games @cite_20 .
{ "cite_N": [ "@cite_37", "@cite_22", "@cite_33", "@cite_9", "@cite_1", "@cite_20" ], "mid": [ "1948149164", "2061826561", "1970744965", "2740219783", "2552149088", "" ], "abstract": [ "Studies in experimental economics have consistently demonstrated that Nash equilibrium is a poor description of human players' behavior in unrepeated normal-form games. Behavioral game theory offers alternative models that more accurately describe human behavior in these settings. These models typically depend upon the values of exogenous parameters, which are estimated based on experimental data. We describe methods for deriving and analyzing the posterior distributions over the parameters of such models, and apply these techniques to study two popular models (Poisson-CH and QLk), the latter of which we previously showed to be the best-performing existing model in a comparison of four widely-studied behavioral models [22]. Drawing on a large set of publicly available experimental data, we derive concrete recommendations for the parameters that should be used with Poisson-CH, contradicting previous recommendations in the literature. We also uncover anomalies in QLk that lead us to develop a new, simpler, and better-performing family of models.", "Behavioral game theory seeks to describe the way actual people (as compared to idealized, 'rational' agents) act in strategic situations. Our own recent work has identified iterative models (such as quantal cognitive hierarchy) as the state of the art for predicting human play in unrepeated, simultaneous-move games [Wright and Leyton-Brown 2012]. Iterative models predict that agents reason iteratively about their opponents, building up from a specification of nonstrategic behavior called level-0. The modeler is in principle free to choose any description of level-0 behavior that makes sense for the given setting; however, in practice almost all existing work specifies this behavior as a uniform distribution over actions. 
In most games it is not plausible that even nonstrategic agents would choose an action uniformly at random, nor that other agents would expect them to do so. A more accurate model for level-0 behavior has the potential to dramatically improve predictions of human behavior, since a substantial fraction of agents may play level-0 strategies directly, and furthermore since iterative models ground all higher-level strategies in responses to the level-0 strategy. Our work considers 'meta-models' of level-0 behavior: models of the way in which level-0 agents construct a probability distribution over actions, given an arbitrary game. We evaluated many such meta-models, each of which makes its prediction based only on general features that can be computed from any normal form game. We evaluated the effects of combining each new level-0 meta-model with various iterative models, and in many cases observed large improvements in the models' predictive accuracies. In the end, we recommend a meta-model that achieved excellent performance across the board: a linear weighting of features that requires the estimation of five weights.", "We suggest a new model for strategic voting based on local dominance, where voters consider a set of possible outcomes without assigning probabilities to them. We prove that voting equilibria under the Plurality rule exist for a broad class of local dominance relations. Furthermore, we show that local dominance-based dynamics quickly converge to an equilibrium if voters start from the truthful state, and we provide weaker convergence guarantees in more general settings. 
Using extensive simulations of strategic voting on generated and real profiles, we show that emerging equilibria replicate widely known patterns of human voting behavior such as Duverger's law, and that they generally improve the quality of the winner compared to non-strategic voting.", "", "Predicting the behavior of human participants in strategic settings is an important problem in many domains. Most existing work either assumes that participants are perfectly rational, or attempts to directly model each participant's cognitive processes based on insights from cognitive psychology and experimental economics. In this work, we present an alternative, a deep learning approach that automatically performs cognitive modeling without relying on such expert knowledge. We introduce a novel architecture that allows a single network to generalize across different input and output dimensions by using matrix units rather than scalar units, and show that its performance significantly outperforms that of the previous state of the art, which relies on expert-constructed features.", "" ] }
1905.11963
2946960497
A well-known problem in data science and machine learning is linear regression, which has recently been extended to dynamic graphs. Existing exact algorithms for updating the solution of the dynamic graph regression problem require at least linear time (in terms of @math : the size of the graph). However, this time complexity might be intractable in practice. In the current paper, we utilize the subsampled randomized Hadamard transform and CountSketch to propose the first randomized algorithms. Suppose that we are given an @math matrix embedding @math of the graph, where @math . Let @math be the number of samples required for a guaranteed approximation error, which is a sublinear function of @math . Our first algorithm reduces the time complexity of pre-processing to @math . Then after an edge insertion or an edge deletion, it updates the approximate solution in @math time. Our second algorithm reduces the time complexity of pre-processing to @math , where @math is the number of nonzero elements of @math . Then after an edge insertion or an edge deletion or a node insertion or a node deletion, it updates the approximate solution in @math time, with @math .
A research problem that may have some connection to our studied problem is learning embeddings or representations for nodes or subgraphs of a graph @cite_6 , @cite_17 , @cite_1 . While this problem has become more attractive in recent years, it dates back several decades. For example, Parsons and Pisanski @cite_10 presented vector embeddings for nodes of a graph such that the inner product of the vector embeddings of any two nodes @math and @math is negative iff @math and @math are connected by an edge, and it is @math otherwise.
{ "cite_N": [ "@cite_10", "@cite_1", "@cite_6", "@cite_17" ], "mid": [ "2095567216", "2406128552", "2962756421", "2315403234" ], "abstract": [ "Abstract Let G be a simple graph with vertices 1, 2, …, n . We consider representations of G by nonzero vectors x 1 ,x 2 ,…,x n ϵ R d such that for i ≠ j the inner product x i · x j is negative or zero according as vertex i is adjacent to or not adjacent to vertex j . The least dimension d necessary for such representations is studied as a function of G and of various restrictions placed upon the coordinates of the vectors and the values of the inner products. In particular, for some d such that d n 2 , there always exists a representation in which all coordinates are from −1,1 and all negative inner products have the same value. Various types of vector representations of graphs are related to Hadamard matrices, families of sets with prescribed intersections, and other combinatorial problems.", "Numerous important problems can be framed as learning from graph data. We propose a framework for learning convolutional neural networks for arbitrary graphs. These graphs may be undirected, directed, and with both discrete and continuous node and edge attributes. Analogous to image-based convolutional networks that operate on locally connected regions of the input, we present a general approach to extracting locally connected regions from graphs. Using established benchmark data sets, we demonstrate that the learned feature representations are competitive with state of the art graph kernels and that their computation is highly efficient.", "Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. 
However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node's network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations. We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks.", "We present a semi-supervised learning framework based on graph embeddings. Given a graph between instances, we train an embedding for each instance to jointly predict the class label and the neighborhood context in the graph. We develop both transductive and inductive variants of our method. In the transductive variant of our method, the class labels are determined by both the learned embeddings and input feature vectors, while in the inductive variant, the embeddings are defined as a parametric function of the feature vectors, so predictions can be made on instances not seen during training. On a large and diverse set of benchmark tasks, including text classification, distantly supervised entity extraction, and entity classification, we show improved performance over many of the existing models." ] }
1905.11931
2947356012
Adversarial training is a useful approach to promote the learning of transferable representations across the source and target domains, and it has been widely applied to domain adaptation (DA) tasks based on deep neural networks. Until very recently, existing adversarial domain adaptation (ADA) methods have ignored the useful information from the label space, which is an important factor accounting for the complicated data distributions associated with different semantic classes. In particular, inter-class semantic relationships have rarely been considered and discussed in current work on transfer learning. In this paper, we propose a novel relationship-aware adversarial domain adaptation (RADA) algorithm, which first utilizes a single multi-class domain discriminator to enforce the learning of the inter-class dependency structure during domain-adversarial training and then aligns this structure with the inter-class dependencies characterized by training the label predictor on the source domain. Specifically, we impose a regularization term to penalize the structure discrepancy between the inter-class dependencies estimated from the domain discriminator and the label predictor, respectively. Through this alignment, our proposed method makes ADA aware of class relationships. Empirical studies show that incorporating class relationships significantly improves performance on benchmark datasets.
Deep DA methods attempt to generalize deep neural networks across different domains. The most commonly used approaches are based on discrepancy minimization @cite_15 @cite_3 @cite_6 @cite_19 @cite_13 @cite_20 or adversarial training @cite_7 @cite_0 @cite_30 @cite_12 . Adversarial training, inspired by generative modeling in GANs @cite_16 , is an important approach for deep transfer learning tasks. DANN @cite_33 was proposed with a domain discriminator that classifies whether a sample comes from the source or the target domain @cite_24 @cite_7 . With a gradient reversal layer (GRL), it promotes the learning of discriminative features for classification and ensures that the learned feature distributions over different domains are similar. Recent works realize the importance of exploiting the complex structure behind the data distributions for DA, rather than just aligning the whole source and target distributions @cite_28 @cite_14 . Multi-adversarial domain adaptation (MADA) utilizes information from the label space by assigning class-wise discriminators to capture the multimodal structure owing to different classes @cite_17 . However, the dependency structure of the label space remains unexplored for DA.
{ "cite_N": [ "@cite_30", "@cite_14", "@cite_33", "@cite_7", "@cite_28", "@cite_6", "@cite_3", "@cite_0", "@cite_19", "@cite_24", "@cite_15", "@cite_16", "@cite_13", "@cite_12", "@cite_20", "@cite_17" ], "mid": [ "", "", "1989946722", "2963826681", "2795155917", "2963275094", "2159291411", "", "", "1731081199", "1565327149", "", "", "2798681837", "", "2788768841" ], "abstract": [ "", "", "Multi-task learning (MTL) aims to improve generalization performance by learning multiple related tasks simultaneously. While sometimes the underlying task relationship structure is known, often the structure needs to be estimated from data at hand. In this paper, we present a novel family of models for MTL, applicable to regression and classification problems, capable of learning the structure of task relationships. In particular, we consider a joint estimation problem of the task relationship structure and the individual task parameters, which is solved using alternating minimization. The task relationship structure learning component builds on recent advances in structure learning of Gaussian graphical models based on sparse estimators of the precision (inverse covariance) matrix. We illustrate the effectiveness of the proposed model on a variety of synthetic and benchmark datasets for regression and classification. We also consider the problem of combining climate model outputs for better projections of future climate, with focus on temperature in South America, and show that the proposed model outperforms several existing methods for the problem.", "Top-performing deep architectures are trained on massive amounts of labeled data. In the absence of labeled data for a certain task, domain adaptation often provides an attractive option given that labeled data of similar nature but from a different domain (e.g. synthetic images) are available. 
Here, we propose a new approach to domain adaptation in deep architectures that can be trained on large amount of labeled data from the source domain and large amount of unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of \"deep\" features that are (i) discriminative for the main learning task on the source domain and (ii) invariant with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with few standard layers and a simple new gradient reversal layer. The resulting augmented architecture can be trained using standard back propagation. Overall, the approach can be implemented with little effort using any of the deep-learning packages. The method performs very well in a series of image classification experiments, achieving adaptation effect in the presence of big domain shifts and outperforming previous state-of-the-art on Office datasets.", "Adversarial learning has been embedded into deep networks to learn transferable representations for domain adaptation. Existing adversarial domain adaptation methods may struggle to align different domains of multimode distributions that are native in classification problems. In this paper, we present conditional adversarial domain adaptation, a new framework that conditions the adversarial adaptation models on discriminative information conveyed in the classifier predictions. Conditional domain adversarial networks are proposed to enable discriminative adversarial adaptation of multimode domains. Experiments testify that the proposed approaches exceed the state-of-the-art results on three domain adaptation datasets.", "Unlike human learning, machine learning often fails to handle changes between training (source) and test (target) input distributions. 
Such domain shifts, common in practical scenarios, severely damage the performance of conventional machine learning methods. Supervised domain adaptation methods have been proposed for the case when the target data have labels, including some that perform very well despite being \"frustratingly easy\" to implement. However, in practice, the target domain is often unlabeled, requiring unsupervised adaptation. We propose a simple, effective, and efficient method for unsupervised domain adaptation called CORrelation ALignment (CORAL). CORAL minimizes domain shift by aligning the second-order statistics of source and target distributions, without requiring any target labels. Even though it is extraordinarily simple–it can be implemented in four lines of Matlab code–CORAL performs remarkably well in extensive evaluations on standard benchmark datasets.", "Recent studies reveal that a deep neural network can learn transferable features which generalize well to novel tasks for domain adaptation. However, as deep features eventually transition from general to specific along the network, the feature transferability drops significantly in higher layers with increasing domain discrepancy. Hence, it is important to formally reduce the dataset bias and enhance the transferability in task-specific layers. In this paper, we propose a new Deep Adaptation Network (DAN) architecture, which generalizes deep convolutional neural network to the domain adaptation scenario. In DAN, hidden representations of all task-specific layers are embedded in a reproducing kernel Hilbert space where the mean embeddings of different domain distributions can be explicitly matched. The domain discrepancy is further reduced using an optimal multikernel selection method for mean embedding matching. DAN can learn transferable features with statistical guarantees, and can scale linearly by unbiased estimate of kernel embedding. 
Extensive empirical evidence shows that the proposed architecture yields state-of-the-art image classification error rates on standard domain adaptation benchmarks.", "", "", "We introduce a new representation learning approach for domain adaptation, in which data at training and test time come from similar but different distributions. Our approach is directly inspired by the theory on domain adaptation suggesting that, for effective domain transfer to be achieved, predictions must be made based on features that cannot discriminate between the training (source) and test (target) domains. The approach implements this idea in the context of neural network architectures that are trained on labeled data from the source domain and unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of features that are (i) discriminative for the main learning task on the source domain and (ii) indiscriminate with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with few standard layers and a new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation and stochastic gradient descent, and can thus be implemented with little effort using any of the deep learning packages. We demonstrate the success of our approach for two distinct classification problems (document sentiment analysis and image classification), where state-of-the-art domain adaptation performance on standard benchmarks is achieved. We also validate the approach for descriptor learning task in the context of person re-identification application.", "Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias on a standard benchmark. 
Fine-tuning deep models in a new domain can require a significant amount of data, which for many applications is simply not available. We propose a new CNN architecture which introduces an adaptation layer and an additional domain confusion loss, to learn a representation that is both semantically meaningful and domain invariant. We additionally show that a domain confusion metric can be used for model selection to determine the dimension of an adaptation layer and the best position for the layer in the CNN architecture. Our proposed adaptation method offers empirical performance which exceeds previously published results on a standard benchmark visual domain adaptation task.", "", "", "In this paper, we propose a new unsupervised domain adaptation approach called Collaborative and Adversarial Network (CAN) through domain-collaborative and domain-adversarial training of neural networks. We add several domain classifiers on multiple CNN feature extraction blocks, in which each domain classifier is connected to the hidden representations from one block and one loss function is defined based on the hidden representation and the domain labels (e.g., source and target). We design a new loss function by integrating the losses from all blocks in order to learn domain informative representations from lower blocks through collaborative learning and learn domain uninformative representations from higher blocks through adversarial learning. We further extend our CAN method as Incremental CAN (iCAN), in which we iteratively select a set of pseudo-labelled target samples based on the image classifier and the last domain classifier from the previous training epoch and re-train our CAN model by using the enlarged training set. Comprehensive experiments on two benchmark datasets Office and ImageCLEF-DA clearly demonstrate the effectiveness of our newly proposed approaches CAN and iCAN for unsupervised domain adaptation.", "", "" ] }
1905.11931
2947356012
Adversarial training is a useful approach to promote the learning of transferable representations across the source and target domains, and it has been widely applied to domain adaptation (DA) tasks based on deep neural networks. Until very recently, existing adversarial domain adaptation (ADA) methods have ignored the useful information from the label space, which is an important factor accounting for the complicated data distributions associated with different semantic classes. In particular, inter-class semantic relationships have rarely been considered and discussed in current work on transfer learning. In this paper, we propose a novel relationship-aware adversarial domain adaptation (RADA) algorithm, which first utilizes a single multi-class domain discriminator to enforce the learning of the inter-class dependency structure during domain-adversarial training and then aligns this structure with the inter-class dependencies characterized by training the label predictor on the source domain. Specifically, we impose a regularization term to penalize the structure discrepancy between the inter-class dependencies estimated from the domain discriminator and the label predictor, respectively. Through this alignment, our proposed method makes ADA aware of class relationships. Empirical studies show that incorporating class relationships significantly improves performance on benchmark datasets.
Multi-task learning (MTL) seeks to improve generalization performance by transferring knowledge among related tasks. This knowledge-sharing property makes it possible to learn the structure among tasks, so structure learning, which studies how to accurately characterize task relationships, has become a central issue in MTL @cite_33 @cite_25 . As one of the earliest MTL models, DNNs also share certain commonalities (neurons of the hidden layer) among the neurons of the output layers @cite_27 @cite_23 . Inspired by methods that explicitly model task relationships in MTL @cite_8 @cite_34 , recent studies on multi-class classification using CNNs exploit and harness inter-class relationships by imposing a regularization term, which has been successfully validated to improve video categorization performance @cite_23 .
{ "cite_N": [ "@cite_33", "@cite_8", "@cite_27", "@cite_23", "@cite_34", "@cite_25" ], "mid": [ "1989946722", "2401583556", "", "1777628566", "2964121793", "" ], "abstract": [ "Multi-task learning (MTL) aims to improve generalization performance by learning multiple related tasks simultaneously. While sometimes the underlying task relationship structure is known, often the structure needs to be estimated from data at hand. In this paper, we present a novel family of models for MTL, applicable to regression and classification problems, capable of learning the structure of task relationships. In particular, we consider a joint estimation problem of the task relationship structure and the individual task parameters, which is solved using alternating minimization. The task relationship structure learning component builds on recent advances in structure learning of Gaussian graphical models based on sparse estimators of the precision (inverse covariance) matrix. We illustrate the effectiveness of the proposed model on a variety of synthetic and benchmark datasets for regression and classification. We also consider the problem of combining climate model outputs for better projections of future climate, with focus on temperature in South America, and show that the proposed model outperforms several existing methods for the problem.", "Multi-task learning (MTL) aims to improve generalization performance by learning multiple related tasks simultaneously. While sometimes the underlying task relationship structure is known, often the structure needs to be estimated from data at hand. In this paper, we present a novel family of models for MTL, applicable to regression and classification problems, capable of learning the structure of tasks relationship. In particular, we consider a joint estimation problem of the tasks relationship structure and the individual task parameters, which is solved using alternating minimization. 
The task relationship revealed by structure learning is founded on recent advances in Gaussian graphical models endowed with sparse estimators of the precision (inverse covariance) matrix. An extension to include flexible Gaussian copula models that relaxes the Gaussian marginal assumption is also proposed. We illustrate the effectiveness of the proposed model on a variety of synthetic and benchmark data sets for regression and classification. We also consider the problem of combining Earth System Model (ESM) outputs for better projections of future climate, with focus on projections of temperature by combining ESMs in South and North America, and show that the proposed model outperforms several existing methods for the problem.", "", "In this paper, we study the challenging problem of categorizing videos according to high-level semantics such as the existence of a particular human action or a complex event. Although extensive efforts have been devoted in recent years, most existing works combined multiple video features using simple fusion strategies and neglected the utilization of inter-class semantic relationships. This paper proposes a novel unified framework that jointly exploits the feature relationships and the class relationships for improved categorization performance. Specifically, these two types of relationships are estimated and utilized by imposing regularizations in the learning process of a deep neural network (DNN). Through arming the DNN with better capability of harnessing both the feature and the class relationships, the proposed regularized DNN (rDNN) is more suitable for modeling video semantics. We show that rDNN produces better performance over several state-of-the-art approaches. Competitive results are reported on the well-known Hollywood2 and Columbia Consumer Video benchmarks.
In addition, to stimulate future research on large scale video categorization, we collect and release a new benchmark dataset, called FCVID, which contains 91,223 Internet videos and 239 manually annotated categories.", "Multi-task learning is a learning paradigm which seeks to improve the generalization performance of a learning task with the help of some other related tasks. In this paper, we propose a regularization formulation for learning the relationships between tasks in multi-task learning. This formulation can be viewed as a novel generalization of the regularization framework for single-task learning. Besides modeling positive task correlation, our method, called multi-task relationship learning (MTRL), can also describe negative task correlation and identify outlier tasks based on the same underlying principle. Under this regularization framework, the objective function of MTRL is convex. For efficiency, we use an alternating method to learn the optimal model parameters for each task as well as the relationships between tasks. We study MTRL in the symmetric multi-task learning setting and then generalize it to the asymmetric setting as well. We also study the relationships between MTRL and some existing multi-task learning methods. Experiments conducted on a toy problem as well as several benchmark data sets demonstrate the effectiveness of MTRL.", "" ] }
1905.12028
2947175736
Image alignment across domains has recently become one of the realistic and popular topics in the research community. In this problem, a deep learning-based image alignment method is usually trained on an available largescale database. During the testing steps, this trained model is deployed on unseen images collected under different camera conditions and modalities. The delivered deep network models are unable to be updated, adapted or fine-tuned in these scenarios. Thus, recent deep learning techniques, e.g. domain adaptation, feature transferring, and fine-tuning, are unable to be deployed. This paper presents a novel deep learning based approach to tackle the problem of across unseen modalities. The proposed network is then applied to image alignment as an illustration. The proposed approach is designed as an end-to-end deep convolutional neural network to optimize the deep models to improve the performance. The proposed network has been evaluated in digit recognition when the model is trained on MNIST and then tested on unseen domain MNIST-M. Finally, the proposed method is benchmarked in image alignment problem when training on RGB images and testing on Depth and X-Ray images.
Image alignment for medical images was developed early on. Feature-based methods such as SIFT @cite_17 and SURF @cite_15 are popularly adopted for general computer vision applications, especially image alignment. Xiahai et al. @cite_6 presented normal vector information, a concept for evaluating the similarity between two images. Jingfan et al. @cite_9 introduced an unsupervised image alignment method for medical images, in which the registration network is trained with feedback from a discrimination network designed to judge whether a pair of registered images is aligned.
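Feature-based methods such as SIFT/SURF ultimately estimate a geometric transform from matched keypoints. The sketch below shows only that least-squares affine-estimation step on synthetic correspondences; the function name and the three-point setup are hypothetical, not taken from any cited implementation.

```python
import numpy as np

# Given matched keypoints src_i -> dst_i, estimate the 2x3 affine matrix A
# such that dst ≈ A @ [x, y, 1] by ordinary least squares.
def fit_affine(src, dst):
    n = len(src)
    X = np.hstack([src, np.ones((n, 1))])        # n x 3 homogeneous coords
    sol, *_ = np.linalg.lstsq(X, dst, rcond=None)  # 3 x 2 solution
    return sol.T                                  # 2 x 3 affine matrix

src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
dst = src * 2.0 + np.array([3.0, -1.0])           # scale by 2, then translate
A = fit_affine(src, dst)
warped = np.hstack([src, np.ones((3, 1))]) @ A.T
assert np.allclose(warped, dst)                   # recovered transform aligns src onto dst
```

Real pipelines would wrap this in RANSAC to reject mismatched keypoint pairs before fitting.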
{ "cite_N": [ "@cite_9", "@cite_15", "@cite_6", "@cite_17" ], "mid": [ "2890761445", "", "2148091162", "2151103935" ], "abstract": [ "This paper introduces an unsupervised adversarial similarity network for image registration. Unlike existing deep learning registration frameworks, our approach does not require ground-truth deformations and specific similarity metrics. We connect a registration network and a discrimination network with a deformable transformation layer. The registration network is trained with feedback from the discrimination network, which is designed to judge whether a pair of registered images are sufficiently similar. Using adversarial training, the registration network is trained to predict deformations that are accurate enough to fool the discrimination network. Experiments on four brain MRI datasets indicate that our method yields registration performance that is promising in both accuracy and efficiency compared with state-of-the-art registration methods, including those based on deep learning.", "", "In this paper, a new approach on image registration is presented. We introduce a novel conception- normal vector information (NVI) – to evaluate the similarity between two images. NVI method takes advantage of the relationship between voxels in the image to extract the normal vector (NV) information of each voxel. Firstly, NVI criterion is presented. Then, based on the criterion, we find that NVI related metric has a quite perfect global optimal value on transformation parameter ranges. Finally, registration examples which are based on NVI criterion are provided. The result implies that the feature of smooth value distribution and one global optimal value that NVI metric has makes the optimization procedure much easier to be implemented in image registration.", "This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. 
The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance." ] }
1905.12028
2947175736
Image alignment across domains has recently become one of the realistic and popular topics in the research community. In this problem, a deep learning-based image alignment method is usually trained on an available largescale database. During the testing steps, this trained model is deployed on unseen images collected under different camera conditions and modalities. The delivered deep network models are unable to be updated, adapted or fine-tuned in these scenarios. Thus, recent deep learning techniques, e.g. domain adaptation, feature transferring, and fine-tuning, are unable to be deployed. This paper presents a novel deep learning based approach to tackle the problem of across unseen modalities. The proposed network is then applied to image alignment as an illustration. The proposed approach is designed as an end-to-end deep convolutional neural network to optimize the deep models to improve the performance. The proposed network has been evaluated in digit recognition when the model is trained on MNIST and then tested on unseen domain MNIST-M. Finally, the proposed method is benchmarked in image alignment problem when training on RGB images and testing on Depth and X-Ray images.
Domain adaptation has recently become one of the most popular research topics in computer vision and machine learning @cite_13 @cite_0 @cite_16 @cite_10 @cite_14 . @cite_14 proposed a framework for unsupervised domain adaptation based on adversarial learning objectives. @cite_8 presented the Coupled Generative Adversarial Network to learn a joint distribution of multi-domain images. @cite_13 proposed to incorporate both classification and domain adaptation into a unified network so that both tasks can be learned together.
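The adversarial framework of @cite_13 hinges on a gradient reversal layer between the feature extractor and the domain classifier: identity in the forward pass, sign-flipped (and scaled) gradient in the backward pass, so features are trained to confuse the domain classifier. A minimal sketch with manual forward/backward passes; the class name is illustrative.

```python
import numpy as np

# Gradient reversal layer (GRL) sketch: forward is the identity, backward
# multiplies the incoming gradient by -lambda.
class GradientReversal:
    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x                        # identity in the forward direction

    def backward(self, grad_output):
        return -self.lam * grad_output  # reversed (and scaled) gradient

grl = GradientReversal(lam=0.5)
x = np.array([1.0, -2.0, 3.0])
assert np.allclose(grl.forward(x), x)
g = np.array([0.2, 0.4, -0.6])
assert np.allclose(grl.backward(g), [-0.1, -0.2, 0.3])
```

In an autodiff framework the same effect is obtained with a custom op whose gradient is negated, leaving the rest of standard backpropagation untouched.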
{ "cite_N": [ "@cite_13", "@cite_14", "@cite_8", "@cite_0", "@cite_16", "@cite_10" ], "mid": [ "2963826681", "2593768305", "2963784072", "2953226914", "2551835155", "1565327149" ], "abstract": [ "Top-performing deep architectures are trained on massive amounts of labeled data. In the absence of labeled data for a certain task, domain adaptation often provides an attractive option given that labeled data of similar nature but from a different domain (e.g. synthetic images) are available. Here, we propose a new approach to domain adaptation in deep architectures that can be trained on large amount of labeled data from the source domain and large amount of unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of \"deep\" features that are (i) discriminative for the main learning task on the source domain and (ii) invariant with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with few standard layers and a simple new gradient reversal layer. The resulting augmented architecture can be trained using standard back propagation. Overall, the approach can be implemented with little effort using any of the deep-learning packages. The method performs very well in a series of image classification experiments, achieving adaptation effect in the presence of big domain shifts and outperforming previous state-of-the-art on Office datasets.", "Adversarial learning methods are a promising approach to training robust deep networks, and can generate complex samples across diverse domains. They can also improve recognition despite the presence of domain shift or dataset bias: recent adversarial approaches to unsupervised domain adaptation reduce the difference between the training and test domain distributions and thus improve generalization performance. 
However, while generative adversarial networks (GANs) show compelling visualizations, they are not optimal on discriminative tasks and can be limited to smaller shifts. On the other hand, discriminative approaches can handle larger domain shifts, but impose tied weights on the model and do not exploit a GAN-based loss. In this work, we first outline a novel generalized framework for adversarial adaptation, which subsumes recent state-of-the-art approaches as special cases, and use this generalized view to better relate prior approaches. We then propose a previously unexplored instance of our general framework which combines discriminative modeling, untied weight sharing, and a GAN loss, which we call Adversarial Discriminative Domain Adaptation (ADDA). We show that ADDA is more effective yet considerably simpler than competing domain-adversarial methods, and demonstrate the promise of our approach by exceeding state-of-the-art unsupervised adaptation results on standard domain adaptation tasks as well as a difficult cross-modality object classification task.", "We propose coupled generative adversarial network (CoGAN) for learning a joint distribution of multi-domain images. In contrast to the existing approaches, which require tuples of corresponding images in different domains in the training set, CoGAN can learn a joint distribution without any tuple of corresponding images. It can learn a joint distribution with just samples drawn from the marginal distributions. This is achieved by enforcing a weight-sharing constraint that limits the network capacity and favors a joint distribution solution over a product of marginal distributions one. We apply CoGAN to several joint distribution learning tasks, including learning a joint distribution of color and depth images, and learning a joint distribution of face images with different attributes. For each task it successfully learns the joint distribution without any tuple of corresponding images. 
We also demonstrate its applications to domain adaptation and image transformation.", "Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias. Fine-tuning deep models in a new domain can require a significant amount of labeled data, which for many applications is simply not available. We propose a new CNN architecture to exploit unlabeled and sparsely labeled target domain data. Our approach simultaneously optimizes for domain invariance to facilitate domain transfer and uses a soft label distribution matching loss to transfer information between tasks. Our proposed adaptation method offers empirical performance which exceeds previously published results on two standard benchmark visual domain adaptation tasks, evaluated across supervised and semi-supervised adaptation settings.", "Supervised learning with large scale labelled datasets and deep layered models has caused a paradigm shift in diverse areas in learning and recognition. However, this approach still suffers from generalization issues under the presence of a domain shift between the training and the test data distribution. Since unsupervised domain adaptation algorithms directly address this domain shift problem between a labelled source dataset and an unlabelled target dataset, recent papers have shown promising results by fine-tuning the networks with domain adaptation loss functions which try to align the mismatch between the training and testing data distributions. Nevertheless, these recent deep learning based domain adaptation approaches still suffer from issues such as high sensitivity to the gradient reversal hyperparameters and overfitting during the fine-tuning stage. In this paper, we propose a unified deep learning framework where the representation, cross domain transformation, and target label inference are all jointly optimized in an end-to-end fashion for unsupervised domain adaptation. 
Our experiments show that the proposed method significantly outperforms state-of-the-art algorithms in both object recognition and digit classification experiments by a large margin. We will make our learned models as well as the source code available immediately upon acceptance.", "Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias on a standard benchmark. Fine-tuning deep models in a new domain can require a significant amount of data, which for many applications is simply not available. We propose a new CNN architecture which introduces an adaptation layer and an additional domain confusion loss, to learn a representation that is both semantically meaningful and domain invariant. We additionally show that a domain confusion metric can be used for model selection to determine the dimension of an adaptation layer and the best position for the layer in the CNN architecture. Our proposed adaptation method offers empirical performance which exceeds previously published results on a standard benchmark visual domain adaptation task." ] }
1905.12028
2947175736
Image alignment across domains has recently become one of the realistic and popular topics in the research community. In this problem, a deep learning-based image alignment method is usually trained on an available largescale database. During the testing steps, this trained model is deployed on unseen images collected under different camera conditions and modalities. The delivered deep network models are unable to be updated, adapted or fine-tuned in these scenarios. Thus, recent deep learning techniques, e.g. domain adaptation, feature transferring, and fine-tuning, are unable to be deployed. This paper presents a novel deep learning based approach to tackle the problem of across unseen modalities. The proposed network is then applied to image alignment as an illustration. The proposed approach is designed as an end-to-end deep convolutional neural network to optimize the deep models to improve the performance. The proposed network has been evaluated in digit recognition when the model is trained on MNIST and then tested on unseen domain MNIST-M. Finally, the proposed method is benchmarked in image alignment problem when training on RGB images and testing on Depth and X-Ray images.
Domain generalization aims to learn a robust model that generalizes well to unseen testing scenarios. @cite_3 proposed the Multi-Task Autoencoder, which learns to transform the original image into analogs in multiple related domains and thereby makes features more robust to variations across domains. Meanwhile, MMD-AAE @cite_7 learns a feature representation by jointly optimizing a multi-domain autoencoder regularized via the Maximum Mean Discrepancy (MMD) distance. @cite_18 proposed a kernel-based algorithm for minimizing the differences in the marginal distributions of multiple domains. @cite_4 presented an end-to-end conditional-invariant deep domain generalization approach that leverages deep neural networks for domain-invariant representation learning. Adversarial Data Augmentation (ADA) @cite_1 was introduced to generalize to unseen domains.
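The MMD distance that MMD-AAE @cite_7 uses to align feature distributions across domains can be estimated empirically from samples. Below is a sketch of the biased RBF-kernel estimator; the bandwidth, sample sizes, and synthetic data are arbitrary assumptions for illustration.

```python
import numpy as np

# Biased empirical estimate of squared MMD with an RBF kernel:
# MMD^2 = mean k(X,X) + mean k(Y,Y) - 2 mean k(X,Y).
def rbf_kernel(X, Y, gamma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(X, Y, gamma=1.0):
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2.0 * rbf_kernel(X, Y, gamma).mean())

rng = np.random.default_rng(0)
same = rng.normal(0.0, 1.0, (200, 2))       # two sample sets from N(0, I)...
also_same = rng.normal(0.0, 1.0, (200, 2))
shifted = rng.normal(3.0, 1.0, (200, 2))    # ...and one shifted distribution

# Same-distribution samples are closer under MMD than shifted ones.
assert mmd2(same, also_same) < mmd2(same, shifted)
```

MMD-AAE minimizes such a statistic over encoder features from multiple source domains, driving them toward a shared distribution.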
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_7", "@cite_1", "@cite_3" ], "mid": [ "2155858138", "2894728917", "2798658180", "2962935454", "1920962657" ], "abstract": [ "This paper investigates domain generalization: How to take knowledge acquired from an arbitrary number of related domains and apply it to previously unseen domains? We propose Domain-Invariant Component Analysis (DICA), a kernel-based optimization algorithm that learns an invariant transformation by minimizing the dissimilarity across domains, whilst preserving the functional relationship between input and output variables. A learning-theoretic analysis shows that reducing dissimilarity improves the expected generalization ability of classifiers on new domains, motivating the proposed algorithm. Experimental results on synthetic and real-world datasets demonstrate that DICA successfully learns invariant features and improves classifier performance in practice.", "Domain generalization aims to learn a classification model from multiple source domains and generalize it to unseen target domains. A critical problem in domain generalization involves learning domain-invariant representations. Let X and Y denote the features and the labels, respectively. Under the assumption that the conditional distribution P(Y|X) remains unchanged across domains, earlier approaches to domain generalization learned the invariant representation T(X) by minimizing the discrepancy of the marginal distribution P(T(X)). However, such an assumption of stable P(Y|X) does not necessarily hold in practice. In addition, the representation learning function T(X) is usually constrained to a simple linear transformation or shallow networks. To address the above two drawbacks, we propose an end-to-end conditional invariant deep domain generalization approach by leveraging deep neural networks for domain-invariant representation learning. 
The domain-invariance property is guaranteed through a conditional invariant adversarial network that can learn domain-invariant representations w.r.t. the joint distribution P(T(X), Y) if the target domain data are not severely class unbalanced. We perform various experiments to demonstrate the effectiveness of the proposed method.", "In this paper, we tackle the problem of domain generalization: how to learn a generalized feature representation for an \"unseen\" target domain by taking the advantage of multiple seen source-domain data. We present a novel framework based on adversarial autoencoders to learn a generalized latent feature representation across domains for domain generalization. To be specific, we extend adversarial autoencoders by imposing the Maximum Mean Discrepancy (MMD) measure to align the distributions among different domains, and matching the aligned distribution to an arbitrary prior distribution via adversarial feature learning. In this way, the learned feature representation is supposed to be universal to the seen source domains because of the MMD regularization, and is expected to generalize well on the target domain because of the introduction of the prior distribution. We proposed an algorithm to jointly train different components of our proposed framework. Extensive experiments on various vision tasks demonstrate that our proposed framework can learn better generalized features for the unseen target domain compared with state-of-the-art domain generalization methods.", "We are concerned with learning models that generalize well to different unseen domains. We consider a worst-case formulation over data distributions that are near the source domain in the feature space. Only using training data from the source domain, we propose an iterative procedure that augments the dataset with examples from a fictitious target domain that is \"hard\" under the current model. 
We show that our iterative scheme is an adaptive data augmentation method where we append adversarial examples at each iteration. For softmax losses, we show that our method is a data-dependent regularization scheme that behaves differently from classical regularizers (e.g., ridge or lasso) that regularize towards zero. On digit recognition and semantic segmentation tasks, we empirically observe that our method learns models that improve performance across a priori unknown data distributions.", "The problem of domain generalization is to take knowledge acquired from a number of related domains, where training data is available, and to then successfully apply it to previously unseen domains. We propose a new feature learning algorithm, Multi-Task Autoencoder (MTAE), that provides good generalization performance for cross-domain object recognition. The algorithm extends the standard denoising autoencoder framework by substituting artificially induced corruption with naturally occurring inter-domain variability in the appearance of objects. Instead of reconstructing images from noisy versions, MTAE learns to transform the original image into analogs in multiple related domains. It thereby learns features that are robust to variations across domains. The learnt features are then used as inputs to a classifier. We evaluated the performance of the algorithm on benchmark image recognition datasets, where the task is to learn features from multiple datasets and to then predict the image label from unseen datasets. We found that (denoising) MTAE outperforms alternative autoencoder-based models as well as the current state-of-the-art algorithms for domain generalization." ] }
1905.11903
2947054175
We present an analysis of embeddings extracted from different pre-trained models for content-based image retrieval. Specifically, we study embeddings from image classification and object detection models. We discover that even with additional human annotations such as bounding boxes and segmentation masks, the discriminative power of the embeddings based on modern object detection models is significantly worse than their classification counterparts for the retrieval task. At the same time, our analysis also unearths that object detection model can help retrieval task by acting as a hard attention module for extracting object embeddings that focus on salient region from the convolutional feature map. In order to efficiently extract object embeddings, we introduce a simple guided student-teacher training paradigm for learning discriminative embeddings within the object detection framework. We support our findings with strong experimental results.
Object detection aims to detect different objects in an input image. Girshick et al. @cite_33 proposed one of the first deep learning based object detection models, R-CNN, which improved accuracy significantly compared to traditional methods @cite_47 @cite_48 @cite_25 . Since then, many enhancements @cite_5 @cite_36 @cite_30 @cite_57 have been made to improve accuracy as well as training and inference time. A comprehensive survey of recent deep learning based object detection methods can be found in @cite_67 . By taking advantage of recent successes in object detection, our model can learn discriminative object-level embeddings for image retrieval. Most recently, Teichmann et al. @cite_14 utilized a specialized landmark detection model to aggregate deep local features @cite_60 for landmark retrieval. Object detection has also been used to improve the performance of other vision tasks such as visual question answering @cite_49 .
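The idea of letting a detector act as hard attention over a convolutional feature map can be sketched as cropping and pooling the feature map inside a detected box to obtain an object-level embedding. The box format and the choice of average pooling are assumptions for illustration, not the paper's exact operator.

```python
import numpy as np

# Hard-attention sketch: average the feature vectors inside a detected box
# (given in feature-map coordinates) to form an object embedding that
# ignores background activations.
def object_embedding(feat, box):
    """feat: H x W x C feature map; box: (y0, x0, y1, x1), exclusive ends."""
    y0, x0, y1, x1 = box
    region = feat[y0:y1, x0:x1, :]                 # crop the salient region
    return region.reshape(-1, feat.shape[-1]).mean(axis=0)

feat = np.zeros((8, 8, 4))
feat[2:5, 3:6, :] = 1.0                            # an "object" with constant features
emb = object_embedding(feat, (2, 3, 5, 6))
assert np.allclose(emb, np.ones(4))                # background zeros do not dilute it
```

A global average over the full map would instead mix object and background, which is precisely what box-guided pooling avoids.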
{ "cite_N": [ "@cite_30", "@cite_67", "@cite_14", "@cite_33", "@cite_60", "@cite_36", "@cite_48", "@cite_57", "@cite_49", "@cite_5", "@cite_47", "@cite_25" ], "mid": [ "", "2890715498", "2949590848", "2102605133", "2964157791", "", "", "", "2745461083", "2613718673", "2161969291", "" ], "abstract": [ "", "Object detection, one of the most fundamental and challenging problems in computer vision, seeks to locate object instances from a large number of predefined categories in natural images. Deep learning techniques have emerged as a powerful strategy for learning feature representations directly from data and have led to remarkable breakthroughs in the field of generic object detection. Given this period of rapid evolution, the goal of this paper is to provide a comprehensive survey of the recent achievements in this field brought about by deep learning techniques. More than 300 research contributions are included in this survey, covering many aspects of generic object detection: detection frameworks, object feature representation, object proposal generation, context modeling, training strategies, and evaluation metrics. We finish the survey by identifying promising directions for future research.", "Retrieving object instances among cluttered scenes efficiently requires compact yet comprehensive regional image representations. Intuitively, object semantics can help build the index that focuses on the most relevant regions. However, due to the lack of bounding-box datasets for objects of interest among retrieval benchmarks, most recent work on regional representations has focused on either uniform or class-agnostic region selection. In this paper, we first fill the void by providing a new dataset of landmark bounding boxes, based on the Google Landmarks dataset, that includes @math images with manually curated boxes from @math unique landmarks. 
Then, we demonstrate how a trained landmark detector, using our new dataset, can be leveraged to index image regions and improve retrieval accuracy while being much more efficient than existing regional methods. In addition, we introduce a novel regional aggregated selective match kernel (R-ASMK) to effectively combine information from detected regions into an improved holistic image representation. R-ASMK boosts image retrieval accuracy substantially with no dimensionality increase, while even outperforming systems that index image regions independently. Our complete image retrieval system improves upon the previous state-of-the-art by significant margins on the Revisited Oxford and Paris datasets. Code and data available at the project webpage: this https URL.", "Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. 
Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.", "We propose an attentive local feature descriptor suitable for large-scale image retrieval, referred to as DELF (DEep Local Feature). The new feature is based on convolutional neural networks, which are trained only with image-level annotations on a landmark image dataset. To identify semantically useful local features for image retrieval, we also propose an attention mechanism for keypoint selection, which shares most network layers with the descriptor. This framework can be used for image retrieval as a drop-in replacement for other keypoint detectors and descriptors, enabling more accurate feature matching and geometric verification. Our system produces reliable confidence scores to reject false positives–in particular, it is robust against queries that have no correct match in the database. To evaluate the proposed descriptor, we introduce a new large-scale dataset, referred to as Google-Landmarks dataset, which involves challenges in both database and query such as background clutter, partial occlusion, multiple landmarks, objects in variable scales, etc. We show that DELF outperforms the state-of-the-art global and local descriptors in the large-scale setting by significant margins.
Applying this approach to image captioning, our results on the MSCOCO test server establish a new state-of-the-art for the task, achieving CIDEr SPICE BLEU-4 scores of 117.9, 21.5 and 36.9, respectively. Demonstrating the broad applicability of the method, applying the same approach to VQA we obtain first place in the 2017 VQA Challenge.", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn.", "We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. 
We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.", "" ] }
1905.11691
2946851410
Graph embedding techniques make it possible to learn high-quality feature vectors from graph structures and are useful in a variety of tasks, from node classification to clustering. Existing approaches have only focused on learning feature vectors for the nodes in a (knowledge) graph. To the best of our knowledge, none of them has tackled the problem of embedding graph edges, that is, knowledge graph triples. The approaches that are closer to this task have focused on homogeneous graphs involving only one type of edge and obtain edge embeddings by applying some operation (e.g., average) on the embeddings of the endpoint nodes. The goal of this paper is to introduce Triple2Vec, a new technique to directly embed edges in (knowledge) graphs. Triple2Vec builds upon three main ingredients. The first is the notion of line graph. The line graph of a graph is another graph representing the adjacency between edges of the original graph. In particular, the nodes of the line graph are the edges of the original graph. We show that directly applying existing embedding techniques on the nodes of the line graph to learn edge embeddings is not enough in the context of knowledge graphs. Thus, we introduce the notion of triple line graph. The second is an edge weighting mechanism for line graphs derived from both knowledge graphs and homogeneous graphs. The third is a strategy based on graph walks on the weighted triple line graph that can preserve proximity between nodes. Embeddings are finally generated by adopting the SkipGram model, where sentences are replaced with graph walks. We evaluate our approach on different real-world (knowledge) graphs and compare it with related work.
Another strand of research has focused on heterogeneous graphs, where nodes and edges can have different types @cite_11 @cite_17 @cite_14 @cite_2 . Here, the random walk generation for the Skip-gram model has been adapted to consider node and edge types. RDF2Vec @cite_2 focuses on computing node embeddings by using the continuous bag of words or a Skip-gram model. It computes two kinds of walks: subtrees up to a fixed depth @math and breadth-first search walks (by uniformly sampling the nodes on the walks among the neighbors). metapath2vec @cite_11 uses metapaths to guide the generation of walks, but it also proposes to use heterogeneous negative samples in the Skip-gram model for learning latent vectors of nodes. Hin2vec @cite_17 is an evolution of metapath2vec, which considers multiple metapaths. JUST @cite_14 provides a sampling strategy that balances both the presence of homogeneous and heterogeneous edges and the node distribution over different domains (i.e., node types) in the generated walks. Our approach is also different from TransE @cite_18 and its variants, the goal of which is to learn knowledge graph embeddings to perform link prediction by providing both positive and negative input facts.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_11", "@cite_2", "@cite_17" ], "mid": [ "2127795553", "2896161497", "2743104969", "2523679382", "2767774008" ], "abstract": [ "We consider the problem of embedding entities and relationships of multi-relational data in low-dimensional vector spaces. Our objective is to propose a canonical model which is easy to train, contains a reduced number of parameters and can scale up to very large databases. Hence, we propose TransE, a method which models relationships by interpreting them as translations operating on the low-dimensional embeddings of the entities. Despite its simplicity, this assumption proves to be powerful since extensive experiments show that TransE significantly outperforms state-of-the-art methods in link prediction on two knowledge bases. Besides, it can be successfully trained on a large scale data set with 1M entities, 25k relationships and more than 17M training samples.", "The graph embedding paradigm projects nodes of a graph into a vector space, which can facilitate various downstream graph analysis tasks such as node classification and clustering. To efficiently learn node embeddings from a graph, graph embedding techniques usually preserve the proximity between node pairs sampled from the graph using random walks. In the context of a heterogeneous graph, which contains nodes from different domains, classical random walks are biased towards highly visible domains where nodes are associated with a dominant number of paths. To overcome this bias, existing heterogeneous graph embedding techniques typically rely on meta-paths (i.e., fixed sequences of node types) to guide random walks. However, using these meta-paths either requires prior knowledge from domain experts for optimal meta-path selection, or requires extended computations to combine all meta-paths shorter than a predefined length. In this paper, we propose an alternative solution that does not involve any meta-path. 
Specifically, we propose JUST, a heterogeneous graph embedding technique using random walks with JUmp and STay strategies to overcome the aforementioned bias in a more efficient manner. JUST can not only gracefully balance between homogeneous and heterogeneous edges, it can also balance the node distribution over different domains (i.e., node types). By conducting a thorough empirical evaluation of our method on three heterogeneous graph datasets, we show the superiority of our proposed technique. In particular, compared to a state-of-the-art heterogeneous graph embedding technique Hin2vec, which tries to optimally combine all meta-paths shorter than a predefined length, our technique yields better results in most experiments, with a dramatically reduced embedding learning time (about 3x speedup).", "We study the problem of representation learning in heterogeneous networks. Its unique challenges come from the existence of multiple types of nodes and links, which limit the feasibility of the conventional network embedding techniques. We develop two scalable representation learning models, namely metapath2vec and metapath2vec++. The metapath2vec model formalizes meta-path-based random walks to construct the heterogeneous neighborhood of a node and then leverages a heterogeneous skip-gram model to perform node embeddings. The metapath2vec++ model further enables the simultaneous modeling of structural and semantic correlations in heterogeneous networks. Extensive experiments show that metapath2vec and metapath2vec++ are able to not only outperform state-of-the-art embedding models in various heterogeneous network mining tasks, such as node classification, clustering, and similarity search, but also discern the structural and semantic correlations between diverse network objects.", "Linked Open Data has been recognized as a valuable source for background information in data mining. 
However, most data mining tools require features in propositional form, i.e., a vector of nominal or numerical features associated with an instance, while Linked Open Data sources are graphs by nature. In this paper, we present RDF2Vec, an approach that uses language modeling approaches for unsupervised feature extraction from sequences of words, and adapts them to RDF graphs. We generate sequences by leveraging local information from graph sub-structures, harvested by Weisfeiler-Lehman Subtree RDF Graph Kernels and graph walks, and learn latent numerical representations of entities in RDF graphs. Our evaluation shows that such vector representations outperform existing techniques for the propositionalization of RDF graphs on a variety of different predictive machine learning tasks, and that feature vector representations of general knowledge graphs such as DBpedia and Wikidata can be easily reused for different tasks.", "In this paper, we propose a novel representation learning framework, namely HIN2Vec, for heterogeneous information networks (HINs). The core of the proposed framework is a neural network model, also called HIN2Vec, designed to capture the rich semantics embedded in HINs by exploiting different types of relationships among nodes. Given a set of relationships specified in forms of meta-paths in an HIN, HIN2Vec carries out multiple prediction training tasks jointly based on a target set of relationships to learn latent vectors of nodes and meta-paths in the HIN. In addition to model design, several issues unique to HIN2Vec, including regularization of meta-path vectors, node type selection in negative sampling, and cycles in random walks, are examined. To validate our ideas, we learn latent vectors of nodes using four large-scale real HIN datasets, including Blogcatalog, Yelp, DBLP and U.S. Patents, and use them as features for multi-label node classification and link prediction applications on those networks. 
Empirical results show that HIN2Vec soundly outperforms the state-of-the-art representation learning models for network data, including DeepWalk, LINE, node2vec, PTE, HINE and ESim, by 6.6% to 23.8% of @math - @math in multi-label node classification and 5% to 70.8% of @math in link prediction." ] }
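The Triple2Vec abstract above rests on two building blocks: the line graph (edges of the original graph become nodes) and graph walks that feed a Skip-gram model. The following is only a rough illustrative sketch of those two steps in plain Python, not the paper's implementation; the edge weighting and the SkipGram training are omitted, and the toy graph and function names are invented.

```python
import random
from itertools import combinations

def line_graph(edges):
    """Line graph: each edge of the original graph becomes a node; two such
    nodes are adjacent iff the original edges share an endpoint."""
    adj = {e: set() for e in edges}
    for e1, e2 in combinations(edges, 2):
        if set(e1) & set(e2):
            adj[e1].add(e2)
            adj[e2].add(e1)
    return adj

def random_walks(adj, num_walks=2, walk_len=4, seed=0):
    """Uniform random walks over the graph; each walk is one 'sentence' of
    edge-nodes that a Skip-gram model could consume in place of words."""
    rng = random.Random(seed)
    walks = []
    for start in adj:
        for _ in range(num_walks):
            walk = [start]
            while len(walk) < walk_len:
                nbrs = sorted(adj[walk[-1]])
                if not nbrs:
                    break
                walk.append(rng.choice(nbrs))
            walks.append(walk)
    return walks

# Toy graph: a triangle a-b-c plus a pendant edge c-d.
edges = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]
lg = line_graph(edges)
walks = random_walks(lg)
```

On the toy graph, the edge ("a", "b") becomes a line-graph node adjacent to the two edges it touches, and the resulting walks are sequences of edges rather than of nodes, which is the shift the paper advocates.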
1905.11669
2947859815
Convolutional Neural Network (CNN) based Deep Learning (DL) has achieved great progress in many real-life applications. Meanwhile, due to complex model structures combined with strict latency and memory restrictions, implementing CNN models on resource-limited platforms is becoming more challenging. This work proposes a solution, called CompactNet Project URL: this https URL , which automatically optimizes a pre-trained CNN model on a specific resource-limited platform given a specific target of inference speedup. Guided by a simulator of the target platform, CompactNet progressively trims a pre-trained network by removing certain redundant filters until the target speedup is reached, and generates an optimal platform-specific model while maintaining the accuracy. We evaluate our work on two platforms of a mobile ARM CPU and a machine learning accelerator NPU (Cambricon-1A ISA) on a Huawei Mate10 smartphone. For MobileNetV2, the state-of-the-art slim CNN model designed for embedded platforms, CompactNet achieves up to a 1.8x kernel computation speedup with equal or even higher accuracy for image classification tasks on the Cifar-10 dataset.
In recent years, a large body of work @cite_16 aiming to optimize CNN models has achieved great success. Most of these works fall into two main categories.
{ "cite_N": [ "@cite_16" ], "mid": [ "2787884921" ], "abstract": [ "Deep neural networks have evolved remarkably over the past few years and they are currently the fundamental tools of many intelligent systems. At the same time, the computational complexity and resource consumption of these networks continue to increase. This poses a significant challenge to the deployment of such networks, especially in real-time applications or on resource-limited devices. Thus, network acceleration has become a hot topic within the deep learning community. As for hardware implementation of deep neural networks, a batch of accelerators based on a field-programmable gate array (FPGA) or an application-specific integrated circuit (ASIC) have been proposed in recent years. In this paper, we provide a comprehensive survey of recent advances in network acceleration, compression, and accelerator design from both algorithm and hardware points of view. Specifically, we provide a thorough analysis of each of the following topics: network pruning, low-rank approximation, network quantization, teacher–student networks, compact network design, and hardware accelerators. Finally, we introduce and discuss a few possible future directions." ] }
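Among the acceleration techniques the survey above covers, compact network design (and MobileNetV2, the baseline CompactNet optimizes) builds on depthwise separable convolutions. A back-of-the-envelope multiply-add count, using the standard cost formulas at stride 1 with arbitrary illustrative layer sizes, shows where the savings come from:

```python
def conv_madds(h, w, c_in, c_out, k):
    """Multiply-adds of a standard k x k convolution over an h x w map."""
    return h * w * c_in * c_out * k * k

def dw_separable_madds(h, w, c_in, c_out, k):
    """Depthwise k x k conv (one filter per input channel) + 1 x 1 pointwise conv."""
    return h * w * c_in * k * k + h * w * c_in * c_out

std = conv_madds(56, 56, 128, 128, 3)          # standard 3x3 convolution
sep = dw_separable_madds(56, 56, 128, 128, 3)  # depthwise separable version
# cost ratio sep / std = 1/c_out + 1/k**2, roughly an 8.4x reduction here
```

The ratio 1/c_out + 1/k**2 explains why a 3x3 depthwise separable layer costs close to one ninth of its standard counterpart once the channel count is large; the kernel-level speedups CompactNet then adds are on top of this architectural saving.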
1905.11669
2947859815
Convolutional Neural Network (CNN) based Deep Learning (DL) has achieved great progress in many real-life applications. Meanwhile, due to the complex model structures against strict latency and memory restriction, the implementation of CNN models on the resource-limited platforms is becoming more challenging. This work proposes a solution, called CompactNet Project URL: this https URL , which automatically optimizes a pre-trained CNN model on a specific resource-limited platform given a specific target of inference speedup. Guided by a simulator of the target platform, CompactNet progressively trims a pre-trained network by removing certain redundant filters until the target speedup is reached and generates an optimal platform-specific model while maintaining the accuracy. We evaluate our work on two platforms of a mobile ARM CPU and a machine learning accelerator NPU (Cambricon-1A ISA) on a Huawei Mate10 smartphone. For the state-of-the-art slim CNN model made for the embedded platform, MobileNetV2, CompactNet achieves up to a 1.8x kernel computation speedup with equal or even higher accuracy for image classification tasks on the Cifar-10 dataset.
First, many well-known works adopt pruning techniques @cite_20 @cite_24 . These approaches focus on removing redundant weights to sparsify the filters in the model, and can be further divided into weight-level @cite_6 @cite_2 , vector-level @cite_19 , kernel-level @cite_1 and group-level @cite_12 @cite_13 pruning. Unfortunately, not all platforms can fully take advantage of such sparse data structures @cite_17 , and therefore there is no guarantee of reducing latency. Other works @cite_10 @cite_9 , in contrast, consider removing entire filters, which yields a more conspicuous speedup. The main issue of these approaches is that they are not automatic or platform-aware: the number of removed filters needs to be set manually, since different backend platforms may have different optimal options. ADC @cite_22 proposes using reinforcement learning, and MorphNet @cite_3 leverages sparsifying regularizers, to decide the compression rates. NetAdapt @cite_11 uses direct metrics as guides for adapting DL models to mobile devices given a specific resource budget. Our CompactNet addresses the same issue in a different way, by removing certain redundant filters according to simulated latency data of the backend platform in order to satisfy the target speedup.
{ "cite_N": [ "@cite_11", "@cite_22", "@cite_10", "@cite_9", "@cite_1", "@cite_6", "@cite_3", "@cite_24", "@cite_19", "@cite_2", "@cite_13", "@cite_12", "@cite_20", "@cite_17" ], "mid": [ "2797480241", "2785761580", "2963363373", "2964233199", "2276892413", "2963981420", "2769900398", "", "2619096655", "2964299589", "2963000224", "566555209", "2963674932", "2657126969" ], "abstract": [ "This work proposes an algorithm, called NetAdapt, that automatically adapts a pre-trained deep neural network to a mobile platform given a resource budget. While many existing algorithms simplify networks based on the number of MACs or weights, optimizing those indirect metrics may not necessarily reduce the direct metrics, such as latency and energy consumption. To solve this problem, NetAdapt incorporates direct metrics into its adaptation algorithm. These direct metrics are evaluated using empirical measurements, so that detailed knowledge of the platform and toolchain is not required. NetAdapt automatically and progressively simplifies a pre-trained network until the resource budget is met while maximizing the accuracy. Experiment results show that NetAdapt achieves better accuracy versus latency trade-offs on both mobile CPU and mobile GPU, compared with the state-of-the-art automated network simplification algorithms. For image classification on the ImageNet dataset, NetAdapt achieves up to a 1.7 @math speedup in measured inference latency with equal or higher accuracy on MobileNets (V1&V2).", "Model compression is an effective technique facilitating the deployment of neural network models on mobile devices that have limited computation resources and a tight power budget. However, conventional model compression techniques use hand-crafted features and require domain experts to explore the large design space trading off model size, speed, and accuracy, which is usually sub-optimal and time-consuming. 
In this paper, we propose Automated Deep Compression (ADC) that leverages reinforcement learning in order to efficiently sample the design space and greatly improve the model compression quality. We achieved state-of-the-art model compression results in a fully automated way without any human effort. Under 4x FLOPs reduction, we achieved 2.7% better accuracy than the hand-crafted model compression method for VGG-16 on ImageNet. We applied this automated, push-the-button compression pipeline to MobileNet and achieved a 2x reduction in FLOPs, and a speedup of 1.49x on Titan Xp and 1.65x on an Android phone (Samsung Galaxy S7), with negligible loss of accuracy.", "In this paper, we introduce a new channel pruning method to accelerate very deep convolutional neural networks. Given a trained CNN model, we propose an iterative two-step algorithm to effectively prune each layer, by a LASSO regression based channel selection and least square reconstruction. We further generalize this algorithm to multi-layer and multi-branch cases. Our method reduces the accumulated error and enhances the compatibility with various architectures. Our pruned VGG-16 achieves the state-of-the-art results by 5× speed-up along with only 0.3% increase of error. More importantly, our method is able to accelerate modern networks like ResNet, Xception and suffers only 1.4%, 1.0% accuracy loss under 2× speedup respectively, which is significant.", "We propose an efficient and unified framework, namely ThiNet, to simultaneously accelerate and compress CNN models in both training and inference stages. We focus on the filter level pruning, i.e., the whole filter would be discarded if it is less important. Our method does not change the original network structure, thus it can be perfectly supported by any off-the-shelf deep learning libraries. 
We formally establish filter pruning as an optimization problem, and reveal that we need to prune filters based on statistics computed from its next layer, not the current layer, which differentiates ThiNet from existing methods. Experimental results demonstrate the effectiveness of this strategy, which has advanced the state-of-the-art. We also show the performance of ThiNet on the ILSVRC-12 benchmark. ThiNet achieves 3.31× FLOPs reduction and 16.63× compression on VGG-16, with only 0.52% top-5 accuracy drop. Similar experiments with ResNet-50 reveal that even for a compact network, ThiNet can also reduce more than half of the parameters and FLOPs, at the cost of roughly 1% top-5 accuracy drop. Moreover, the original VGG-16 model can be further pruned into a very small model with only 5.05MB model size, preserving AlexNet level accuracy but showing much stronger generalization ability.", "Real-time application of deep learning algorithms is often hindered by high computational complexity and frequent memory accesses. Network pruning is a promising technique to solve this problem. However, pruning usually results in irregular network connections that not only demand extra representation efforts but also do not fit well on parallel computation. We introduce structured sparsity at various scales for convolutional neural networks: feature map-wise, kernel-wise, and intra-kernel strided sparsity. This structured sparsity is very advantageous for direct computational resource savings on embedded computers, in parallel computing environments, and in hardware-based systems. To decide the importance of network connections and paths, the proposed method uses a particle filtering approach. The importance weight of each particle is assigned by assessing the misclassification rate with a corresponding connectivity pattern. The pruned network is retrained to compensate for the losses due to pruning. 
While implementing convolutions as matrix products, we particularly show that intra-kernel strided sparsity with a simple constraint can significantly reduce the size of the kernel and feature map tensors. The proposed work shows that when pruning granularities are applied in combination, we can prune the CIFAR-10 network by more than 70% with less than a 1% loss in accuracy.", "Deep learning has become a ubiquitous technology to improve machine intelligence. However, most of the existing deep models are structurally very complex, making them difficult to deploy on mobile platforms with limited computational power. In this paper, we propose a novel network compression method called dynamic network surgery, which can remarkably reduce the network complexity by making on-the-fly connection pruning. Unlike the previous methods which accomplish this task in a greedy way, we properly incorporate connection splicing into the whole process to avoid incorrect pruning and make it as a continual network maintenance. The effectiveness of our method is proved with experiments. Without any accuracy loss, our method can efficiently compress the number of parameters in LeNet-5 and AlexNet by a factor of 108x and 17.7x respectively, proving that it outperforms the recent pruning method by considerable margins. Code and some models are available at https://github.com/yiwenguo/Dynamic-Network-Surgery.", "We present MorphNet, an approach to automate the design of neural network structures. MorphNet iteratively shrinks and expands a network, shrinking via a resource-weighted sparsifying regularizer on activations and expanding via a uniform multiplicative factor on all layers. In contrast to previous approaches, our method is scalable to large networks, adaptable to specific resource constraints (e.g. the number of floating-point operations per inference), and capable of increasing the network's performance. 
When applied to standard network architectures on a wide variety of datasets, our approach discovers novel structures in each domain, obtaining higher performance while respecting the resource constraint.", "", "Sparsity helps reduce the computational complexity of deep neural networks by skipping zeros. Taking advantage of sparsity is listed as a high priority in next generation DNN accelerators such as TPU. The structure of sparsity, i.e., the granularity of pruning, affects the efficiency of hardware accelerator design as well as the prediction accuracy. Coarse-grained pruning creates regular sparsity patterns, making it more amenable to hardware acceleration but more challenging to maintain the same accuracy. In this paper we quantitatively measure the trade-off between sparsity regularity and prediction accuracy, providing insights into how to maintain accuracy while having a more structured sparsity pattern. Our experimental results show that coarse-grained pruning can achieve a sparsity ratio similar to unstructured pruning without loss of accuracy. Moreover, due to the index saving effect, coarse-grained pruning is able to obtain a better compression ratio than fine-grained sparsity at the same accuracy threshold. Based on the recent sparse convolutional neural network accelerator (SCNN), our experiments further demonstrate that coarse-grained sparsity saves about 2x the memory references compared to fine-grained sparsity. Since memory reference is more than two orders of magnitude more expensive than arithmetic operations, the regularity of sparse structure leads to more efficient hardware design.", "Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. 
To address this limitation, we introduce \"deep compression\", a three stage pipeline: pruning, trained quantization and Huffman coding, that work together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing, finally, we apply Huffman coding. After the first two steps we retrain the network to fine tune the remaining connections and the quantized centroids. Pruning, reduces the number of connections by 9x to 13x; Quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49x from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, compressed network has 3x to 4x layerwise speedup and 3x to 7x better energy efficiency.", "High demand for computation resources severely hinders deployment of large-scale Deep Neural Networks (DNN) in resource constrained devices. In this work, we propose a Structured Sparsity Learning (SSL) method to regularize the structures (i.e., filters, channels, filter shapes, and layer depth) of DNNs. SSL can: (1) learn a compact structure from a bigger DNN to reduce computation cost; (2) obtain a hardware-friendly structured sparsity of DNN to efficiently accelerate the DNN's evaluation. Experimental results show that SSL achieves on average 5.1 × and 3.1 × speedups of convolutional layer computation of AlexNet against CPU and GPU, respectively, with off-the-shelf libraries. 
These speedups are about twice the speedups of non-structured sparsity; (3) regularize the DNN structure to improve classification accuracy. The results show that for CIFAR-10, regularization on layer depth reduces a 20-layer Deep Residual Network (ResNet) to 18 layers while improving the accuracy from 91.25% to 92.60%, which is still higher than that of the original ResNet with 32 layers. For AlexNet, SSL reduces the error by 1%.", "We revisit the idea of brain damage, i.e. the pruning of the coefficients of a neural network, and suggest how brain damage can be modified and used to speedup convolutional layers in ConvNets. The approach uses the fact that many efficient implementations reduce generalized convolutions to matrix multiplications. The suggested brain damage process prunes the convolutional kernel tensor in a group-wise fashion. After such pruning, convolutions can be reduced to multiplications of thinned dense matrices, which leads to speedup. We investigate different ways to add group-wise pruning to the learning process, and show that severalfold speedups of convolutional layers can be attained using group-sparsity regularizers. Our approach can adjust the shapes of the receptive fields in the convolutional layers, and even prune excessive feature maps from ConvNets, all in a data-driven way.", "Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. 
Finally, we retrain the network to fine-tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the total number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.", "As the size of Deep Neural Networks (DNNs) continues to grow to increase accuracy and solve more complex problems, their energy footprint also scales. Weight pruning reduces DNN model size and the computation by removing redundant weights. However, we implemented weight pruning for several popular networks on a variety of hardware platforms and observed surprising results. For many networks, the network sparsity caused by weight pruning will actually hurt the overall performance despite large reductions in the model size and required multiply-accumulate operations. Also, encoding the sparse format of pruned networks incurs additional storage space overhead. To overcome these challenges, we propose Scalpel that customizes DNN pruning to the underlying hardware by matching the pruned network structure to the data-parallel hardware organization. Scalpel consists of two techniques: SIMD-aware weight pruning and node pruning. For low-parallelism hardware (e.g., microcontroller), SIMD-aware weight pruning maintains weights in aligned fixed-size groups to fully utilize the SIMD units. For high-parallelism hardware (e.g., GPU), node pruning removes redundant nodes, not redundant weights, thereby reducing computation without sacrificing the dense matrix format. For hardware with moderate parallelism (e.g., desktop CPU), SIMD-aware weight pruning and node pruning are synergistically applied together. Across the microcontroller, CPU and GPU, Scalpel achieves mean speedups of 3.54x, 2.61x, and 1.25x while reducing the model sizes by 88%, 82%, and 53%. 
In comparison, traditional weight pruning achieves mean speedups of 1.90x, 1.06x, 0.41x across the three platforms." ] }
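The filter-level pruning that the related-work paragraph above contrasts with weight-level sparsity can be illustrated with a minimal magnitude-based sketch: rank whole filters by L1 norm and keep the largest, as in the filter-pruning line of work. This is only an illustrative toy, not CompactNet itself, which instead selects filters according to simulated platform latency until a target speedup is met; all names and sizes here are invented.

```python
import random

def l1_score(filt):
    """L1 norm of one filter: sum of absolute weights over all its entries."""
    return sum(abs(x) for ch in filt for row in ch for x in row)

def prune_filters(filters, keep_ratio):
    """Keep the top fraction of filters by L1 magnitude, preserving their order."""
    keep = max(1, int(len(filters) * keep_ratio))
    ranked = sorted(range(len(filters)), key=lambda i: l1_score(filters[i]),
                    reverse=True)
    kept_idx = sorted(ranked[:keep])
    return [filters[i] for i in kept_idx], kept_idx

rng = random.Random(0)
# Toy conv layer: 8 filters, each with 3 input channels of 3 x 3 weights.
filters = [[[[rng.gauss(0, 1) for _ in range(3)] for _ in range(3)]
            for _ in range(3)] for _ in range(8)]
pruned, idx = prune_filters(filters, keep_ratio=0.5)
```

Because entire filters are removed, the surviving layer stays dense, which is why this family of methods yields speedups on hardware that cannot exploit sparse formats; an automatic scheme would repeat this step per layer until its latency or speedup target is satisfied.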
1905.11669
2947859815
Convolutional Neural Network (CNN) based Deep Learning (DL) has achieved great progress in many real-life applications. Meanwhile, due to complex model structures combined with strict latency and memory restrictions, implementing CNN models on resource-limited platforms is becoming more challenging. This work proposes a solution, called CompactNet Project URL: this https URL , which automatically optimizes a pre-trained CNN model on a specific resource-limited platform given a specific target of inference speedup. Guided by a simulator of the target platform, CompactNet progressively trims a pre-trained network by removing certain redundant filters until the target speedup is reached, and generates an optimal platform-specific model while maintaining the accuracy. We evaluate our work on two platforms of a mobile ARM CPU and a machine learning accelerator NPU (Cambricon-1A ISA) on a Huawei Mate10 smartphone. For MobileNetV2, the state-of-the-art slim CNN model designed for embedded platforms, CompactNet achieves up to a 1.8x kernel computation speedup with equal or even higher accuracy for image classification tasks on the Cifar-10 dataset.
Another way to optimize CNN models is to focus on network structure. MobileNets @cite_14 @cite_15 , SqueezeNet @cite_0 and ShuffleNet @cite_27 are typical examples of this kind. They are all general designs that build more efficient CNN models by removing the FC layer, using multiple group convolutions or proposing depth-wise convolution. There is no doubt that such works have achieved great success in saving resources and reducing latency. However, they are not designed for specific platforms, and our experiments show that, when deployed on different backend platforms, they still leave significant room for kernel computation speedup via our CompactNet.
{ "cite_N": [ "@cite_0", "@cite_27", "@cite_15", "@cite_14" ], "mid": [ "2279098554", "2963125010", "2963163009", "2612445135" ], "abstract": [ "Recent research on deep neural networks has focused primarily on improving accuracy. For a given accuracy level, it is typically possible to identify multiple DNN architectures that achieve that accuracy level. With equivalent accuracy, smaller DNN architectures offer at least three advantages: (1) Smaller DNNs require less communication across servers during distributed training. (2) Smaller DNNs require less bandwidth to export a new model from the cloud to an autonomous car. (3) Smaller DNNs are more feasible to deploy on FPGAs and other hardware with limited memory. To provide all of these advantages, we propose a small DNN architecture called SqueezeNet. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. Additionally, with model compression techniques we are able to compress SqueezeNet to less than 0.5MB (510x smaller than AlexNet).", "We introduce an extremely computation-efficient CNN architecture named ShuffleNet, which is designed specially for mobile devices with very limited computing power (e.g., 10-150 MFLOPs). The new architecture utilizes two new operations, pointwise group convolution and channel shuffle, to greatly reduce computation cost while maintaining accuracy. Experiments on ImageNet classification and MS COCO object detection demonstrate the superior performance of ShuffleNet over other structures, e.g. lower top-1 error (absolute 7.8%) than recent MobileNet [12] on ImageNet classification task, under the computation budget of 40 MFLOPs. 
On an ARM-based mobile device, ShuffleNet achieves 13× actual speedup over AlexNet while maintaining comparable accuracy.", "In this paper we describe a new mobile architecture, MobileNetV2, that improves the state of the art performance of mobile models on multiple tasks and benchmarks as well as across a spectrum of different model sizes. We also describe efficient ways of applying these mobile models to object detection in a novel framework we call SSDLite. Additionally, we demonstrate how to build mobile semantic segmentation models through a reduced form of DeepLabv3 which we call Mobile DeepLabv3. MobileNetV2 is based on an inverted residual structure where the shortcut connections are between the thin bottleneck layers. The intermediate expansion layer uses lightweight depthwise convolutions to filter features as a source of non-linearity. Additionally, we find that it is important to remove non-linearities in the narrow layers in order to maintain representational power. We demonstrate that this improves performance and provide an intuition that led to this design. Finally, our approach allows decoupling of the input output domains from the expressiveness of the transformation, which provides a convenient framework for further analysis. We measure our performance on ImageNet [1] classification, COCO object detection [2], VOC image segmentation [3]. We evaluate the trade-offs between accuracy, and number of operations measured by multiply-adds (MAdd), as well as actual latency, and the number of parameters.", "We present a class of efficient models called MobileNets for mobile and embedded vision applications. MobileNets are based on a streamlined architecture that uses depth-wise separable convolutions to build light weight deep neural networks. We introduce two simple global hyper-parameters that efficiently trade off between latency and accuracy. 
These hyper-parameters allow the model builder to choose the right sized model for their application based on the constraints of the problem. We present extensive experiments on resource and accuracy tradeoffs and show strong performance compared to other popular models on ImageNet classification. We then demonstrate the effectiveness of MobileNets across a wide range of applications and use cases including object detection, finegrain classification, face attributes and large scale geo-localization." ] }
1905.11669
2947859815
Convolutional Neural Network (CNN) based Deep Learning (DL) has achieved great progress in many real-life applications. Meanwhile, due to the complex model structures against strict latency and memory restriction, the implementation of CNN models on the resource-limited platforms is becoming more challenging. This work proposes a solution, called CompactNet Project URL: this https URL , which automatically optimizes a pre-trained CNN model on a specific resource-limited platform given a specific target of inference speedup. Guided by a simulator of the target platform, CompactNet progressively trims a pre-trained network by removing certain redundant filters until the target speedup is reached and generates an optimal platform-specific model while maintaining the accuracy. We evaluate our work on two platforms of a mobile ARM CPU and a machine learning accelerator NPU (Cambricon-1A ISA) on a Huawei Mate10 smartphone. For the state-of-the-art slim CNN model made for the embedded platform, MobileNetV2, CompactNet achieves up to a 1.8x kernel computation speedup with equal or even higher accuracy for image classification tasks on the Cifar-10 dataset.
Besides, other approaches based on low-rank approximation @cite_18 @cite_7 use matrix decomposition to reduce the number of operations. The motivation behind such decomposition is to find an approximate matrix that substitutes the original weights. Others, such as @cite_23 @cite_21 , focus on the data type and significantly reduce latency through quantization. All those works are stand-alone optimizations and can be considered complements to our CompactNet.
{ "cite_N": [ "@cite_18", "@cite_21", "@cite_23", "@cite_7" ], "mid": [ "1996901117", "1724438581", "2260663238", "2962988160" ], "abstract": [ "The focus of this paper is speeding up the application of convolutional neural networks. While delivering impressive results across a range of computer vision and machine learning tasks, these networks are computationally demanding, limiting their deployability. Convolutional layers generally consume the bulk of the processing time, and so in this work we present two simple schemes for drastically speeding up these layers. This is achieved by exploiting cross-channel or filter redundancy to construct a low rank basis of filters that are rank-1 in the spatial domain. Our methods are architecture agnostic, and can be easily applied to existing CPU and GPU convolutional frameworks for tuneable speedup performance. We demonstrate this with a real world network designed for scene text character recognition [15], showing a possible 2.5× speedup with no loss in accuracy, and 4.5× speedup with less than 1% drop in accuracy, still achieving state-of-the-art on standard benchmarks.", "Deep convolutional neural networks (CNN) has become the most promising method for object recognition, repeatedly demonstrating record breaking results for image classification and object detection in recent years. However, a very deep CNN generally involves many layers with millions of parameters, making the storage of the network model to be extremely large. This prohibits the usage of deep CNNs on resource limited hardware, especially cell phones or other embedded devices. In this paper, we tackle this model storage issue by investigating information theoretical vector quantization methods for compressing the parameters of CNNs. In particular, we have found in terms of compressing the most storage demanding dense connected layers, vector quantization methods have a clear gain over existing matrix factorization methods. 
Simply applying k-means clustering to the weights or conducting product quantization can lead to a very good balance between model size and recognition accuracy. For the 1000-category classification task in the ImageNet challenge, we are able to achieve 16-24 times compression of the network with only 1% loss of classification accuracy using the state-of-the-art CNN.", "", "Abstract: Although the latest high-end smartphone has powerful CPU and GPU, running deeper convolutional neural networks (CNNs) for complex tasks such as ImageNet classification on mobile devices is challenging. To deploy deep CNNs on mobile devices, we present a simple and effective scheme to compress the entire CNN, which we call one-shot whole network compression. The proposed scheme consists of three steps: (1) rank selection with variational Bayesian matrix factorization, (2) Tucker decomposition on kernel tensor, and (3) fine-tuning to recover accumulated loss of accuracy, and each step can be easily implemented using publicly available tools. We demonstrate the effectiveness of the proposed scheme by testing the performance of various compressed CNNs (AlexNet, VGGS, GoogLeNet, and VGG-16) on the smartphone. Significant reductions in model size, runtime, and energy consumption are obtained, at the cost of small loss in accuracy. In addition, we address the important implementation level issue on 1×1 convolution, which is a key operation of inception module of GoogLeNet as well as CNNs compressed by our proposed scheme." ] }
1905.11873
2953225812
As the size and source of network traffic increase, so does the challenge of monitoring and analyzing network traffic. Therefore, sampling algorithms are often used to alleviate these scalability issues. However, the use of high entropy data streams, through the use of either encryption or compression, further compounds the challenge as current state-of-the-art algorithms cannot accurately and efficiently differentiate between encrypted and compressed packets. In this paper, we propose a novel traffic classification method named High Entropy DistinGuishEr (HEDGE) to distinguish between compressed and encrypted traffic. HEDGE is based on the evaluation of the randomness of the data streams and can be applied to individual packets without the need to have access to the entire stream. The findings from the evaluation show that our approach outperforms current state of the art. We also make available our statistically sound dataset, based on known benchmarks, to the wider research community.
In network traffic analysis approaches, data streams are analysed and classified according to flow and packet information, payload content and statistics (including among others packet size and average times). Based on such information, we can distinguish between: (i) techniques, which determine a set of characteristic features regarding the traffic flow as well as internal packet information and its structure (i.e. it differs depending on the protocol used), and (ii) techniques, which inspect the data payload content and apply relevant methods to extract statistical properties and identify, for example, the type of information being transmitted (e.g. text, image, video, music, compressed or encrypted). We refer the interested reader to @cite_22 @cite_18 for a classification of network traffic analysis models based on input data, applied techniques, and their corresponding outcomes.
{ "cite_N": [ "@cite_18", "@cite_22" ], "mid": [ "2468337398", "2612362135" ], "abstract": [ "With the widespread use of encrypted data transport, network traffic encryption is becoming a standard nowadays. This presents a challenge for traffic measurement, especially for analysis and anomaly detection methods, which are dependent on the type of network traffic. In this paper, we survey existing approaches for classification and analysis of encrypted traffic. First, we describe the most widespread encryption protocols used throughout the Internet. We show that the initiation of an encrypted connection and the protocol structure give away much information for encrypted traffic classification and analysis. Then, we survey payload and feature-based classification methods for encrypted traffic and categorize them using an established taxonomy. The advantage of some of described classification methods is the ability to recognize the encrypted application protocol in addition to the encryption protocol. Finally, we make a comprehensive comparison of the surveyed feature-based classification methods and present their weaknesses and strengths. Copyright © 2015 John Wiley & Sons, Ltd.", "Identifying Internet traffic applications is essential for network security and management. The steady emergence of new Internet applications, together with the use of encryption and obfuscation techniques, ensures that traffic classification remains a hot research topic. Much research has been devoted to this topic by the research community in the last decade. However, an optimal traffic classification model has yet to be defined. Many techniques and formats have been described, with the current literature therefore lacking appropriate benchmarks expressed in a consistent terminology. Moreover, existing surveys are outdated and do not include many recent advances in the field. 
In this article, we present a systematic multilevel taxonomy that covers a broad range of existing and recently proposed methods, together with examples of vendor classification techniques. Our taxonomy assists in defining a consistent terminology. It could be useful in future benchmarking contexts by characterizing and comparing methods at three different levels. From this perspective, we describe key features and provide design hints for future classification models, while emphasizing the main requirements for promoting future research efforts. To motivate researchers and other interested parties, we collect and share data captured from real traffic, using two models to protect data privacy. Copyright © 2014 John Wiley & Sons, Ltd." ] }
1905.11873
2953225812
As the size and source of network traffic increase, so does the challenge of monitoring and analyzing network traffic. Therefore, sampling algorithms are often used to alleviate these scalability issues. However, the use of high entropy data streams, through the use of either encryption or compression, further compounds the challenge as current state-of-the-art algorithms cannot accurately and efficiently differentiate between encrypted and compressed packets. In this paper, we propose a novel traffic classification method named High Entropy DistinGuishEr (HEDGE) to distinguish between compressed and encrypted traffic. HEDGE is based on the evaluation of the randomness of the data streams and can be applied to individual packets without the need to have access to the entire stream. The findings from the evaluation show that our approach outperforms current state of the art. We also make available our statistically sound dataset, based on known benchmarks, to the wider research community.
Encrypted data is known to be more uniformly distributed than unencrypted data @cite_20 @cite_53 . Therefore, a number of approaches rely on this characteristic to locate cryptographic keys stored in memory and file system dumps @cite_42 . The authors generally analysed big data streams, divided them into small blocks, and computed their entropy. Thus, high entropy blocks may indicate the presence of encrypted data. Similarly, the authors in @cite_27 and @cite_1 first computed the entropy of packet payloads and then compared it with the entropy of uniformly randomly distributed sequences of the same length. However, the entropy estimation approach is not effective when the number of samples is small @cite_44 @cite_5 . Moreover, entropy measures are not reliable in the presence of other high-entropy data, such as compressed, MP3 or PDF files.
{ "cite_N": [ "@cite_53", "@cite_42", "@cite_1", "@cite_44", "@cite_27", "@cite_5", "@cite_20" ], "mid": [ "2109394932", "2320415261", "2114552878", "2114771311", "", "2000163531", "56231138" ], "abstract": [ "THE problems of cryptography and secrecy systems furnish an interesting application of communication theory. In this paper a theory of secrecy systems is developed. The approach is on a theoretical level and is intended to complement the treatment found in standard works on cryptography. There, a detailed study is made of the many standard types of codes and ciphers, and of the ways of breaking them. We will be more concerned with the general mathematical structure and properties of secrecy systems.", "Targeted cancer therapies have shown promising results in patients, but few of these drugs provide long-term benefits because tumor cells rapidly develop drug resistance. . show that glioblastoma cells can become resistant to erlotinib, an epidermal growth factor receptor (EGFR)–targeted drug, by eliminating extrachromosomal copies of the mutant EGFR gene. After a period of drug withdrawal, the mutant EGFR gene reappears on extrachromosomal DNA, and the tumor cells become resensitized. The discovery that cancer cells can evade drug therapy by this “hide-and-seek” mechanism may help to optimize the dosing schedule of erlotinib in glioblastoma patients. D. A. Nathanson, B. Gini, J. Mottahedeh, K. Visnyei, T. Koga, G. Gomez, A. Eskin, K. Hwang, J. Wang, K. Masui, A. Paucar, H. Yang, M. Ohashi, S. Zhu, J. Wykosky, R. Reed, S. F. Nelson, T. F. Cloughesy, C. D. James, P. N. Rao, H. I. Kornblum, J. R. Heath, W. K. Cavenee, F. B. Furnari, P. S. Mischel, Targeted therapy resistance mediated by dynamic regulation of extrachromosomal mutant EGFR DNA. Science 343, 72–76 (2014).", "We propose a novel approach for real-time privacy preserving traffic filtering based on entropy estimation. 
The decision of the real-time classifier is based on the entropy of the payload from the first packet of a flow. The aim of the classifier is to detect traffic with encrypted payload. As a proof of concept we show the applicability of our approach as a traffic filter for a Skype detection engine. Traces collected in laboratory and real-world environments show that the traffic is reduced by a reasonable amount while achieving similar or even improved detection quality.", "We present some new results on the nonparametric estimation of entropy and mutual information. First, we use an exact local expansion of the entropy function to prove almost sure consistency and central limit theorems for three of the most commonly used discretized information estimators. The setup is related to Grenander's method of sieves and places no assumptions on the underlying probability measure generating the data. Second, we prove a converse to these consistency theorems, demonstrating that a misapplication of the most common estimation techniques leads to an arbitrarily poor estimate of the true information, even given unlimited data. This \"inconsistency\" theorem leads to an analytical approximation of the bias, valid in surprisingly small sample regimes and more accurate than the usual 1/N formula of Miller and Madow over a large region of parameter space. The two most practical implications of these results are negative: (1) information estimates in a certain data regime are likely contaminated by bias, even if \"bias-corrected\" estimators are used, and (2) confidence intervals calculated by standard techniques drastically underestimate the error of the most common estimation methods. Finally, we note a very useful connection between the bias of entropy estimators and a certain polynomial approximation problem. By casting bias calculation problems in this approximation theory framework, we obtain the best possible generalization of known asymptotic bias results. 
More interesting, this framework leads to an estimator with some nice properties: the estimator comes equipped with rigorous bounds on the maximum error over all possible underlying probability distributions, and this maximum error turns out to be surprisingly small. We demonstrate the application of this new estimator on both real and simulated data.", "", "Consider a sequence p_N of discrete probability measures, supported on m_N points, and assume that we observe N independent and identically distributed (i.i.d.) samples from each p_N . We demonstrate the existence of an estimator of the entropy, H(p_N), which is consistent even if the ratio N/m_N is bounded (and, as a corollary, even if this ratio tends to zero, albeit at a sufficiently slow rate).", "Leakage of confidential information from an organization's networks has become a big threat to its information security. Egress monitoring and filtering have thus become popular for detecting such security breaches. Egress monitoring tools scan outgoing packets for keywords or their combinations present in the confidential documents. These content filtering techniques however fail when the data is encrypted. The solution proposed in this thesis is simple yet an effective approach to prevent information leakage when the data is encrypted. We assume that a policy is in place which disallows encrypted content from specific hosts, ports and applications and wish to detect any violations to this policy. This work aims at analyzing encrypted and unencrypted traffic flows across a gateway and detecting unauthorized encrypted traffic flows. The work discusses a low level approach to detect encryption, based on entropy calculation and packet analysis. The technique is based on the fact that encrypted data consists of a random distribution of symbols whose entropy is expected to be quite high as compared to an unencrypted file. 
Techniques to differentiate between encrypted and high entropy compressed traffic are also discussed. This thesis implements and compares statistical methods for a fast online detection of encrypted traffic from all the other unencrypted traffic flowing across a network." ] }
1905.11873
2953225812
As the size and source of network traffic increase, so does the challenge of monitoring and analyzing network traffic. Therefore, sampling algorithms are often used to alleviate these scalability issues. However, the use of high entropy data streams, through the use of either encryption or compression, further compounds the challenge as current state-of-the-art algorithms cannot accurately and efficiently differentiate between encrypted and compressed packets. In this paper, we propose a novel traffic classification method named High Entropy DistinGuishEr (HEDGE) to distinguish between compressed and encrypted traffic. HEDGE is based on the evaluation of the randomness of the data streams and can be applied to individual packets without the need to have access to the entire stream. The findings from the evaluation show that our approach outperforms current state of the art. We also make available our statistically sound dataset, based on known benchmarks, to the wider research community.
Existing methods generally rely on continuous traffic information to enhance their accuracy by collecting information about packet transmission, the beginning and the end of a connection or of a file, etc. Therefore, real-time monitoring is inefficient using such schemes, since they require the analysis of huge volumes of data. However, these schemes can be useful in studying past events or analysing only specific connections. Hence, to enable real-time monitoring, our aim is to analyse the payload of a random subset of packets and infer as much information as possible. Similar to our approach, the work of @cite_51 presents the first, to the best of our knowledge, technique to distinguish encrypted from compressed unencrypted network transmissions by analysing random packets. The authors applied three machine learning models, with the convolutional neural network (CNN) model achieving the best results. However, more efficient solutions are needed to perform practical real-time traffic classification even in the case of random data stream analysis, as the methods applied in @cite_51 (i.e. CNN and @math -NN) are computationally expensive and require proper training.
{ "cite_N": [ "@cite_51" ], "mid": [ "2801973044" ], "abstract": [ "Data encryption is the primary method of protecting the privacy of consumer device Internet communications from network observers. The ability to automatically detect unencrypted data in network traffic is therefore an essential tool for auditing Internet-connected devices. Existing methods identify network packets containing cleartext but cannot differentiate packets containing encrypted data from packets containing compressed unencrypted data, which can be easily recovered by reversing the compression algorithm. This makes it difficult for consumer protection advocates to identify devices that risk user privacy by sending sensitive data in a compressed unencrypted format. Here, we present the first technique to automatically distinguish encrypted from compressed unencrypted network transmissions on a per-packet basis. We apply three machine learning models and achieve a maximum 66.9% accuracy with a convolutional neural network trained on raw packet data. This result is a baseline for this previously unstudied machine learning problem, which we hope will motivate further attention and accuracy improvements. To facilitate continuing research on this topic, we have made our training and test datasets available to the public." ] }
1905.11910
2947541644
In this paper, we introduce Channel-wise recurrent convolutional neural networks (RecNets), a family of novel, compact neural network architectures for computer vision tasks inspired by recurrent neural networks (RNNs). RecNets build upon Channel-wise recurrent convolutional (CRC) layers, a novel type of convolutional layer that splits the input channels into disjoint segments and processes them in a recurrent fashion. In this way, we simulate wide, yet compact models, since the number of parameters is vastly reduced via the parameter sharing of the RNN formulation. Experimental results on the CIFAR-10 and CIFAR-100 image classification tasks demonstrate the superior size-accuracy trade-off of RecNets compared to other compact state-of-the-art architectures.
Since the advent of AlexNet @cite_6 , the first deep CNN to advance the state-of-the-art in a highly challenging image classification task, building deeper and wider neural networks has been the main research direction in order to further improve performance in a variety of visual recognition tasks. For instance, top results in the image classification task of the ILSVRC2017 challenge were achieved using an ensemble of DualPathNets @cite_36 with the number of layers ranging from 98 to 131 and the largest model requiring 304 MB of storage and 16 GFLOPs of computation. While such requirements can easily be met by server-class computing systems, this is not the case for less powerful mobile or embedded devices, thus limiting the deployment of state-of-the-art models in many real-life applications. The limitation of computational resources has inspired researchers to follow two main directions:
{ "cite_N": [ "@cite_36", "@cite_6" ], "mid": [ "2964166828", "2163605009" ], "abstract": [ "In this work, we present a simple, highly efficient and modularized Dual Path Network (DPN) for image classification which presents a new topology of connection paths internally. By revealing the equivalence of the state-of-the-art Residual Network (ResNet) and Densely Convolutional Network (DenseNet) within the HORNN framework, we find that ResNet enables feature re-usage while DenseNet enables new features exploration which are both important for learning good representations. To enjoy the benefits from both path topologies, our proposed Dual Path Network shares common features while maintaining the flexibility to explore new features through dual path architectures. Extensive experiments on three benchmark datasets, ImagNet-1k, Places365 and PASCAL VOC, clearly demonstrate superior performance of the proposed DPN over state-of-the-arts. In particular, on the ImagNet-1k dataset, a shallow DPN surpasses the best ResNeXt-101(64x4d) with 26% smaller model size, 25% less computational cost and 8% lower memory consumption, and a deeper DPN (DPN-131) further pushes the state-of-the-art single model performance with about 2 times faster training speed. Experiments on the Places365 large-scale scene dataset, PASCAL VOC detection dataset, and PASCAL VOC segmentation dataset also demonstrate its consistently better performance than DenseNet, ResNet and the latest ResNeXt model over various applications.", "We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, which is considerably better than the previous state-of-the-art. 
The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry." ] }
1905.11910
2947541644
In this paper, we introduce Channel-wise recurrent convolutional neural networks (RecNets), a family of novel, compact neural network architectures for computer vision tasks inspired by recurrent neural networks (RNNs). RecNets build upon Channel-wise recurrent convolutional (CRC) layers, a novel type of convolutional layer that splits the input channels into disjoint segments and processes them in a recurrent fashion. In this way, we simulate wide, yet compact models, since the number of parameters is vastly reduced via the parameter sharing of the RNN formulation. Experimental results on the CIFAR-10 and CIFAR-100 image classification tasks demonstrate the superior size-accuracy trade-off of RecNets compared to other compact state-of-the-art architectures.
Many architectural improvements have been devised to improve the cost efficiency of CNNs, since carefully designed building blocks lead to compact models with compelling accuracy. A characteristic example of such a design is a multi-branched convolution block: the input of the block is split into different (low-dimensional) tensors, each branch performs a transformation on these tensors and the transformed tensors are merged forming the block's output. GoogLeNet @cite_9 and FractalNet @cite_25 are characteristic examples of such a design, while ResNet @cite_8 can be seen as an extreme case where one branch is the identity transform.
{ "cite_N": [ "@cite_9", "@cite_25", "@cite_8" ], "mid": [ "2097117768", "", "2194775991" ], "abstract": [ "We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.", "", "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. 
Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation." ] }
1905.11910
2947541644
In this paper, we introduce Channel-wise recurrent convolutional neural networks (RecNets), a family of novel, compact neural network architectures for computer vision tasks inspired by recurrent neural networks (RNNs). RecNets build upon Channel-wise recurrent convolutional (CRC) layers, a novel type of convolutional layer that splits the input channels into disjoint segments and processes them in a recurrent fashion. In this way, we simulate wide, yet compact models, since the number of parameters is vastly reduced via the parameter sharing of the RNN formulation. Experimental results on the CIFAR-10 and CIFAR-100 image classification tasks demonstrate the superior size-accuracy trade-off of RecNets compared to other compact state-of-the-art architectures.
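As a rough illustration of the CRC idea described above (our simplification, not the authors' implementation): the input channels are split into disjoint segments that are scanned sequentially, with every step reusing the same shared weights, as in a plain RNN.

```python
import numpy as np

def crc_layer(x, W_in, W_rec):
    """Sketch of a channel-wise recurrent layer: channel segments are
    visited in sequence, each step reusing the shared weights W_in
    (input) and W_rec (recurrent), so parameters do not grow with the
    number of segments."""
    seg_size = W_in.shape[1]
    segments = np.split(x, x.shape[0] // seg_size, axis=0)
    h = np.zeros(W_rec.shape[0])
    outputs = []
    for seg in segments:                     # recurrent scan over segments
        h = np.tanh(W_in @ seg.mean(axis=1) + W_rec @ h)
        outputs.append(h)
    return np.stack(outputs)                 # one hidden state per segment

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4))              # 8 channels, 4 spatial positions
out = crc_layer(x, rng.standard_normal((3, 2)), rng.standard_normal((3, 3)))
print(out.shape)  # (4, 3): 4 segments of 2 channels, hidden size 3
```

The parameter saving is the point: a wide layer is simulated by the recurrence while only one (W_in, W_rec) pair is stored, regardless of how many channel segments the input is split into.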
Closer to our work, some recent architectures, such as DenseNets @cite_28 and DualPathNets @cite_36 , improve parameter efficiency by increasing the connectivity between different layers of the network. This connectivity promotes feature reuse and encourages feature exploration by combining features of different layers and receptive fields. Moreover, it alleviates the vanishing-gradient problem, assisting the training of well-performing deep models.
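The dense-connectivity pattern can be sketched as follows; the layers here are hypothetical toy callables, each producing a fixed number of new channels from the concatenation of everything before it.

```python
import numpy as np

def dense_block(x, layers):
    """Minimal dense-connectivity sketch: each layer receives the
    concatenation of the block input and all preceding layers' outputs,
    so an L-layer block has L(L+1)/2 direct connections."""
    features = [x]
    for layer in layers:
        out = layer(np.concatenate(features, axis=0))  # reuse all features
        features.append(out)
    return np.concatenate(features, axis=0)

# Hypothetical "layers": each maps its growing input to 2 new channels.
x = np.ones((3, 5))                                    # 3 channels, 5 positions
layers = [lambda f: f[:2] * 0.5, lambda f: f[:2] + 1.0]
y = dense_block(x, layers)
print(y.shape)  # (7, 5): 3 input + 2 + 2 newly produced channels
```

Because every layer sees all earlier feature maps directly, gradients reach early layers through short paths, which is the mechanism behind the alleviated vanishing-gradient problem mentioned above.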
{ "cite_N": [ "@cite_28", "@cite_36" ], "mid": [ "2963446712", "2964166828" ], "abstract": [ "Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections—one between each layer and its subsequent layer—our network has L(L+1) 2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less memory and computation to achieve high performance. Code and pre-trained models are available at https: github.com liuzhuang13 DenseNet.", "In this work, we present a simple, highly efficient and modularized Dual Path Network (DPN) for image classification which presents a new topology of connection paths internally. By revealing the equivalence of the state-of-the-art Residual Network (ResNet) and Densely Convolutional Network (DenseNet) within the HORNN framework, we find that ResNet enables feature re-usage while DenseNet enables new features exploration which are both important for learning good representations. 
To enjoy the benefits from both path topologies, our proposed Dual Path Network shares common features while maintaining the flexibility to explore new features through dual path architectures. Extensive experiments on three benchmark datasets, ImageNet-1k, Places365 and PASCAL VOC, clearly demonstrate superior performance of the proposed DPN over state-of-the-arts. In particular, on the ImageNet-1k dataset, a shallow DPN surpasses the best ResNeXt-101(64x4d) with 26 smaller model size, 25 less computational cost and 8 lower memory consumption, and a deeper DPN (DPN-131) further pushes the state-of-the-art single model performance with about 2 times faster training speed. Experiments on the Places365 large-scale scene dataset, PASCAL VOC detection dataset, and PASCAL VOC segmentation dataset also demonstrate its consistently better performance than DenseNet, ResNet and the latest ResNeXt model over various applications." ] }
1905.11583
2947263925
A fundamental issue in reinforcement learning algorithms is the balance between exploration of the environment and exploitation of information already obtained by the agent. Exploration, in particular, plays a critical role in both the efficiency and the efficacy of the learning process. However, existing works on exploration rely on task-agnostic designs that perform well in one environment but are ill-suited to another. With the aim of learning an effective and efficient exploration policy in an automated manner, we formalize a feasible metric for measuring the utility of exploration based on counterfactual reasoning. Building on it, we propose an end-to-end algorithm that learns an exploration policy by meta-learning. We demonstrate that our method achieves good results compared to previous works on high-dimensional control tasks in the MuJoCo simulator.
To adapt quickly to learning strategies for different tasks, the idea of meta-learning, or learning to learn, has been studied for decades. To our knowledge, the works most closely related to ours are @cite_1 @cite_5 @cite_3 @cite_0 . @cite_1 proposed a meta-learner that tunes the hyper-parameters of the return function, learning directly via meta-learning to maximize the accumulated reward during a single lifetime of interactions with the task environment. @cite_5 introduced an algorithm based on MAML that learns task-specific exploration strategies from prior experience; in contrast to learning exploration strategies for a variety of environments, we are more interested in learning an exploration strategy that improves efficiency during learning. Our approach is most closely related to the work of @cite_3 , which proposed a framework that improves the sample-efficiency of DDPG on a variety of tasks by offering a meta-reward to the exploration policy. However, exploration should not only serve efficacy but also take efficiency into account. Our approach tries to find the most efficient exploration policy for generating samples that could potentially maximize the expected reward of the agent during off-policy training.
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_1", "@cite_3" ], "mid": [ "2907386760", "2788904251", "2964296021", "2792305967" ], "abstract": [ "We describe MELEE, a meta-learning algorithm for learning a good exploration policy in the interactive contextual bandit setting. Here, an algorithm must take actions based on contexts, and learn based only on a reward signal from the action taken, thereby generating an exploration exploitation trade-off. MELEE addresses this trade-off by learning a good exploration strategy for offline tasks based on synthetic data, on which it can simulate the contextual bandit setting. Based on these simulations, MELEE uses an imitation learning strategy to learn a good exploration policy that can then be applied to true contextual bandit tasks at test time. We compare MELEE to seven strong baseline contextual bandit algorithms on a set of three hundred real-world datasets, on which it outperforms alternatives in most settings, especially when differences in rewards are large. Finally, we demonstrate the importance of having a rich feature representation for learning how to explore.", "Exploration is a fundamental challenge in reinforcement learning (RL). Many of the current exploration methods for deep RL use task-agnostic objectives, such as information gain or bonuses based on state visitation. However, many practical applications of RL involve learning more than a single task, and prior tasks can be used to inform how exploration should be performed in new tasks. In this work, we explore how prior tasks can inform an agent about how to explore effectively in new situations. We introduce a novel gradient-based fast adaptation algorithm -- model agnostic exploration with structured noise (MAESN) -- to learn exploration strategies from prior experience. 
The prior experience is used both to initialize a policy and to acquire a latent exploration space that can inject structured stochasticity into a policy, producing exploration strategies that are informed by prior knowledge and are more effective than random action-space noise. We show that MAESN is more effective at learning exploration strategies when compared to prior meta-RL methods, RL without learned exploration strategies, and task-agnostic exploration methods. We evaluate our method on a variety of simulated tasks: locomotion with a wheeled robot, locomotion with a quadrupedal walker, and object manipulation.", "The goal of reinforcement learning algorithms is to estimate and or optimise the value function. However, unlike supervised learning, no teacher or oracle is available to provide the true value function. Instead, the majority of reinforcement learning algorithms estimate and or optimise a proxy for the value function. This proxy is typically based on a sampled and bootstrapped approximation to the true value function, known as a . The particular choice of return is one of the chief components determining the nature of the algorithm: the rate at which future rewards are discounted; when and how values should be bootstrapped; or even the nature of the rewards themselves. It is well-known that these decisions are crucial to the overall success of RL algorithms. We introduce a novel, gradient-based meta-learning algorithm that is able to adapt the nature of the return, online, whilst interacting and learning from the environment. When applied to 57 games on the Atari 2600 environment over 200 million frames, our algorithm achieved a new state-of-the-art.", "The performance of off-policy learning, including deep Q-learning and deep deterministic policy gradient (DDPG), critically depends on the choice of the exploration policy. 
Existing exploration methods are mostly based on adding noise to the on-going actor policy and can only explore regions close to what the actor policy dictates. In this work, we develop a simple meta-policy gradient algorithm that allows us to adaptively learn the exploration policy in DDPG. Our algorithm allows us to train flexible exploration behaviors that are independent of the actor policy, yielding a that significantly speeds up the learning process. With an extensive study, we show that our method significantly improves the sample-efficiency of DDPG on a variety of reinforcement learning tasks." ] }
1905.11806
2947317857
In a human-machine dialog scenario, deciding the appropriate time for the machine to take the turn is an open research problem. In contrast, humans engaged in conversations are able to timely decide when to interrupt the speaker for competitive or non-competitive reasons. In state-of-the-art turn-by-turn dialog systems the decision on the next dialog action is taken at the end of the utterance. In this paper, we propose a token-by-token prediction of the dialog state from incremental transcriptions of the user utterance. To identify the point of maximal understanding in an ongoing utterance, we a) implement an incremental Dialog State Tracker which is updated on a token basis (iDST) b) re-label the Dialog State Tracking Challenge 2 (DSTC2) dataset and c) adapt it to the incremental turn-taking experimental scenario. The re-labeling consists of assigning a binary value to each token in the user utterance that allows to identify the appropriate point for taking the turn. Finally, we implement an incremental Turn Taking Decider (iTTD) that is trained on these new labels for the turn-taking decision. We show that the proposed model can achieve a better performance compared to a deterministic handcrafted turn-taking algorithm.
To coordinate smooth exchange for speaking turns, @cite_20 made use of prosodic, syntactic and gesture features for detecting suitable feedback response locations in the user speech. To cope with incorrectly segmented utterances, @cite_29 proposed an a posteriori restoration methodology. To better understand the behavior of the human counterpart, @cite_19 tried to simulate the user by creating a model that takes into account both her initial goal and responses during the conversation.
{ "cite_N": [ "@cite_19", "@cite_29", "@cite_20" ], "mid": [ "2952798561", "2777714617", "2087762538" ], "abstract": [ "We introduce end-to-end neural network based models for simulating users of task-oriented dialogue systems. User simulation in dialogue systems is crucial from two different perspectives: (i) automatic evaluation of different dialogue models, and (ii) training task-oriented dialogue systems. We design a hierarchical sequence-to-sequence model that first encodes the initial user goal and system turns into fixed length representations using Recurrent Neural Networks (RNN). It then encodes the dialogue history using another RNN layer. At each turn, user responses are decoded from the hidden representations of the dialogue level RNN. This hierarchical user simulator (HUS) approach allows the model to capture undiscovered parts of the user goal without the need of an explicit dialogue state tracking. We further develop several variants by utilizing a latent variable model to inject random variations into user responses to promote diversity in simulated user responses and a novel goal regularization mechanism to penalize divergence of user responses from the initial user goal. We evaluate the proposed models on movie ticket booking domain by systematically interacting each user simulator with various dialogue system policies trained with different objectives and users.", "Ideally, the users of spoken dialogue systems should be able to speak at their own tempo. Thus, the systems needs to interpret utterances from various users correctly, even when the utterances contain pauses. In response to this issue, we propose an approach based on a posteriori restoration for incorrectly segmented utterances. A crucial part of this approach is to determine whether restoration is required. We use a classification-based approach, adapted to each user. 
We focus on each user’s dialogue tempo, which can be obtained during the dialogue, and determine the correlation between each user’s tempo and the appropriate thresholds for classification. A linear regression function used to convert the tempos into thresholds is also derived. Experimental results show that the proposed user adaptation approach applied to two restoration classification methods, thresholding and decision trees, improves classification accuracies by 3.0 and 7.4 , respectively, in cross validation.", "Traditional dialogue systems use a fixed silence threshold to detect the end of users' turns. Such a simplistic model can result in system behaviour that is both interruptive and unresponsive, whic ..." ] }
1905.11806
2947317857
In a human-machine dialog scenario, deciding the appropriate time for the machine to take the turn is an open research problem. In contrast, humans engaged in conversations are able to timely decide when to interrupt the speaker for competitive or non-competitive reasons. In state-of-the-art turn-by-turn dialog systems the decision on the next dialog action is taken at the end of the utterance. In this paper, we propose a token-by-token prediction of the dialog state from incremental transcriptions of the user utterance. To identify the point of maximal understanding in an ongoing utterance, we a) implement an incremental Dialog State Tracker which is updated on a token basis (iDST) b) re-label the Dialog State Tracking Challenge 2 (DSTC2) dataset and c) adapt it to the incremental turn-taking experimental scenario. The re-labeling consists of assigning a binary value to each token in the user utterance that allows to identify the appropriate point for taking the turn. Finally, we implement an incremental Turn Taking Decider (iTTD) that is trained on these new labels for the turn-taking decision. We show that the proposed model can achieve a better performance compared to a deterministic handcrafted turn-taking algorithm.
To improve the efficiency of the dialogue, @cite_9 defined a turn-taking phenomenon taxonomy, and showed that only some phenomena are worth replicating. ASR and NLU features have been exploited by @cite_4 and @cite_24 in order to detect the end of the turn in an incremental setting. They showed that the combination of prosodic and lexical features can lead to promising results. A turn-taking model based on multitask learning was proposed by @cite_25 , which also took into account the prediction of backchannels and fillers. An incremental turn-taking model with active system barge-in was proposed by @cite_28 , who modeled the turn-taking problem as a Finite State Machine and learned the turn-taking policy by means of reinforcement learning.
{ "cite_N": [ "@cite_4", "@cite_28", "@cite_9", "@cite_24", "@cite_25" ], "mid": [ "1502481855", "1973933596", "2251062710", "1998677696", "2889445100" ], "abstract": [ "We define the task of incremental or 0lag utterance segmentation, that is, the task of segmenting an ongoing speech recognition stream into utterance units, and present first results. We use a combination of hidden event language model, features from an incremental parser, and acoustic prosodic features to train classifiers on real-world conversational data (from the Switchboard corpus). The best classifiers reach an F-score of around 56 , improving over baseline and related work.", "We present a general model and conceptual framework for specifying architectures for incremental processing in dialogue systems, in particular with respect to the topology of the network of modules that make up the system, the way information flows through this network, how information increments are 'packaged', and how these increments are processed by the modules. This model enables the precise specification of incremental systems and hence facilitates detailed comparisons between systems, as well as giving guidance on designing new systems.", "In this paper, a turn-taking phenomenon taxonomy is introduced, organised according to the level of information conveyed. It is aimed to provide a better grasp of the behaviours used by humans while talking to each other, so that they can be methodically replicated in spoken dialogue systems. Five interesting phenomena have been implemented in a simulated environment: the system barge-in with three variants (resulting from either an unclear, an incoherent or a sufficient user message), the feedback and the user barge-in. 
The experiments reported in the paper illustrate that how such phenomena are implemented is a delicate choice as their impact on the system’s performance is variable.", "This paper describes a novel algorithm to dynamically set endpointing thresholds based on a rich set of dialogue features to detect the end of user utterances in a dialogue system. By analyzing the relationship between silences in user's speech to a spoken dialogue system and a wide range of automatically extracted features from discourse, semantics, prosody, timing and speaker characteristics, we found that all features correlate with pause duration and with whether a silence indicates the end of the turn, with semantics and timing being the most informative. Based on these features, the proposed method reduces latency by up to 24 over a fixed threshold baseline. Offline evaluation results were confirmed by implementing the proposed algorithm in the Let's Go system.", "" ] }
1905.11806
2947317857
In a human-machine dialog scenario, deciding the appropriate time for the machine to take the turn is an open research problem. In contrast, humans engaged in conversations are able to timely decide when to interrupt the speaker for competitive or non-competitive reasons. In state-of-the-art turn-by-turn dialog systems the decision on the next dialog action is taken at the end of the utterance. In this paper, we propose a token-by-token prediction of the dialog state from incremental transcriptions of the user utterance. To identify the point of maximal understanding in an ongoing utterance, we a) implement an incremental Dialog State Tracker which is updated on a token basis (iDST) b) re-label the Dialog State Tracking Challenge 2 (DSTC2) dataset and c) adapt it to the incremental turn-taking experimental scenario. The re-labeling consists of assigning a binary value to each token in the user utterance that allows to identify the appropriate point for taking the turn. Finally, we implement an incremental Turn Taking Decider (iTTD) that is trained on these new labels for the turn-taking decision. We show that the proposed model can achieve a better performance compared to a deterministic handcrafted turn-taking algorithm.
Our problem setting is similar to the one posed by @cite_27 , who exploited the ASR and NLU to learn the point of maximal understanding of an ongoing user utterance. In our case, we exploit only the ASR 1-best hypothesis and the re-labeled dataset, and try to predict the dialog state of the full utterance before it has been completed.
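A toy sketch of such an incremental decision loop (our simplification; `score_fn` stands in for a trained iTTD-style model scoring each growing token prefix):

```python
def incremental_turn_taking(tokens, score_fn, threshold=0.8):
    """Token-by-token turn-taking sketch: after each incremental ASR
    token, score_fn returns the probability that the system already
    understands enough of the utterance to take the turn."""
    prefix = []
    for i, tok in enumerate(tokens):
        prefix.append(tok)
        if score_fn(prefix) >= threshold:   # point of maximal understanding
            return i                        # take the turn at this token
    return len(tokens) - 1                  # fall back to end of utterance

# Hypothetical scorer: confident once both slot words have been heard.
score = lambda p: 1.0 if {"cheap", "restaurant"} <= set(p) else 0.2
idx = incremental_turn_taking("i want a cheap restaurant please".split(), score)
print(idx)  # 4: decision taken before the utterance has finished
```

The fallback branch mirrors the turn-by-turn baseline: if no prefix is ever confident enough, the decision degrades gracefully to the end of the utterance.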
{ "cite_N": [ "@cite_27" ], "mid": [ "1986532700" ], "abstract": [ "We investigate novel approaches to responsive overlap behaviors in dialogue systems, opening possibilities for systems to interrupt, acknowledge or complete a user's utterance while it is still in progress. Our specific contributions are a method for determining when a system has reached a point of maximal understanding of an ongoing user utterance, and a prototype implementation that shows how systems can use this ability to strategically initiate system completions of user utterances. More broadly, this framework facilitates the implementation of a range of overlap behaviors that are common in human dialogue, but have been largely absent in dialogue systems." ] }
1905.11780
2952357958
This work investigates how context should be taken into account when performing continuous authentication of a smartphone user based on touchscreen and accelerometer readings extracted from swipe gestures. The study is conducted on the publicly available HMOG dataset consisting of 100 study subjects performing pre-defined reading and navigation tasks while sitting and walking. It is shown that context-specific models are needed for different smartphone usage and human activity scenarios to minimize authentication error. Also, the experimental results suggests that utilization of phone movement improves swipe gesture-based verification performance only when the user is moving.
Traditional "obtrusive" authentication schemes, like passwords, PIN codes and biometrics, do not provide mechanisms to determine whether an active mobile device is being used by the same (or some other) authorized person after the initial access has been granted. Continuous authentication (CA), also referred to as active or implicit authentication, aims at verifying that a device is being used by a legitimate person after login by constantly monitoring the built-in sensor and device usage data, like (partial) face images, touchscreen gestures, device motion, power consumption, in the background (transparently to the user) @cite_9 .
{ "cite_N": [ "@cite_9" ], "mid": [ "2471556897" ], "abstract": [ "Recent developments in sensing and communication technologies have led to an explosion in the use of mobile devices such as smartphones and tablets. With the increase in the use of mobile devices, users must constantly worry about security and privacy, as the loss of a mobile device could compromise personal information. To deal with this problem, continuous authentication systems (also known as active authentication systems) have been proposed, in which users are continuously monitored after initial access to the mobile device. In this article, we provide an overview of different continuous authentication methods on mobile devices. We discuss the merits and drawbacks of the available approaches and identify promising avenues of research in this rapidly evolving field." ] }
1905.11780
2952357958
This work investigates how context should be taken into account when performing continuous authentication of a smartphone user based on touchscreen and accelerometer readings extracted from swipe gestures. The study is conducted on the publicly available HMOG dataset consisting of 100 study subjects performing pre-defined reading and navigation tasks while sitting and walking. It is shown that context-specific models are needed for different smartphone usage and human activity scenarios to minimize authentication error. Also, the experimental results suggests that utilization of phone movement improves swipe gesture-based verification performance only when the user is moving.
Touchscreen gesture-based user verification has been a popular approach in CA @cite_9 . Touch input is directly related to the actual physical interaction with the mobile device, and could thus be used for fast intrusion detection. Most of the existing works have focused on analysing single-finger swipes, i.e. drag and flick, but other single- and multi-finger gestures, like tap typing, pinch and spread, could also be used for touch biometrics. Intuitively, unique phone motion patterns may be observed while the user is operating the touchscreen, so joint analysis of touch and the consequent motion signals has been proposed for CA @cite_5 .
{ "cite_N": [ "@cite_5", "@cite_9" ], "mid": [ "2565429718", "2471556897" ], "abstract": [ "We studied the fusion of three biometric authentication modalities, namely, swiping gestures, typing patterns and the phone movement patterns observed during typing or swiping. A web browser was customized to collect the data generated from the aforementioned modalities over four to seven days in an unconstrained environment. Several features were extracted by using sliding window mechanism for each modality and analyzed by using information gain, correlation, and symmetric uncertainty. Finally, five features from windows of continuous swipes, thirty features from windows of continuously typed letters, and nine features from corresponding phone movement patterns while swiping typing were used to build the authentication system. We evaluated the performance of each modality and their fusion over a dataset of 28 users. The feature-level fusion of swiping and the corresponding phone movement patterns achieved an authentication accuracy of 93.33 , whereas, the score-level fusion of typing behaviors and the corresponding phone movement patterns achieved an authentication accuracy of 89.31 .", "Recent developments in sensing and communication technologies have led to an explosion in the use of mobile devices such as smartphones and tablets. With the increase in the use of mobile devices, users must constantly worry about security and privacy, as the loss of a mobile device could compromise personal information. To deal with this problem, continuous authentication systems (also known as active authentication systems) have been proposed, in which users are continuously monitored after initial access to the mobile device. In this article, we provide an overview of different continuous authentication methods on mobile devices. We discuss the merits and drawbacks of the available approaches and identify promising avenues of research in this rapidly evolving field." ] }
1905.11780
2952357958
This work investigates how context should be taken into account when performing continuous authentication of a smartphone user based on touchscreen and accelerometer readings extracted from swipe gestures. The study is conducted on the publicly available HMOG dataset consisting of 100 study subjects performing pre-defined reading and navigation tasks while sitting and walking. It is shown that context-specific models are needed for different smartphone usage and human activity scenarios to minimize authentication error. Also, the experimental results suggests that utilization of phone movement improves swipe gesture-based verification performance only when the user is moving.
A major limitation of prior works on touch biometrics, and CA in general, has been that phone usage and human activity contexts have not been properly taken into account. It can be expected that touchscreen gestures and phone movement patterns differ significantly depending on whether the user is browsing or reading (phone usage), or stationary or moving (human activity), which suggests that CA systems need to be context-aware. Furthermore, the phone usage context also defines whether authentication should be performed in the first place. For instance, user verification is probably not needed for casual browsing, while it is crucial if private or confidential data is being accessed @cite_11 .
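A minimal sketch of such context-aware dispatch (all names hypothetical): pick a verification model per (usage, activity) context, and skip verification entirely for low-risk usage contexts.

```python
def verify_swipe(features, context, models, skip_contexts={"casual_browsing"}):
    """Hypothetical context-aware CA dispatcher: low-risk usage contexts
    bypass verification; otherwise the model trained for the current
    (usage, activity) pair scores the swipe features."""
    if context["usage"] in skip_contexts:
        return True                              # no verification needed
    model = models[(context["usage"], context["activity"])]
    return model(features) >= 0.5                # accept if genuine score high

# Toy per-context "models" returning a fixed genuine score.
models = {("reading", "sitting"): lambda f: 0.9,
          ("reading", "walking"): lambda f: 0.3}
print(verify_swipe([0.1], {"usage": "reading", "activity": "sitting"}, models))
```

In a real system the context itself (sitting vs. walking, reading vs. navigating) would be inferred from the same accelerometer and usage signals before the model lookup.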
{ "cite_N": [ "@cite_11" ], "mid": [ "2098617375" ], "abstract": [ "Implicit authentication schemes are a secondary authentication mechanism that provides authentication by employing unique patterns of device use that are gathered from smartphone users without requiring deliberate actions. Contemporary implicit authentication schemes operate at the device level such that they neither discriminate between data from different applications nor make any assumption about the nature of the application that the user is currently using. In this paper, we challenge the device-centric approach to implicit authentication on smartphones. We argue that the conventional approach of misuse detection at the device level has inherent limitations for mobile platforms. To this end, we analyze and empirically evaluate the device-centric nature of implicit authentication schemes to show their limitations in terms of detection accuracy, authentication overhead, and fine grained authentication control. To mitigate these limitations and for effective and pragmatic implicit authentication on the mobile platform, we propose a novel application-centric implicit authentication approach. We observe that for implicit authentication, an application knows best on when to authenticate and how to authenticate. Therefore, we delegate the implicit authentication task to the application and let the application provider decide when and how to authenticate a user in order to protect the owner's personal information. Our proposed application-centric implicit authentication approach improves accuracy and provides fine grained authentication control with low authentication overhead. Future research in this domain will benefit from our findings to provide pragmatic implicit authentication solutions." ] }
1905.11780
2952357958
This work investigates how context should be taken into account when performing continuous authentication of a smartphone user based on touchscreen and accelerometer readings extracted from swipe gestures. The study is conducted on the publicly available HMOG dataset consisting of 100 study subjects performing pre-defined reading and navigation tasks while sitting and walking. It is shown that context-specific models are needed for different smartphone usage and human activity scenarios to minimize authentication error. Also, the experimental results suggests that utilization of phone movement improves swipe gesture-based verification performance only when the user is moving.
The preliminary studies @cite_11 @cite_2 @cite_10 @cite_3 have demonstrated that application- or task-specific (phone usage context) modelling can indeed boost the performance of swipe-based CA, while only marginal improvement has been achieved when the human activity context has been considered @cite_3 . So far, human activity context-based models have proven useful only when CA is performed based solely on phone movement patterns @cite_8 or on movement combined with typing (tap gestures) @cite_7 .
{ "cite_N": [ "@cite_7", "@cite_8", "@cite_3", "@cite_2", "@cite_10", "@cite_11" ], "mid": [ "1483327747", "2963930878", "1584195451", "1930853352", "2293043944", "2098617375" ], "abstract": [ "We introduce hand movement, orientation, and grasp (HMOG), a set of behavioral features to continuously authenticate smartphone users. HMOG features unobtrusively capture subtle micro-movement and orientation dynamics resulting from how a user grasps, holds, and taps on the smartphone. We evaluated authentication and biometric key generation (BKG) performance of HMOG features on data collected from 100 subjects typing on a virtual keyboard. Data were collected under two conditions: 1) sitting and 2) walking. We achieved authentication equal error rates (EERs) as low as 7.16 (walking) and 10.05 (sitting) when we combined HMOG, tap, and keystroke features. We performed experiments to investigate why HMOG features perform well during walking. Our results suggest that this is due to the ability of HMOG features to capture distinctive body movements caused by walking, in addition to the hand-movement dynamics from taps. With BKG, we achieved the EERs of 15.1 using HMOG combined with taps. In comparison, BKG using tap, key hold, and swipe features had EERs between 25.7 and 34.2 . We also analyzed the energy consumption of HMOG feature extraction and computation. Our analysis shows that HMOG features extracted at a 16-Hz sensor sampling rate incurred a minor overhead of 7.9 without sacrificing authentication accuracy. 
Two points distinguish our work from current literature: 1) we present the results of a comprehensive evaluation of three types of features (HMOG, keystroke, and tap) and their combinations under the same experimental conditions and 2) we analyze the features from three perspectives (authentication, BKG, and energy consumption on smartphones).", "Authentication of smartphone users is important because a lot of sensitive data is stored in the smartphone and the smartphone is also used to access various cloud data and services. However, smartphones are easily stolen or co-opted by an attacker. Beyond the initial login, it is highly desirable to re-authenticate end-users who are continuing to access security-critical services and data. Hence, this paper proposes a novel authentication system for implicit, continuous authentication of the smartphone user based on behavioral characteristics, by leveraging the sensors already ubiquitously built into smartphones. We propose novel context-based authentication models to differentiate the legitimate smartphone owner versus other users. We systematically show how to achieve high authentication accuracy with different design alternatives in sensor and feature selection, machine learning techniques, context detection and multiple devices. Our system can achieve excellent authentication performance with 98.1 accuracy with negligible system overhead and less than 2.4 battery consumption.", "With increasing privacy concerns and security demands present within mobile devices, behavioral biometric solutions, such as touch based user recognition, have been researched as of recent. However, several vital contextual behavior factors (i.e., screen size, physical and application context) and how those effect user identification performance, remains unaddressed in previous studies. In this paper we first introduce a context-aware mobile user recognition method. 
Then a comparative experiment to evaluate the impacts of these factors in relation to user identification performance is presented. Experimental results have demonstrated that a user's touch screen usage behavior may be affected given different contextual behavior information. Furthermore, several interesting occurrences have been found in the results: 1) screen size of a smartphone device changes the way a user touches and holds the device. A larger screen size will provide more potential methods of interacting with the device and in effect, a higher user recognition accuracy as well; and 2) application context and physical activity context can aid in achieving higher accuracy for user recognition.", "In this paper we will show that context has an influence on the performance of a continuous authentication system. When context is considered we notice that the performance of the system improves by a factor of approximately 3. Even when testing and training are not based on exactly the same task, but on a similar task, we see an improvement of the performance over a system where the context is not included. In fact, we proof that the performance of the system depends on which particular kind of task is used for the training.", "Handheld devices today do not continuously verify the identity of the user while sensitive activities are performed. This enables attackers, who can either compromise the initial password or grab the device after login, full access to sensitive data and applications on the device. To mitigate this risk, we propose continuous user monitoring using a machine learning based approach comprising of an ensemble of three distinct modalities: power consumption, touch gestures, and physical movement. Users perform different activities on different applications: we consider application context when we model user behavior. 
We employ anomaly detection algorithms for each modality and place a bound on the fraction of anomalous events that can be considered \"normal\" for any given user. We evaluated our system using data collected from 73 volunteer participants. We were able to verify that our system is functional in real-time while the end-user was utilizing popular mobile applications.", "Implicit authentication schemes are a secondary authentication mechanism that provides authentication by employing unique patterns of device use that are gathered from smartphone users without requiring deliberate actions. Contemporary implicit authentication schemes operate at the device level such that they neither discriminate between data from different applications nor make any assumption about the nature of the application that the user is currently using. In this paper, we challenge the device-centric approach to implicit authentication on smartphones. We argue that the conventional approach of misuse detection at the device level has inherent limitations for mobile platforms. To this end, we analyze and empirically evaluate the device-centric nature of implicit authentication schemes to show their limitations in terms of detection accuracy, authentication overhead, and fine grained authentication control. To mitigate these limitations and for effective and pragmatic implicit authentication on the mobile platform, we propose a novel application-centric implicit authentication approach. We observe that for implicit authentication, an application knows best on when to authenticate and how to authenticate. Therefore, we delegate the implicit authentication task to the application and let the application provider decide when and how to authenticate a user in order to protect the owner's personal information. Our proposed application-centric implicit authentication approach improves accuracy and provides fine grained authentication control with low authentication overhead. 
Future research in this domain will benefit from our findings to provide pragmatic implicit authentication solutions." ] }
1905.11780
2952357958
This work investigates how context should be taken into account when performing continuous authentication of a smartphone user based on touchscreen and accelerometer readings extracted from swipe gestures. The study is conducted on the publicly available HMOG dataset consisting of 100 study subjects performing pre-defined reading and navigation tasks while sitting and walking. It is shown that context-specific models are needed for different smartphone usage and human activity scenarios to minimize authentication error. Also, the experimental results suggest that utilization of phone movement improves swipe gesture-based verification performance only when the user is moving.
In this work, we investigate the role of context when CA is conducted based on touchscreen and accelerometer readings extracted from swipe gestures. Our experimental analysis is performed on the publicly available HMOG dataset @cite_7 consisting of 100 subjects each performing pre-defined reading and navigation tasks while sitting and walking. We show that both phone usage and human activity context should be considered in swipe gesture-based CA. In addition, our findings suggest that swipe-based CA should rely solely on touch signals when the user is stationary, while inclusion of phone movement patterns improves CA performance only when the user is moving.
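The modality selection described above can be summarized as a simple dispatch on the detected activity context. The sketch below is a hypothetical illustration of that logic (the function name, feature names, and values are invented for this example and are not the actual implementation):

```python
def select_features(activity: str, touch_features: dict, motion_features: dict) -> dict:
    """Pick the feature set for swipe-based continuous authentication based on
    human activity context: touch-only when the user is stationary, touch plus
    phone movement (accelerometer) features when the user is moving."""
    if activity == "sitting":
        return dict(touch_features)
    elif activity == "walking":
        return {**touch_features, **motion_features}
    raise ValueError(f"unknown activity context: {activity}")

# Hypothetical swipe and accelerometer features for one gesture.
touch = {"swipe_speed": 1.2, "pressure": 0.4}
motion = {"accel_var": 0.07}
print(select_features("walking", touch, motion))
```

A separate context-specific verification model would then be trained and applied per (usage, activity) combination, which is the core of the context-aware approach.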
{ "cite_N": [ "@cite_7" ], "mid": [ "1483327747" ], "abstract": [ "We introduce hand movement, orientation, and grasp (HMOG), a set of behavioral features to continuously authenticate smartphone users. HMOG features unobtrusively capture subtle micro-movement and orientation dynamics resulting from how a user grasps, holds, and taps on the smartphone. We evaluated authentication and biometric key generation (BKG) performance of HMOG features on data collected from 100 subjects typing on a virtual keyboard. Data were collected under two conditions: 1) sitting and 2) walking. We achieved authentication equal error rates (EERs) as low as 7.16 (walking) and 10.05 (sitting) when we combined HMOG, tap, and keystroke features. We performed experiments to investigate why HMOG features perform well during walking. Our results suggest that this is due to the ability of HMOG features to capture distinctive body movements caused by walking, in addition to the hand-movement dynamics from taps. With BKG, we achieved the EERs of 15.1 using HMOG combined with taps. In comparison, BKG using tap, key hold, and swipe features had EERs between 25.7 and 34.2 . We also analyzed the energy consumption of HMOG feature extraction and computation. Our analysis shows that HMOG features extracted at a 16-Hz sensor sampling rate incurred a minor overhead of 7.9 without sacrificing authentication accuracy. Two points distinguish our work from current literature: 1) we present the results of a comprehensive evaluation of three types of features (HMOG, keystroke, and tap) and their combinations under the same experimental conditions and 2) we analyze the features from three perspectives (authentication, BKG, and energy consumption on smartphones)." ] }
1905.11880
2949927268
Modern malware can take various forms, and has reached a very high level of sophistication in terms of its penetration, persistence, communication and hiding capabilities. The use of cryptography, and of covert communication channels over public and widely used protocols and services, is becoming a norm. In this work, we start by introducing Resource Identifier Generation Algorithms. These are an extension of a well-known mechanism called Domain Generation Algorithms, which are frequently employed by cybercriminals for bot management and communication. Our extension allows, beyond DNS, the use of other protocols. More concretely, we showcase the exploitation of the InterPlanetary file system (IPFS). This is a solution for the "permanent web", which enjoys a steadily growing community interest and adoption. The IPFS is, in addition, one of the most prominent solutions for blockchain storage. We go beyond the straightforward case of using the IPFS for hosting malicious content, and explore ways in which a botmaster could employ it, to manage her bots, validating our findings experimentally. Finally, we discuss the advantages of our approach for malware authors, its efficacy and highlight its extensibility for other distributed storage services.
Nowadays, most malware-based campaigns rely on the use of botnets, commanded by remote servers (i.e. C &C servers), which send instructions to infected devices @cite_31 . In the past, the mechanism used to find such C &C servers was to hardcode IP addresses in the malware. However, this technique entailed a set of drawbacks for the attackers @cite_0 (e.g. such servers were easy to take down). Therefore, botnets evolved into peer-to-peer (P2P) botnets @cite_35 , which adopted a myriad of techniques such as the fast-flux approach, which imitates content distribution networks by resolving a domain name to multiple IP addresses @cite_38 @cite_4 .
{ "cite_N": [ "@cite_35", "@cite_38", "@cite_4", "@cite_0", "@cite_31" ], "mid": [ "187875600", "80155331", "", "2342718701", "2748868501" ], "abstract": [ "Botnets have recently been identified as one of the most important threats to the security of the Internet. Traditionally, botnets organize themselves in an hierarchical manner with a central command and control location. This location can be statically defined in the bot, or it can be dynamically defined based on a directory server. Presently, the centralized characteristic of botnets is useful to security professionals because it offers a central point of failure for the botnet. In the near future, we believe attackers will move to more resilient architectures. In particular, one class of botnet structure that has entered initial stages of development is peer-to-peer based architectures. In this paper, we present an overview of peer-to-peer botnets. We also present a case study of a Kademlia-based Trojan.Peacomm bot.", "We present the first empirical study of fast-flux service networks (FFSNs), a newly emerging and still not widelyknown phenomenon in the Internet. FFSNs employ DNS to establish a proxy network on compromised machines through which illegal online services can be hosted with very high availability. Through our measurements we show that the threat which FFSNs pose is significant: FFSNs occur on a worldwide scale and already host a substantial percentage of online scams. Based on analysis of the principles of FFSNs, we develop a metric with which FFSNs can be effectively detected. Considering our detection technique we also discuss possible mitigation strategies.", "", "Devices infected with malicious software typically form botnet armies under the influence of one or more command and control (C&C) servers. 
The botnet problem reached such levels where federal law enforcement agencies have to step in and take actions against botnets by disrupting (or “taking down”) their C&Cs, and thus their illicit operations. Lately, more and more private companies have started to independently take action against botnet armies, primarily focusing on their DNS-based C&Cs. While well-intentioned, their C&C takedown methodology is in most cases ad-hoc, and limited by the breadth of knowledge available around the malware that facilitates the botnet. With this paper, we aim to bring order, measure, and reason to the botnet takedown problem. We improve an existing takedown analysis system called rza . Specifically, we examine additional botnet takedowns, enhance the risk calculation to use botnet population counts, and include a detailed discussion of policy improvements that can be made to improve takedowns. As part of our system evaluation, we perform a postmortem analysis of the recent 3322.org, Citadel, and No-IP takedowns.", "The Mirai botnet, composed primarily of embedded and IoT devices, took the Internet by storm in late 2016 when it overwhelmed several high-profile targets with massive distributed denial-of-service (DDoS) attacks. In this paper, we provide a seven-month retrospective analysis of Mirai's growth to a peak of 600k infections and a history of its DDoS victims. By combining a variety of measurement perspectives, we analyze how the botnet emerged, what classes of devices were affected, and how Mirai variants evolved and competed for vulnerable hosts. Our measurements serve as a lens into the fragile ecosystem of IoT devices. We argue that Mirai may represent a sea change in the evolutionary development of botnets--the simplicity through which devices were infected and its precipitous growth, demonstrate that novice malicious techniques can compromise enough low-end devices to threaten even some of the best-defended targets. 
To address this risk, we recommend technical and nontechnical interventions, as well as propose future research directions." ] }
1905.11880
2949927268
Modern malware can take various forms, and has reached a very high level of sophistication in terms of its penetration, persistence, communication and hiding capabilities. The use of cryptography, and of covert communication channels over public and widely used protocols and services, is becoming a norm. In this work, we start by introducing Resource Identifier Generation Algorithms. These are an extension of a well-known mechanism called Domain Generation Algorithms, which are frequently employed by cybercriminals for bot management and communication. Our extension allows, beyond DNS, the use of other protocols. More concretely, we showcase the exploitation of the InterPlanetary file system (IPFS). This is a solution for the "permanent web", which enjoys a steadily growing community interest and adoption. The IPFS is, in addition, one of the most prominent solutions for blockchain storage. We go beyond the straightforward case of using the IPFS for hosting malicious content, and explore ways in which a botmaster could employ it, to manage her bots, validating our findings experimentally. Finally, we discuss the advantages of our approach for malware authors, its efficacy and highlight its extensibility for other distributed storage services.
On top of the previously discussed approaches, malware families use DGAs, which implement a deterministic pseudo-random generator (PRNG) to create a set of domain names @cite_6 @cite_13 . Hence, infected devices check the list of generated domains and perform queries until they find the C &C server, whose location may also change dynamically. In this scenario, blacklisting domains is rendered useless as it entails many practical issues.
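The mechanism can be illustrated with a minimal sketch of a hypothetical seeded DGA (not the algorithm of any specific malware family): bot and botmaster share a secret seed and derive the same date-dependent domain list, so the botmaster only needs to register one of the candidates.

```python
import hashlib
import string

def generate_domains(seed: str, date: str, count: int = 10) -> list:
    """Hypothetical DGA: deterministically derive pseudo-random domain names
    from a shared seed and the current date, so that infected devices and the
    botmaster compute identical candidate lists without communicating."""
    state = f"{seed}:{date}".encode()
    domains = []
    for i in range(count):
        digest = hashlib.sha256(state + str(i).encode()).hexdigest()
        # Map hex digits to lowercase letters to form a 12-character label.
        label = "".join(string.ascii_lowercase[int(c, 16) % 26] for c in digest[:12])
        domains.append(label + ".com")
    return domains

candidates = generate_domains("s3cret", "2019-05-28")
print(candidates)
```

An infected device would iterate over `candidates` and query each name until one resolves to a live C&C server; the next day a fresh list is derived, which is what makes static blacklisting ineffective.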
{ "cite_N": [ "@cite_13", "@cite_6" ], "mid": [ "1981049515", "2487301225" ], "abstract": [ "In this paper, we present FluxBuster, a novel passive DNS traffic analysis system for detecting and tracking malicious flux networks. FluxBuster applies large-scale monitoring of DNS traffic traces generated by recursive DNS (RDNS) servers located in hundreds of different networks scattered across several different geographical locations. Unlike most previous work, our detection approach is not limited to the analysis of suspicious domain names extracted from spam emails or precompiled domain blacklists. Instead, FluxBuster is able to detect malicious flux service networks in-the-wild, i.e., as they are \"accessed” by users who fall victim of malicious content, independently of how this malicious content was advertised. We performed a long-term evaluation of our system spanning a period of about five months. The experimental results show that FluxBuster is able to accurately detect malicious flux networks with a low false positive rate. Furthermore, we show that in many cases FluxBuster is able to detect malicious flux domains several days or even weeks before they appear in public domain blacklists.", "Domain-generation algorithms (DGAs) allow attackers to manage infection-spreading websites and command-and-control (CaC) deployments by altering domain names on a timely basis. DGAs have made the infection and CaC architecture more robust and supportive for attackers. This detailed taxonomy of DGAs highlights the problem and offers solutions to combat DGAs through detection of drive-by download and CaC activity from the compromised machine." ] }
1905.11880
2949927268
Modern malware can take various forms, and has reached a very high level of sophistication in terms of its penetration, persistence, communication and hiding capabilities. The use of cryptography, and of covert communication channels over public and widely used protocols and services, is becoming a norm. In this work, we start by introducing Resource Identifier Generation Algorithms. These are an extension of a well-known mechanism called Domain Generation Algorithms, which are frequently employed by cybercriminals for bot management and communication. Our extension allows, beyond DNS, the use of other protocols. More concretely, we showcase the exploitation of the InterPlanetary file system (IPFS). This is a solution for the "permanent web", which enjoys a steadily growing community interest and adoption. The IPFS is, in addition, one of the most prominent solutions for blockchain storage. We go beyond the straightforward case of using the IPFS for hosting malicious content, and explore ways in which a botmaster could employ it, to manage her bots, validating our findings experimentally. Finally, we discuss the advantages of our approach for malware authors, its efficacy and highlight its extensibility for other distributed storage services.
In general, arithmetic-based DGAs use a PRNG to combine a set of characters (typically ASCII) and create a domain name. In the case of hash-based DGAs, attackers create domains using the hex representation of a hash. To detect both DGA-based families, methods reported in the literature use features such as entropy, length or lexical characteristics to determine whether or not a domain name was generated by a DGA @cite_1 @cite_3 , as well as characteristics such as traffic information (e.g. NXDomain queries or WHOIS information) @cite_20 @cite_10 @cite_22 @cite_11 @cite_30 @cite_29 @cite_2 .
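For instance, the character-distribution entropy feature used by several of these detectors can be computed as follows (a simplified sketch; real systems combine many lexical and traffic-based features):

```python
import math
from collections import Counter

def shannon_entropy(domain: str) -> float:
    """Shannon entropy (bits per character) of a domain's leftmost label.
    Algorithmically generated names tend to score higher than names built
    from natural-language words."""
    label = domain.split(".")[0]
    counts = Counter(label)
    total = len(label)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A DGA-looking label typically shows higher entropy than a human-chosen one.
print(shannon_entropy("xjw4kq9tbz2m.com"), shannon_entropy("google.com"))
```

In practice such a score would be thresholded or fed, together with length and n-gram features, into a classifier that flags likely DGA-generated queries.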
{ "cite_N": [ "@cite_30", "@cite_22", "@cite_29", "@cite_1", "@cite_3", "@cite_2", "@cite_10", "@cite_20", "@cite_11" ], "mid": [ "1983776999", "1561983441", "", "2169172206", "1981294881", "1936523258", "2102283838", "2591856843", "196740607" ], "abstract": [ "Enterprises routinely collect terabytes of security relevant data, e.g., network logs and application logs, for several reasons such as cheaper storage, forensic analysis, and regulatory compliance. Analyzing these big data sets to identify actionable security information and hence to improve enterprise security, however, is a relatively unexplored area. In this paper, we introduce a system to detect malicious domains accessed by an enterprise’s hosts from the enterprise’s HTTP proxy logs. Specifically, we model the detection problem as a graph inference problemwe construct a host-domain graph from proxy logs, seed the graph with minimal ground truth information, and then use belief propagation to estimate the marginal probability of a domain being malicious. Our experiments on data collected at a global enterprise show that our approach scales well, achieves high detection rates with low false positive rates, and identifies previously unknown malicious domains when compared with state-of-the-art systems. Since malware infections inside an enterprise spread primarily via malware domain accesses, our approach can be used to detect and prevent malware infections.", "Many botnet detection systems employ a blacklist of known command and control (C&C) domains to detect bots and block their traffic. Similar to signature-based virus detection, such a botnet detection approach is static because the blacklist is updated only after running an external (and often manual) process of domain discovery. As a response, botmasters have begun employing domain generation algorithms (DGAs) to dynamically produce a large number of random domain names and select a small subset for actual C&C use. 
That is, a C&C domain is randomly generated and used for a very short period of time, thus rendering detection approaches that rely on static domain lists ineffective. Naturally, if we know how a domain generation algorithm works, we can generate the domains ahead of time and still identify and block bot-net C&C traffic. The existing solutions are largely based on reverse engineering of the bot malware executables, which is not always feasible. In this paper we present a new technique to detect randomly generated domains without reversing. Our insight is that most of the DGA-generated (random) domains that a bot queries would result in Non-Existent Domain (NXDomain) responses, and that bots from the same bot-net (with the same DGA algorithm) would generate similar NXDomain traffic. Our approach uses a combination of clustering and classification algorithms. The clustering algorithm clusters domains based on the similarity in the make-ups of domain names as well as the groups of machines that queried these domains. The classification algorithm is used to assign the generated clusters to models of known DGAs. If a cluster cannot be assigned to a known model, then a new model is produced, indicating a new DGA variant or family. We implemented a prototype system and evaluated it on real-world DNS traffic obtained from large ISPs in North America. We report the discovery of twelve DGAs. Half of them are variants of known (botnet) DGAs, and the other half are brand new DGAs that have never been reported before.", "", "In this paper, we examine the challenges faced when evaluating botnet detection systems. Many of these challenges stem from difficulties in obtaining and sharing diverse sets of real network traces, as well as determining a botnet ground truth in such traces. 
On the one hand, there are good reasons why network traces should not be shared freely, such as privacy concerns, but on the other hand, the resulting data scarcity complicates quantitative comparisons to other work and conducting independently repeatable experiments. These challenges are similar to those faced by researchers studying large-scale distributed systems only a few years ago, and researchers were able to overcome many of the challenges by collaborating to create a global testbed, namely PlanetLab. We speculate that a similar system for botnet detection research could help overcome the challenges in this domain, and we briefly discuss the associated research directions.", "Recent botnets such as Conficker, Kraken, and Torpig have used DNS-based \"domain fluxing\" for command-and-control, where each Bot queries for existence of a series of domain names and the owner has to register only one such domain name. In this paper, we develop a methodology to detect such \"domain fluxes\" in DNS traffic by looking for patterns inherent to domain names that are generated algorithmically, in contrast to those generated by humans. In particular, we look at distribution of alphanumeric characters as well as bigrams in all domains that are mapped to the same set of IP addresses. We present and compare the performance of several distance metrics, including K-L distance, Edit distance, and Jaccard measure. We train by using a good dataset of domains obtained via a crawl of domains mapped to all IPv4 address space and modeling bad datasets based on behaviors seen so far and expected. We also apply our methodology to packet traces collected at a Tier-1 ISP and show we can automatically detect domain fluxing as used by Conficker botnet with minimal false positives, in addition to discovering a new botnet within the ISP trace. 
We also analyze a campus DNS trace to detect another unknown botnet exhibiting advanced domain-name generation technique.", "Advanced persistent threat (APT) is a serious threat to the Internet. With the aid of APT malware, attackers can remotely control infected machines and steal sensitive information. DNS is popular for malware to locate command and control (C&C) servers. In this paper, we propose a novel system placed at the network egress point that aims to efficiently and effectively detect APT malware infections based on malicious DNS and traffic analysis. The system uses malicious DNS analysis techniques to detect suspicious APT malware C&C domains, and then analyzes the traffic of the corresponding suspicious IP using the signature-based and anomaly based detection technology. We extracted 14 features based on big data to characterize different properties of malware-related DNS and the ways that they are queried, and we also defined network traffic features that can identify the traffic of compromised clients that have remotely been controlled. We built a reputation engine to compute a reputation score for an IP address using these features vector together. Our experiment was performed at a large local institute network for two months, and all the features were studied with big data, which includes @math million DNS queries. Our security approach cannot only substantially reduce the volume of network traffic that needs to be recorded and analyzed but also improve the sustainability of the system.", "As a key approach to securing large networks, existing anomaly detection techniques focus primarily on network traffic data. However, the sheer volume of such data often renders detailed analysis very expensive and reduces the effectiveness of these tools. In this paper, we propose a light-weight anomaly detection approach based on unproductive DNS traffic, namely, the failed DNS queries, with a novel tool - DNS failure graphs. 
A DNS failure graph captures the interactions between hosts and failed domain names. We apply a graph decomposition algorithm based on the tri-nonnegative matrix factorization technique to iteratively extract coherent co-clusters (dense subgraphs) from DNS failure graphs. By analyzing the co-clusters in the daily DNS failure graphs from a 3-month DNS trace captured at a large campus network, we find these co-clusters represent a variety of anomalous activities, e.g., spamming, trojans, bots, etc.. In addition, these activities often exhibit distinguishable subgraph structures. By exploring the temporal properties of the co-clusters, we show our method can identify new anomalies that likely correspond to unreported domain-flux bots.", "In recent years, an increasing number of botnets use Domain Generation Algorithms (DGAs) to bypass botnet detection systems. DGAs, also referred as “domain fluxing”, has been used since 2004 for botnet controllers, and now become an emerging trend for malware. It can dynamically and frequently generate a large number of random domain names which are used to prevent security systems from detecting and blocking. In this paper, we present a new technique to detect DGAs using DNS NXDomain traffic. Our insight is that every domain name in the domain group generated by one botnet using DGAs is often used for a short period of time, and has similar live time and query style. We look for this pattern in DNS NXDomain traffic to filter out algorithmically generated domains that DGA-based botnets generate. We implemented our protosystem and carry outexperiment at a pilot RDNS of an Internet operator. The results show that our method is of good effectiveness on detecting algorithmically generated domains used by botnet.", "Botnets such as Conficker and Torpig utilize high entropy domains for fluxing and evasion. Bots may query a large number of domains, some of which may fail. 
In this paper, we present techniques where the failed domain queries (NXDOMAIN) may be utilized for: (i) Speeding up the present detection strategies which rely only on successful DNS domains. (ii) Detecting Command and Control (C&C) server addresses through features such as temporal correlation and information entropy of both successful and failed domains. We apply our technique to a Tier-1 ISP dataset obtained from South Asia, and a campus DNS trace, and thus validate our methods by detecting Conficker botnet IPs and other anomalies with a false positive rate as low as 0.02 . Our technique can be applied at the edge of an autonomous system for real-time detection." ] }
1905.11880
2949927268
Modern malware can take various forms, and has reached a very high level of sophistication in terms of its penetration, persistence, communication and hiding capabilities. The use of cryptography, and of covert communication channels over public and widely used protocols and services, is becoming a norm. In this work, we start by introducing Resource Identifier Generation Algorithms. These are an extension of a well-known mechanism called Domain Generation Algorithms, which are frequently employed by cybercriminals for bot management and communication. Our extension allows, beyond DNS, the use of other protocols. More concretely, we showcase the exploitation of the InterPlanetary file system (IPFS). This is a solution for the "permanent web", which enjoys a steadily growing community interest and adoption. The IPFS is, in addition, one of the most prominent solutions for blockchain storage. We go beyond the straightforward case of using the IPFS for hosting malicious content, and explore ways in which a botmaster could employ it, to manage her bots, validating our findings experimentally. Finally, we discuss the advantages of our approach for malware authors, its efficacy and highlight its extensibility for other distributed storage services.
Wordlist-based DGAs appeared to overcome the drawbacks of the previous two DGA families. In this case, attackers adopt English wordlists to generate "human-readable" domains, hindering previous DGA detection approaches. In this context, several neural network-based techniques exhibit good detection accuracy @cite_14 @cite_16 , as well as other novel methods based on metrics such as n-grams and word feature extraction @cite_41 @cite_28 .
{ "cite_N": [ "@cite_41", "@cite_28", "@cite_14", "@cite_16" ], "mid": [ "2894791732", "17316494", "2768793959", "" ], "abstract": [ "Modern malware typically makes use of a domain generation algorithm (DGA) to avoid command and control domains or IPs being seized or sinkholed. This means that an infected system may attempt to access many domains in an attempt to contact the command and control server. Therefore, the automatic detection of DGA domains is an important task, both for the sake of blocking malicious domains and identifying compromised hosts. However, many DGAs use English wordlists to generate plausibly clean-looking domain names; this makes automatic detection difficult. In this work, we devise a notion of difficulty for DGA families called the smashword score; this measures how much a DGA family looks like English words. We find that this measure accurately reflects how much a DGA family's domains look like they are made from natural English words. We then describe our new modeling approach, which is a combination of a novel recurrent neural network architecture with domain registration side information. Our experiments show the model is capable of effectively identifying domains generated by difficult DGA families. Our experiments also show that our model outperforms existing approaches, and is able to reliably detect difficult DGA families such as matsnu, suppobox, rovnix, and others. The model's performance compared to the state of the art is best for DGA families that resemble English words. We believe that this model could either be used in a standalone DGA domain detector---such as an endpoint security application---or alternately the model could be used as a part of a larger malware detection system.", "Modern botnets rely on domain-generation algorithms (DGAs) to build resilient command-and-control infrastructures. 
Given the prevalence of this mechanism, recent work has focused on the analysis of DNS traffic to recognize botnets based on their DGAs. While previous work has concentrated on detection, we focus on supporting intelligence operations. We propose Phoenix, a mechanism that, in addition to telling DGA- and non-DGA-generated domains apart using a combination of string and IP-based features, characterizes the DGAs behind them, and, most importantly, finds groups of DGA-generated domains that are representative of the respective botnets. As a result, Phoenix can associate previously unknown DGA-generated domains to these groups, and produce novel knowledge about the evolving behavior of each tracked botnet. We evaluated Phoenix on 1,153,516 domains, including DGA-generated domains from modern, well-known botnets: without supervision, it correctly distinguished DGA- vs. non-DGA-generated domains in 94.8 percent of the cases, characterized families of domains that belonged to distinct DGAs, and helped researchers “on the field” in gathering intelligence on suspicious domains to identify the correct botnet.", "In recent years, botnets have become a major threat on the Internet. Most sophisticated bots use Domain Generation Algorithms (DGA) to pseudo-randomly generate a large number of domains and select a subset in order to communicate with Command and Control (C&C) server. The basic aim is to avoid blacklisting, sinkholing and evade the security systems. Long Short-Term Memory network (LSTM) provides a mean to combat this botnet type. It operates on raw domains and is amenable to immediate applications. LSTM is however prone to multiclass imbalance problem, which becomes even more significant in DGA malware detection. This is due the fact that many DGA classes have a very little support in the training dataset. This paper presents a novel LSTM.MI algorithm to combine both binary and multiclass classification models, where the original LSTM is adapted to be cost-sensitive. 
The cost items are introduced into the backpropagation learning procedure to take into account the identification importance among classes. Experiments are carried out on a real-world collected dataset. They demonstrate that LSTM.MI provides an improvement of at least 7 in terms of macro-averaging recall and precision as compared to the original LSTM and other state-of-the-art cost-sensitive methods. It is also able to preserve the high accuracy on the non-DGA generated class (0.9849 F1-score), while helping recognize 5 additional bot families.", "" ] }
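One of the word-feature metrics mentioned in the paragraph above can be sketched as a greedy dictionary-coverage score: wordlist DGAs (e.g. suppobox-style families) concatenate real words, so their domains decompose almost fully into dictionary entries, while character-random domains do not. The toy wordlist and the greedy longest-match heuristic are illustrative assumptions, not the cited papers' methods.

```python
def word_coverage(label, wordlist, min_len=3):
    """Fraction of the label coverable by greedy longest-match dictionary words.

    High coverage alone does NOT indicate a benign domain -- wordlist DGAs
    score high too -- so this is one feature among many for a classifier.
    """
    i, covered = 0, 0
    while i < len(label):
        # try the longest candidate substring first, down to min_len chars
        for j in range(len(label), i + min_len - 1, -1):
            if label[i:j] in wordlist:
                covered += j - i
                i = j
                break
        else:
            i += 1  # no dictionary word starts here; skip one character
    return covered / len(label) if label else 0.0

# Illustrative toy wordlist; a real detector would use a full dictionary.
WORDS = {"river", "stone", "mail", "press"}
score_dga = word_coverage("riverstone", WORDS)  # word concatenation -> 1.0
score_rnd = word_coverage("x9k2qpv7zt", WORDS)  # random characters -> 0.0
```

Pairing this score with character-entropy features separates the three regimes: benign names, character-random DGAs, and wordlist DGAs.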
1905.11880
2949927268
Modern malware can take various forms, and has reached a very high level of sophistication in terms of its penetration, persistence, communication and hiding capabilities. The use of cryptography, and of covert communication channels over public and widely used protocols and services, is becoming a norm. In this work, we start by introducing Resource Identifier Generation Algorithms. These are an extension of a well-known mechanism called Domain Generation Algorithms, which are frequently employed by cybercriminals for bot management and communication. Our extension allows, beyond DNS, the use of other protocols. More concretely, we showcase the exploitation of the InterPlanetary file system (IPFS). This is a solution for the "permanent web", which enjoys a steadily growing community interest and adoption. The IPFS is, in addition, one of the most prominent solutions for blockchain storage. We go beyond the straightforward case of using the IPFS for hosting malicious content, and explore ways in which a botmaster could employ it, to manage her bots, validating our findings experimentally. Finally, we discuss the advantages of our approach for malware authors, its efficacy and highlight its extensibility for other distributed storage services.
Regardless of the DGA used, typical botnets use standard Internet protocols for communication. Nevertheless, more sophisticated and original approaches can be found in the case of social networks @cite_25 @cite_27 and blockchain @cite_40 @cite_26 @cite_8 , which entail further detection challenges, since all traffic seems legitimate and is covered under standard channels such as HTTPS. In the case of blockchain networks, the problem is exacerbated due to its inherent properties such as immutability and anonymity.
{ "cite_N": [ "@cite_26", "@cite_8", "@cite_40", "@cite_27", "@cite_25" ], "mid": [ "2885931936", "2795530089", "2621188356", "", "1497347928" ], "abstract": [ "In order for malicious software to receive configuration information or commands, malware needs to be able to locate and connect to its owner. As hard-coded addresses are easy to block and thus render the malware installation inoperable, malware writers have turned to dynamically generated addresses. Domain generation algorithms (DGA) generate a list of candidate domain names, each valid for only a short time, at which the malware installation searches for its command & control (C&C) server. As DGAs generate a large list of potential domains - out of which one or a few is actually in use –, they leave a characteristic trace of many failed DNS lookups (NXDomain) in the network, and in result most DGAs can be efficiently detected. In this paper we describe an entirely new principle of domain generation, actively deployed in the Cerber ransomware, which finds and coordinates with its owner based on transaction information in the bitcoin blockchain. This allows the malware author to dynamically update the location of the server in realtime, and as the malware directly goes to the right location no longer generates a sequence of NXDomain responses. We describe the concept of coordination via the blockchain, and report results on a year-long observation of the assets used in the Cerber campaign.", "Cyberattacks are nowadays moving rapidly. They are customized, multi-vector, staged in multiple flows and targeted. Moreover, new hacking playgrounds appeared to reach mobile network, modern architectures and smart cities. For that purpose, malware use different entry points and plug-ins. In addition, they are now deploying several techniques for obfuscation, camouflage and analysis resistance. 
On the other hand, antiviral protections are positioning innovative approaches exposing malicious indicators and anomalies, revealing assumptions of the limitations of the anti-antiviral mechanisms. Primarily, this paper exposes a state of the art in computer virology and then introduces a new concept to create undetectable malware based on blockchain technology. It summarizes techniques adopted by malicious software to avoid functionalities implemented for viral detection and presents the implementation of new viral techniques that leverage the blockchain network.", "Botnets are the preeminent source of online crime and arguably one of the greatest threats to the Internet infrastructure. In this paper, we present ZombieCoin, a botnet command-and-control (C&C) mechanism that leverages the Bitcoin network. ZombieCoin offers considerable advantages over existing C&C techniques, most notably the fact that Bitcoin is designed to resist the very same takedown campaigns and regulatory processes that are the most often-used methods to combat botnets today. Furthermore, we describe how the Bitcoin network enables novel C&C techniques, which dramatically expand the scope of this threat, including the possibilities of flexible rendezvous scheduling, efficient botnet partitioning, and fine-grained control over bots. We validate our claims by implementing ZombieCoin bots which we then deploy and successfully control over the Bitcoin network. Our findings lead us to believe that Bitcoin-based C&C mechanisms are a highly desirable option that botmasters will pursue in the near future. We hope our study provides a useful first step towards devising effective countermeasures for this threat.", "", "We propose Stegobot, a new generation botnet that communicates over probabilistically unobservable communication channels. It is designed to spread via social malware attacks and steal information from its victims. 
Unlike conventional botnets, Stegobot traffic does not introduce new communication endpoints between bots. Instead, it is based on a model of covert communication over a social-network overlay - bot to botmaster communication takes place along the edges of a social network. Further, bots use image steganography to hide the presence of communication within image sharing behavior of user interaction. We show that it is possible to design such a botnet even with a less than optimal routing mechanism such as restricted flooding. We analyzed a real-world dataset of image sharing between members of an online social network. Analysis of Stegobot's network throughput indicates that stealthy as it is, it is also functionally powerful - capable of channeling fair quantities of sensitive data from its victims to the botmaster at tens of megabytes every month." ] }
1905.11880
2949927268
Modern malware can take various forms, and has reached a very high level of sophistication in terms of its penetration, persistence, communication and hiding capabilities. The use of cryptography, and of covert communication channels over public and widely used protocols and services, is becoming a norm. In this work, we start by introducing Resource Identifier Generation Algorithms. These are an extension of a well-known mechanism called Domain Generation Algorithms, which are frequently employed by cybercriminals for bot management and communication. Our extension allows, beyond DNS, the use of other protocols. More concretely, we showcase the exploitation of the InterPlanetary file system (IPFS). This is a solution for the "permanent web", which enjoys a steadily growing community interest and adoption. The IPFS is, in addition, one of the most prominent solutions for blockchain storage. We go beyond the straightforward case of using the IPFS for hosting malicious content, and explore ways in which a botmaster could employ it, to manage her bots, validating our findings experimentally. Finally, we discuss the advantages of our approach for malware authors, its efficacy and highlight its extensibility for other distributed storage services.
For a detailed overview and classification of methods of how malicious domains can be detected, the interested reader may refer to @cite_34 .
{ "cite_N": [ "@cite_34" ], "mid": [ "2804240301" ], "abstract": [ "Malicious domains are one of the major resources required for adversaries to run attacks over the Internet. Due to the important role of the Domain Name System (DNS), extensive research has been conducted to identify malicious domains based on their unique behavior reflected in different phases of the life cycle of DNS queries and responses. Existing approaches differ significantly in terms of intuitions, data analysis methods as well as evaluation methodologies. This warrants a thorough systematization of the approaches and a careful review of the advantages and limitations of every group. In this article, we perform such an analysis. To achieve this goal, we present the necessary background knowledge on DNS and malicious activities leveraging DNS. We describe a general framework of malicious domain detection techniques using DNS data. Applying this framework, we categorize existing approaches using several orthogonal viewpoints, namely (1) sources of DNS data and their enrichment, (2) data analysis methods, and (3) evaluation strategies and metrics. In each aspect, we discuss the important challenges that the research community should address in order to fully realize the power of DNS data analysis to fight against attacks leveraging malicious domains." ] }
1905.11880
2949927268
Modern malware can take various forms, and has reached a very high level of sophistication in terms of its penetration, persistence, communication and hiding capabilities. The use of cryptography, and of covert communication channels over public and widely used protocols and services, is becoming a norm. In this work, we start by introducing Resource Identifier Generation Algorithms. These are an extension of a well-known mechanism called Domain Generation Algorithms, which are frequently employed by cybercriminals for bot management and communication. Our extension allows, beyond DNS, the use of other protocols. More concretely, we showcase the exploitation of the InterPlanetary file system (IPFS). This is a solution for the "permanent web", which enjoys a steadily growing community interest and adoption. The IPFS is, in addition, one of the most prominent solutions for blockchain storage. We go beyond the straightforward case of using the IPFS for hosting malicious content, and explore ways in which a botmaster could employ it, to manage her bots, validating our findings experimentally. Finally, we discuss the advantages of our approach for malware authors, its efficacy and highlight its extensibility for other distributed storage services.
The InterPlanetary File System (IPFS) @cite_33 is a distributed P2P system for retrieving and sharing IPFS objects. IPFS uses a Merkle Directed Acyclic Graph (DAG), which is a cryptographically authenticated data structure, to address such objects. Therefore, instead of identifying objects by their location (e.g., an HTTPS URL), the system addresses them by a representation of the content itself, which is usually its Base58-encoded SHA-256 hash https://en.bitcoin.it/wiki/Base58Check_encoding#Base58_symbol_chart .
{ "cite_N": [ "@cite_33" ], "mid": [ "1844512392" ], "abstract": [ "The InterPlanetary File System (IPFS) is a peer-to-peer distributed file system that seeks to connect all computing devices with the same system of files. In some ways, IPFS is similar to the Web, but IPFS could be seen as a single BitTorrent swarm, exchanging objects within one Git repository. In other words, IPFS provides a high throughput content-addressed block storage model, with content-addressed hyper links. This forms a generalized Merkle DAG, a data structure upon which one can build versioned file systems, blockchains, and even a Permanent Web. IPFS combines a distributed hashtable, an incentivized block exchange, and a self-certifying namespace. IPFS has no single point of failure, and nodes do not need to trust each other." ] }
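The content-addressing step described above can be sketched as follows: take the SHA-256 digest, prepend the multihash header (0x12 for sha2-256, 0x20 for a 32-byte digest), and Base58-encode the result, which always yields the familiar "Qm..." identifiers. This covers only the hashing/encoding step; real IPFS derives a file's CID from its chunked UnixFS DAG serialization, so the value below is not the CID IPFS would report for this content.

```python
import hashlib

BTC58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58btc(data):
    """Base58 (Bitcoin alphabet); each leading zero byte becomes a '1'."""
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, r = divmod(n, 58)
        out = BTC58[r] + out
    pad = len(data) - len(data.lstrip(b"\x00"))
    return "1" * pad + out

def cidv0_style(block):
    """Base58 of the sha2-256 multihash (0x12 = sha2-256, 0x20 = 32 bytes).

    Illustrative only: IPFS computes a file's CID over its UnixFS DAG
    serialization, not over the raw bytes as done here.
    """
    digest = hashlib.sha256(block).digest()
    return base58btc(bytes([0x12, 0x20]) + digest)

cid = cidv0_style(b"hello world")  # 46-char identifier starting with "Qm"
```

Because the two-byte multihash header is fixed, every such identifier starts with "Qm" and is 46 characters long.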
1905.11596
2947656526
Recommendation models mainly deal with categorical variables, such as user item ID and attributes. Besides the high-cardinality issue, the interactions among such categorical variables are usually long-tailed, with the head made up of highly frequent values and a long tail of rare ones. This phenomenon results in the data sparsity issue, making it essential to regularize the models to ensure generalization. The common practice is to employ grid search to manually tune regularization hyperparameters based on the validation data. However, it requires non-trivial efforts and large computation resources to search the whole candidate space; even so, it may not lead to the optimal choice, for which different parameters should have different regularization strengths. In this paper, we propose a hyperparameter optimization method, LambdaOpt, which automatically and adaptively enforces regularization during training. Specifically, it updates the regularization coefficients based on the performance of validation data. With LambdaOpt, the notorious tuning of regularization hyperparameters can be avoided; more importantly, it allows fine-grained regularization (i.e. each parameter can have an individualized regularization coefficient), leading to better generalized models. We show how to employ LambdaOpt on matrix factorization, a classical model that is representative of a large family of recommender models. Extensive experiments on two public benchmarks demonstrate the superiority of our method in boosting the performance of top-K recommendation.
* . The closest work is SGDA @cite_31 , where adaptive regularization for rating prediction is achieved by alternating optimization for model parameters @math and regularization coefficients @math . A similar validation-set-based alternating optimization method has also been proposed in @cite_4 . Both works focused on the reduced computation complexity while ignoring the potential performance boost. @cite_31 only considers dimension-wise @math , which might be the reason why the algorithm does not outperform the best fixed @math algorithm in the reported experimental results. Instead, our work shows the effectiveness of incorporating fine-grained regularization. Besides, our method is more generic in terms of giving practitioners more freedom in optimizer choices, while SGDA @cite_31 applies only to SGD optimizers.
{ "cite_N": [ "@cite_31", "@cite_4" ], "mid": [ "2112519400", "2286376758" ], "abstract": [ "Many factorization models like matrix or tensor factorization have been proposed for the important application of recommender systems. The success of such factorization models depends largely on the choice of good values for the regularization parameters. Without a careful selection they result in poor prediction quality as they either underfit or overfit the data. Regularization values are typically determined by an expensive search that requires learning the model parameters several times: once for each tuple of candidate values for the regularization parameters. In this paper, we present a new method that adapts the regularization automatically while training the model parameters. To achieve this, we optimize simultaneously for two criteria: (1) as usual the model parameters for the regularized objective and (2) the regularization of future parameter updates for the best predictive quality on a validation set. We develop this for the generic model class of Factorization Machines which subsumes a wide variety of factorization models. We show empirically, that the advantages of our adaptive regularization method compared to expensive hyperparameter search do not come to the price of worse predictive quality. In total with our method, learning regularization parameters is as easy as learning model parameters and thus there is no need for any time-consuming search of regularization values because they are found on-the-fly. This makes our method highly attractive for practical use.", "Hyperparameter selection generally relies on running multiple full training trials, with selection based on validation set performance. We propose a gradient-based approach for locally adjusting hyperparameters during training of the model. Hyperparameters are adjusted so as to make the model parameter gradients, and hence updates, more advantageous for the validation cost. 
We explore the approach for tuning regularization hyperparameters and find that in experiments on MNIST, SVHN and CIFAR-10, the resulting regularization levels are within the optimal regions. The additional computational cost depends on how frequently the hyperparameters are trained, but the tested scheme adds only 30 computational overhead regardless of the model size. Since the method is significantly less computationally demanding compared to similar gradient-based approaches to hyperparameter optimization, and consistently finds good hyperparameter values, it can be a useful tool for training neural network models." ] }
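The alternating scheme used by SGDA-style methods can be sketched on a toy linear model: a model step on the regularized training loss, then a coefficient step that follows the validation-loss gradient through the most recent parameter update. The data, step sizes, and one-step gradient approximation below are illustrative assumptions, not SGDA's exact algorithm.

```python
import random

random.seed(0)

# Toy data: y = 2*x0 + noise; x1 is a pure-noise feature, so its weight
# should be driven toward zero. All values are illustrative.
def make_data(n):
    data = []
    for _ in range(n):
        x0, x1 = random.gauss(0, 1), random.gauss(0, 1)
        data.append(((x0, x1), 2.0 * x0 + random.gauss(0, 0.1)))
    return data

train, val = make_data(40), make_data(40)

def grad(theta, batch):
    """Gradient of mean squared error w.r.t. theta (no regularization term)."""
    g = [0.0, 0.0]
    for (x0, x1), y in batch:
        err = theta[0] * x0 + theta[1] * x1 - y
        g[0] += 2.0 * err * x0 / len(batch)
        g[1] += 2.0 * err * x1 / len(batch)
    return g

theta, lam = [0.0, 0.0], [0.1, 0.1]
eta, alpha = 0.05, 0.5  # illustrative step sizes, not tuned values

for _ in range(200):
    # (1) model step on the regularized training loss
    g_tr = grad(theta, train)
    theta = [t - eta * (g + l * t) for t, g, l in zip(theta, g_tr, lam)]
    # (2) coefficient step: d(val loss)/d(lam_i) = g_val_i * d(theta_i)/d(lam_i),
    #     where d(theta_i)/d(lam_i) = -eta * theta_i from the update above
    g_val = grad(theta, val)
    lam = [max(0.0, l - alpha * gv * (-eta * t))
           for l, gv, t in zip(lam, g_val, theta)]
```

The per-dimension `lam` list is the "fine-grained" aspect: each coefficient adapts independently based on validation performance, with no outer search loop.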
1905.11596
2947656526
Recommendation models mainly deal with categorical variables, such as user item ID and attributes. Besides the high-cardinality issue, the interactions among such categorical variables are usually long-tailed, with the head made up of highly frequent values and a long tail of rare ones. This phenomenon results in the data sparsity issue, making it essential to regularize the models to ensure generalization. The common practice is to employ grid search to manually tune regularization hyperparameters based on the validation data. However, it requires non-trivial efforts and large computation resources to search the whole candidate space; even so, it may not lead to the optimal choice, for which different parameters should have different regularization strengths. In this paper, we propose a hyperparameter optimization method, LambdaOpt, which automatically and adaptively enforces regularization during training. Specifically, it updates the regularization coefficients based on the performance of validation data. With LambdaOpt, the notorious tuning of regularization hyperparameters can be avoided; more importantly, it allows fine-grained regularization (i.e. each parameter can have an individualized regularization coefficient), leading to better generalized models. We show how to employ LambdaOpt on matrix factorization, a classical model that is representative of a large family of recommender models. Extensive experiments on two public benchmarks demonstrate the superiority of our method in boosting the performance of top-K recommendation.
* . Finding good regularization coefficients can be part of the overall hyperparameter optimization (HO). Typically, grid-search-like methods are used, where people monitor performance on the validation set and choose the best set of hyperparameters from a bunch of candidates. These methods are simple and generic, capable of being applied to any task and any model, ranging from SVMs @cite_1 to decision trees. Such grid or random search could be very time-consuming. Previous works @cite_15 @cite_5 @cite_8 @cite_13 @cite_16 have been dedicated to lowering the nontrivial search cost, along with developing some enhanced toolboxes @cite_18 @cite_30 @cite_34 . However, most of them require multiple full training runs instead of learning to regularize on the fly. Recently, @cite_22 explored bilevel programming to unify gradient-based HO and meta-learning. Applying these methods to tuning the regularization coefficients for recommendation might not work well due to certain characteristics of recommender systems, i.e., the data sparsity issue. In contrast, our algorithms are tailored to recommendation, where users and items are highly heterogeneous.
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_22", "@cite_8", "@cite_1", "@cite_34", "@cite_5", "@cite_15", "@cite_16", "@cite_13" ], "mid": [ "2189149359", "2136213019", "2963306862", "", "2109943925", "24956509", "", "2950182411", "", "" ], "abstract": [ "Sequential model-based optimization (also known as Bayesian optimization) is one of the most efficient methods (per function evaluation) of function minimization. This efficiency makes it appropriate for optimizing the hyperparameters of machine learning algorithms that are slow to train. The Hyperopt library provides algorithms and parallelization infrastructure for performing hyperparameter optimization (model selection) in Python. This paper presents an introductory tutorial on the usage of the Hyperopt library, including the description of search spaces, minimization (in serial and parallel), and the analysis of the results collected in the course of minimization. The paper closes with some discussion of ongoing and future work.", "BayesOpt is a library with state-of-the-art Bayesian optimization methods to solve nonlinear optimization, stochastic bandits or sequential experimental design problems. Bayesian optimization characterized for being sample efficient as it builds a posterior distribution to capture the evidence and prior knowledge of the target function. Built in standard C++, the library is extremely efficient while being portable and flexible. It includes a common interface for C, C++, Python, Matlab and Octave.", "", "", "Support vector machine (SVM) is a popular technique for classication. However, beginners who are not familiar with SVM often get unsatisfactory results since they miss some easy but signicant steps. In this guide, we propose a simple procedure, which usually gives reasonable results.", "Optunity is a free software package dedicated to hyperparameter optimization. 
It contains various types of solvers, ranging from undirected methods to direct search, particle swarm and evolutionary optimization. The design focuses on ease of use, flexibility, code clarity and interoperability with existing software in all machine learning environments. Optunity is written in Python and contains interfaces to environments such as R and MATLAB. Optunity uses a BSD license and is freely available online at this http URL", "", "Machine learning algorithms frequently require careful tuning of model hyperparameters, regularization terms, and optimization parameters. Unfortunately, this tuning is often a \"black art\" that requires expert experience, unwritten rules of thumb, or sometimes brute-force search. Much more appealing is the idea of developing automatic approaches which can optimize the performance of a given learning algorithm to the task at hand. In this work, we consider the automatic tuning problem within the framework of Bayesian optimization, in which a learning algorithm's generalization performance is modeled as a sample from a Gaussian process (GP). The tractable posterior distribution induced by the GP leads to efficient use of the information gathered by previous experiments, enabling optimal choices about what parameters to try next. Here we show how the effects of the Gaussian process prior and the associated inference procedure can have a large impact on the success or failure of Bayesian optimization. We show that thoughtful choices can lead to results that exceed expert-level performance in tuning machine learning algorithms. We also describe new algorithms that take into account the variable cost (duration) of learning experiments and that can leverage the presence of multiple cores for parallel experimentation. 
We show that these proposed algorithms improve on previous automatic procedures and can reach or surpass human expert-level optimization on a diverse set of contemporary algorithms including latent Dirichlet allocation, structured SVMs and convolutional neural networks.", "", "" ] }
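The grid-search baseline discussed above amounts to one full model fit per candidate coefficient followed by validation-set selection, which is why its cost grows linearly with the grid size (and multiplicatively with the number of hyperparameters). A minimal sketch using closed-form 1-d ridge regression; the data and candidate grid are illustrative:

```python
def fit_ridge_1d(xs, ys, lam):
    """Closed-form 1-d ridge regression: w = sum(x*y) / (sum(x*x) + lam)."""
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)

def val_mse(w, xs, ys):
    """Mean squared error of the linear model w*x on a held-out set."""
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Toy train/validation split (illustrative, not from the cited work).
xtr, ytr = [1.0, 2.0, 3.0], [2.1, 3.9, 6.2]
xva, yva = [1.5, 2.5], [3.0, 5.1]

# One full fit per candidate, then pick the best on validation data.
grid = [0.0, 0.01, 0.1, 1.0, 10.0]
best_lam = min(grid, key=lambda l: val_mse(fit_ridge_1d(xtr, ytr, l), xva, yva))
```

Each candidate here costs one closed-form fit; with an iteratively trained recommender, each candidate would instead cost a full training run, which is the expense adaptive methods avoid.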
1905.11596
2947656526
Recommendation models mainly deal with categorical variables, such as user item ID and attributes. Besides the high-cardinality issue, the interactions among such categorical variables are usually long-tailed, with the head made up of highly frequent values and a long tail of rare ones. This phenomenon results in the data sparsity issue, making it essential to regularize the models to ensure generalization. The common practice is to employ grid search to manually tune regularization hyperparameters based on the validation data. However, it requires non-trivial efforts and large computation resources to search the whole candidate space; even so, it may not lead to the optimal choice, for which different parameters should have different regularization strengths. In this paper, we propose a hyperparameter optimization method, LambdaOpt, which automatically and adaptively enforces regularization during training. Specifically, it updates the regularization coefficients based on the performance of validation data. With LambdaOpt, the notorious tuning of regularization hyperparameters can be avoided; more importantly, it allows fine-grained regularization (i.e. each parameter can have an individualized regularization coefficient), leading to better generalized models. We show how to employ LambdaOpt on matrix factorization, a classical model that is representative of a large family of recommender models. Extensive experiments on two public benchmarks demonstrate the superiority of our method in boosting the performance of top-K recommendation.
* . The embedding technique is widely used to project categorical values into a latent vector space @cite_9 . In natural language processing, training large embeddings usually requires suitable regularization @cite_14 . Training recommender models also involves regularizing large embedding matrices, such as the user and item embedding matrices. Although the tasks are different, the basic regularization strategies and their analysis might be similar. A cross-sectional study across them would be interesting and meaningful in terms of deriving a generic regularization method for embeddings. Since parameter initialization can be regarded as a special form of regularization, embedding initialization methods like @cite_33 are also worth exploring.
{ "cite_N": [ "@cite_9", "@cite_14", "@cite_33" ], "mid": [ "2464785945", "2963840901", "2941014662" ], "abstract": [ "We map categorical variables in a function approximation problem into Euclidean spaces, which are the entity embeddings of the categorical variables. The mapping is learned by a neural network during the standard supervised training process. Entity embedding not only reduces memory usage and speeds up neural networks compared with one-hot encoding, but more importantly by mapping similar values close to each other in the embedding space it reveals the intrinsic properties of the categorical variables. We applied it successfully in a recent Kaggle competition and were able to reach the third position with relative simple features. We further demonstrate in this paper that entity embedding helps the neural network to generalize better when the data is sparse and statistics is unknown. Thus it is especially useful for datasets with lots of high cardinality features, where other methods tend to overfit. We also demonstrate that the embeddings obtained from the trained neural network boost the performance of all tested machine learning methods considerably when used as the input features instead. As entity embedding defines a distance measure for categorical variables it can be used for visualizing categorical data and for data clustering.", "This paper aims to compare different regularization strategies to address a common phenomenon, severe overfitting, in embedding-based neural networks for NLP. We chose two widely studied neural models and tasks as our testbed. We tried several frequently applied or newly proposed regularization strategies, including penalizing weights (embeddings excluded), penalizing embeddings, reembedding words, and dropout. We also emphasized on incremental hyperparameter tuning, and combining different regularizations. The results provide a picture on tuning hyperparameters for neural NLP models.", "Click-through rate (CTR) prediction has been one of the most central problems in computational advertising. Lately, embedding techniques that produce low-dimensional representations of ad IDs drastically improve CTR prediction accuracies. However, such learning techniques are data demanding and work poorly on new ads with little logging data, which is known as the cold-start problem. In this paper, we aim to improve CTR predictions during both the cold-start phase and the warm-up phase when a new ad is added to the candidate pool. We propose Meta-Embedding, a meta-learning-based approach that learns to generate desirable initial embeddings for new ad IDs. The proposed method trains an embedding generator for new ad IDs by making use of previously learned ads through gradient-based meta-learning. In other words, our method learns how to learn better embeddings. When a new ad comes, the trained generator initializes the embedding of its ID by feeding its contents and attributes. Next, the generated embedding can speed up the model fitting during the warm-up phase when a few labeled examples are available, compared to the existing initialization methods. Experimental results on three real-world datasets showed that Meta-Embedding can significantly improve both the cold-start and warm-up performances for six existing CTR prediction models, ranging from lightweight models such as Factorization Machines to complicated deep models such as PNN and DeepFM. All of the above apply to conversion rate (CVR) predictions as well." ] }
1905.11607
2952805156
We propose a new protocol for the generalized consensus problem in asynchronous systems subject to Byzantine server failures. The protocol solves the consensus problem in a setting in which information about conflict between transactions is available (such information can be in the form of transaction read and write sets). Unlike most prior proposals (for generalized or classical consensus), which use a leader to order transactions, this protocol is leaderless, and relies on non-skipping timestamps for transaction ordering. Being leaderless, the protocol does not need to pause for leader elections. The use of non-skipping timestamps permits servers to commit transactions as soon as they know that no conflicting transaction can be ordered earlier. For n servers of which f may be faulty, this protocol requires n > 4f.
One BFT protocol that does not use leaders or view changes is HoneyBadgerBFT @cite_5 . Unlike most BFT protocols, HoneyBadgerBFT does not assume eventual (or partial) synchrony, but relies on a randomized atomic broadcast protocol with a cryptographic shared coin. HoneyBadgerBFT ensures censorship resistance through a cryptographic subprotocol. Unlike Byblos, HoneyBadgerBFT does not exploit transaction semantics. The RBFT protocol @cite_23 uses multiple leaders, which track one another and provide censorship resistance. It is designed for systems in which clients can have multiple parallel pending requests. Aublin et al. @cite_28 describe a family of protocols, some of which have low (2-message) latency in synchronous executions.
{ "cite_N": [ "@cite_28", "@cite_5", "@cite_23" ], "mid": [ "2124037649", "2534313446", "2013608896" ], "abstract": [ "We present Abstract (ABortable STate mAChine replicaTion), a new abstraction for designing and reconfiguring generalized replicated state machines that are, unlike traditional state machines, allowed to abort executing a client’s request if “something goes wrong.” Abstract can be used to considerably simplify the incremental development of efficient Byzantine fault-tolerant state machine replication ( BFT ) protocols that are notorious for being difficult to develop. In short, we treat a BFT protocol as a composition of Abstract instances. Each instance is developed and analyzed independently and optimized for specific system conditions. We illustrate the power of Abstract through several interesting examples. We first show how Abstract can yield benefits of a state-of-the-art BFT protocol in a less painful and error-prone manner. Namely, we develop AZyzzyva, a new protocol that mimics the celebrated best-case behavior of Zyzzyva using less than 35% of the Zyzzyva code. To cover worst-case situations, our abstraction enables one to use in AZyzzyva any existing BFT protocol. We then present Aliph, a new BFT protocol that outperforms previous BFT protocols in terms of both latency (by up to 360%) and throughput (by up to 30%). Finally, we present R-Aliph, an implementation of Aliph that is robust, that is, whose performance degrades gracefully in the presence of Byzantine replicas and Byzantine clients.", "The surprising success of cryptocurrencies has led to a surge of interest in deploying large scale, highly robust, Byzantine fault tolerant (BFT) protocols for mission-critical applications, such as financial transactions. Although the conventional wisdom is to build atop a (weakly) synchronous protocol such as PBFT (or a variation thereof), such protocols rely critically on network timing assumptions, and only guarantee liveness when the network behaves as expected. We argue these protocols are ill-suited for this deployment scenario. We present an alternative, HoneyBadgerBFT, the first practical asynchronous BFT protocol, which guarantees liveness without making any timing assumptions. We base our solution on a novel atomic broadcast protocol that achieves optimal asymptotic efficiency. We present an implementation and experimental results to show our system can achieve throughput of tens of thousands of transactions per second, and scales to over a hundred nodes on a wide area network. We even conduct BFT experiments over Tor, without needing to tune any parameters. Unlike the alternatives, HoneyBadgerBFT simply does not care about the underlying network.", "Byzantine Fault Tolerant state machine replication (BFT) protocols are replication protocols that tolerate arbitrary faults of a fraction of the replicas. Although significant efforts have been recently made, existing BFT protocols do not provide acceptable performance when faults occur. As we show in this paper, this comes from the fact that all existing BFT protocols targeting high throughput use a special replica, called the primary, which indicates to other replicas the order in which requests should be processed. This primary can be smartly malicious and degrade the performance of the system without being detected by correct replicas. In this paper, we propose a new approach, called RBFT for Redundant-BFT: we execute multiple instances of the same BFT protocol, each with a primary replica executing on a different machine. All the instances order the requests, but only the requests ordered by one of the instances, called the master instance, are actually executed. The performance of the different instances is closely monitored, in order to check that the master instance provides adequate performance. If that is not the case, the primary replica of the master instance is considered malicious and replaced. We implemented RBFT and compared its performance to that of other existing robust protocols. Our evaluation shows that RBFT achieves similar performance as the most robust protocols when there is no failure and that, under faults, its maximum performance degradation is about 3%, whereas it is at least equal to 78% for existing protocols." ] }
1905.11607
2952805156
We propose a new protocol for the generalized consensus problem in asynchronous systems subject to Byzantine server failures. The protocol solves the consensus problem in a setting in which information about conflict between transactions is available (such information can be in the form of transaction read and write sets). Unlike most prior proposals (for generalized or classical consensus), which use a leader to order transactions, this protocol is leaderless, and relies on non-skipping timestamps for transaction ordering. Being leaderless, the protocol does not need to pause for leader elections. The use of non-skipping timestamps permits servers to commit transactions as soon as they know that no conflicting transaction can be ordered earlier. For n servers of which f may be faulty, this protocol requires n > 4f.
As noted, the protocols discussed, with the exception of EPaxos @cite_13 which only tolerates crash failures, do not solve the generalized consensus problem @cite_0 @cite_32 . Abd-El-Malek et al. @cite_11 propose a client-driven quorum-based protocol called Q/U that is very efficient under low contention, but that requires @math and can suffer from livelock due to contention even in synchronous periods. The algorithm is leaderless and uses exponential backoff in the presence of contention. Cowling et al. @cite_33 aim to improve Q/U by reverting to using a leader. Recently, @cite_26 proposed a leader-based Byzantine version of generalized Paxos.
{ "cite_N": [ "@cite_26", "@cite_33", "@cite_32", "@cite_0", "@cite_13", "@cite_11" ], "mid": [ "2746579076", "2129467152", "2144606173", "2106670435", "2067740651", "" ], "abstract": [ "One of the most recent members of the Paxos family of protocols is Generalized Paxos. This variant of Paxos has the characteristic that it departs from the original specification of consensus, allowing for a weaker safety condition where different processes can have different views on a sequence being agreed upon. However, much like the original Paxos counterpart, Generalized Paxos does not have a simple implementation. Furthermore, with the recent practical adoption of Byzantine fault tolerant protocols, it is timely and important to understand how Generalized Paxos can be implemented in the Byzantine model. In this paper, we make two main contributions. First, we provide a description of Generalized Paxos that is easier to understand, based on a simpler specification and the pseudocode for a solution that can be readily implemented. Second, we extend the protocol to the Byzantine fault model.", "There are currently two approaches to providing Byzantine-fault-tolerant state machine replication: a replica-based approach, e.g., BFT, that uses communication between replicas to agree on a proposed ordering of requests, and a quorum-based approach, such as Q/U, in which clients contact replicas directly to optimistically execute operations. Both approaches have shortcomings: the quadratic cost of inter-replica communication is unnecessary when there is no contention, and Q/U requires a large number of replicas and performs poorly under contention. We present HQ, a hybrid Byzantine-fault-tolerant state machine replication protocol that overcomes these problems. HQ employs a lightweight quorum-based protocol when there is no contention, but uses BFT to resolve contention when it arises. Furthermore, HQ uses only 3f + 1 replicas to tolerate f faults, providing optimal resilience to node failures. We implemented a prototype of HQ, and we compare its performance to BFT and Q/U analytically and experimentally. Additionally, in this work we use a new implementation of BFT designed to scale as the number of faults increases. Our results show that both HQ and our new implementation of BFT scale as f increases; additionally our hybrid approach of using BFT to handle contention works well.", "Message ordering is a fundamental abstraction in distributed systems. However, ordering guarantees are usually purely \"syntactic,\" that is, message \"semantics\" is not taken into consideration despite the fact that in several cases semantic information about messages could be exploited to avoid ordering messages unnecessarily. In this paper we define the Generic Broadcast problem, which orders messages only if needed, based on the semantics of the messages. The semantic information about messages is introduced by conflict relations. We show that Reliable Broadcast and Atomic Broadcast are special instances of Generic Broadcast. The paper also presents two algorithms that solve Generic Broadcast.", "Theoretician’s Abstract: Consensus has been regarded as the fundamental problem that must be solved to implement a fault-tolerant distributed system. However, only a weaker problem than traditional consensus need be solved. We generalize the consensus problem to include both traditional consensus and this weaker version. A straightforward generalization of the Paxos consensus algorithm implements general consensus. The generalizations of consensus and of the Paxos algorithm require a mathematical detour de force into a type of object called a command-structure set.", "This paper describes the design and implementation of Egalitarian Paxos (EPaxos), a new distributed consensus algorithm based on Paxos. EPaxos achieves three goals: (1) optimal commit latency in the wide-area when tolerating one and two failures, under realistic conditions; (2) uniform load balancing across all replicas (thus achieving high throughput); and (3) graceful performance degradation when replicas are slow or crash. Egalitarian Paxos is to our knowledge the first protocol to achieve the previously stated goals efficiently---that is, requiring only a simple majority of replicas to be non-faulty, using a number of messages linear in the number of replicas to choose a command, and committing commands after just one communication round (one round trip) in the common case or after at most two rounds in any case. We prove Egalitarian Paxos's properties theoretically and demonstrate its advantages empirically through an implementation running on Amazon EC2.", "" ] }
1905.11678
2947889298
Connectivity between different brain regions is one of the most important properties for classification of brain signals including electroencephalography (EEG). However, how to define the connectivity structure for a given task is still an open problem, because there is no ground truth about how the connectivity structure should be in order to maximize the performance. In this paper, we propose an end-to-end neural network model for EEG classification, which can extract an appropriate multi-layer graph structure and signal features directly from a set of raw EEG signals and perform classification. Experimental results demonstrate that our method yields improved performance in comparison to the existing approaches where manually defined connectivity structures and signal features are used. Furthermore, we show that the graph structure extraction process is reliable in terms of consistency, and the learned graph structures make sense from a neuroscientific viewpoint.
Graph neural networks (GNNs) @cite_0 @cite_22 @cite_34 @cite_8 are a type of neural networks for graph data, which can be used to directly model connectivity-based graph representations of brain signals as shown in @cite_35 @cite_2 @cite_3 . However, they have a limitation in that appropriate graph structures still need to be designed manually.
{ "cite_N": [ "@cite_35", "@cite_22", "@cite_8", "@cite_3", "@cite_0", "@cite_2", "@cite_34" ], "mid": [ "2805050064", "2964015378", "2606780347", "", "2907492528", "", "2964321699" ], "abstract": [ "This paper proposes a novel graph signal-based deep learning method for electroencephalography (EEG) and its application to EEG-based video identification. We present new methods to effectively represent EEG data as signals on graphs, and learn them using graph convolutional neural networks. Experimental results for video identification using EEG responses obtained while watching videos show the effectiveness of the proposed approach in comparison to existing methods. Effective schemes for graph signal representation of EEG are also discussed.", "We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin.", "Supervised learning on molecules has incredible potential to be useful in chemistry, drug discovery, and materials science. Luckily, several promising and closely related neural network models invariant to molecular symmetries have already been described in the literature. These models learn a message passing algorithm and aggregation procedure to compute a function of their entire input graph. At this point, the next step is to find a particularly effective variant of this general approach and apply it to chemical prediction benchmarks until we either solve them or reach the limits of the approach. In this paper, we reformulate existing models into a single common framework we call Message Passing Neural Networks (MPNNs) and explore additional novel variations within this framework. Using MPNNs we demonstrate state of the art results on an important molecular property prediction benchmark; these results are strong enough that we believe future work should focus on datasets with larger molecules or more accurate ground truth labels.", "", "Deep learning has revolutionized many machine learning tasks in recent years, ranging from image classification and video processing to speech recognition and natural language understanding. The data in these tasks are typically represented in the Euclidean space. However, there is an increasing number of applications where data are generated from non-Euclidean domains and are represented as graphs with complex relationships and interdependency between objects. The complexity of graph data has imposed significant challenges on existing machine learning algorithms. Recently, many studies on extending deep learning approaches for graph data have emerged. In this survey, we provide a comprehensive overview of graph neural networks (GNNs) in data mining and machine learning fields. We propose a new taxonomy to divide the state-of-the-art graph neural networks into four categories, namely recurrent graph neural networks, convolutional graph neural networks, graph autoencoders and spatial-temporal graph neural networks. We further discuss the applications of graph neural networks across various domains and summarize the open source codes and benchmarks of the existing algorithms on different learning tasks. Finally, we propose potential research directions in this rapidly growing field.", "", "In this work, we are interested in generalizing convolutional neural networks (CNNs) from low-dimensional regular grids, where image, video and speech are represented, to high-dimensional irregular domains, such as social networks, brain connectomes or words' embedding, represented by graphs. We present a formulation of CNNs in the context of spectral graph theory, which provides the necessary mathematical background and efficient numerical schemes to design fast localized convolutional filters on graphs. Importantly, the proposed technique offers the same linear computational complexity and constant learning complexity as classical CNNs, while being universal to any graph structure. Experiments on MNIST and 20NEWS demonstrate the ability of this novel deep learning system to learn local, stationary, and compositional features on graphs." ] }
1905.11707
2946013997
Cloud Service Providers deliver their products in the form of "as-a-Service", which are typically categorized by the level of abstraction. This approach hides the implementation details and shows only functionality to the user. However, the problem is that it is hard to measure the performance of Cloud services, because they behave like black boxes. Especially with Function-as-a-Service it is even more difficult, because it completely hides server and infrastructure management from users by design. Cloud Service Providers usually restrict the maximum size of code, memory and runtime of Cloud Functions. Nevertheless, users need clarification whether more resources are needed to deliver services in high quality. In this regard, we present the architectural design of a new Function-as-a-Service benchmarking tool, which allows users to evaluate the performance of Cloud Functions. Furthermore, the capabilities of the framework are tested on an isolated platform with a specific workload. The results show that users are able to get insights into Function-as-a-Service environments. This, in turn, allows users to identify factors which may slow down or speed up the performance of Cloud Functions.
An initial introduction and guidelines are provided by @cite_13 , who are mainly interested in client-observable characteristics of Cloud Services. The authors cover all aspects of Cloud service benchmarking, including motivation, benchmark design and execution, and the use of the results. They point out that Cloud benchmarking is important as applications depend more and more on Cloud services. However, this work paints a general picture of Cloud benchmarking and does not focus on FaaS in particular or the specific characteristics of this new Cloud service.
{ "cite_N": [ "@cite_13" ], "mid": [ "2742980741" ], "abstract": [ "Cloud service benchmarking can provide important, sometimes surprising insights into the quality of services and leads to a more quality-driven design and engineering of complex software architectures that use such services. Starting with a broad introduction to the field, this book guides readers step-by-step through the process of designing, implementing and executing a cloud service benchmark, as well as understanding and dealing with its results. It covers all aspects of cloud service benchmarking, i.e., both benchmarking the cloud and benchmarking in the cloud, at a basic level. The book is divided into five parts: Part I discusses what cloud benchmarking is, provides an overview of cloud services and their key properties, and describes the notion of a cloud system and cloud-service quality. It also addresses the benchmarking lifecycle and the motivations behind running benchmarks in particular phases of an application lifecycle. Part II then focuses on benchmark design by discussing key objectives (e.g., repeatability, fairness, or understandability) and defining metrics and measurement methods, and by giving advice on developing own measurement methods and metrics. Next, Part III explores benchmark execution and implementation challenges and objectives as well as aspects like runtime monitoring and result collection. Subsequently, Part IV addresses benchmark results, covering topics such as an abstract process for turning data into insights, data preprocessing, and basic data analysis methods. Lastly, Part V concludes the book with a summary, suggestions for further reading and pointers to benchmarking tools available on the Web. The book is intended for researchers and graduate students of computer science and related subjects looking for an introduction to benchmarking cloud services, but also for industry practitioners who are interested in evaluating the quality of cloud services or who want to assess key qualities of their own implementations through cloud-based experiments." ] }
1905.11707
2946013997
Cloud Service Providers deliver their products in the form of "as-a-Service", which are typically categorized by the level of abstraction. This approach hides the implementation details and shows only functionality to the user. However, the problem is that it is hard to measure the performance of Cloud services, because they behave like black boxes. Especially with Function-as-a-Service it is even more difficult, because it completely hides server and infrastructure management from users by design. Cloud Service Providers usually restrict the maximum size of code, memory and runtime of Cloud Functions. Nevertheless, users need clarification whether more resources are needed to deliver services in high quality. In this regard, we present the architectural design of a new Function-as-a-Service benchmarking tool, which allows users to evaluate the performance of Cloud Functions. Furthermore, the capabilities of the framework are tested on an isolated platform with a specific workload. The results show that users are able to get insights into Function-as-a-Service environments. This, in turn, allows users to identify factors which may slow down or speed up the performance of Cloud Functions.
@cite_12 identify the need to benchmark different qualities and features of FaaS platforms and present a set of benchmarking approaches. They present preliminary results of a systematic literature review in support of benchmarking FaaS platforms. The results show that no standardized, industry-wide benchmark suite exists for measuring the performance and capabilities of FaaS implementations. Their results also indicate a lack of benchmarks that observe functions not in isolation but in the shared environment of a Cloud service.
{ "cite_N": [ "@cite_12" ], "mid": [ "2909714733" ], "abstract": [ "The number of available FaaS platforms increases with the rising popularity of a \"serverless\" architecture and development paradigm. As a consequence, a high demand for benchmarking FaaS platforms exists. In response to this demand, new benchmarking approaches that focus on different objectives continuously emerge. In this paper, we call for community participation to conduct a collaborative systematic literature review with the goal to establish a community-driven knowledge base." ] }
1905.11707
2946013997
Cloud Service Providers deliver their products in the form of "as-a-Service", which are typically categorized by the level of abstraction. This approach hides the implementation details and shows only functionality to the user. However, the problem is that it is hard to measure the performance of Cloud services, because they behave like black boxes. Especially with Function-as-a-Service it is even more difficult, because it completely hides server and infrastructure management from users by design. Cloud Service Providers usually restrict the maximum size of code, memory and runtime of Cloud Functions. Nevertheless, users need clarification whether more resources are needed to deliver services in high quality. In this regard, we present the architectural design of a new Function-as-a-Service benchmarking tool, which allows users to evaluate the performance of Cloud Functions. Furthermore, the capabilities of the framework are tested on an isolated platform with a specific workload. The results show that users are able to get insights into Function-as-a-Service environments. This, in turn, allows users to identify factors which may slow down or speed up the performance of Cloud Functions.
@cite_9 analyze several resource-intensive tasks, comparing FaaS models with conventional monolithic algorithm execution. The authors conduct several experiments and compare the performance and other resource-related characteristics. The results demonstrate that solutions for scientific and high-performance computing can be realized with Cloud Functions. The authors mainly focus on compute-intensive tasks (e.g. face detection, calculation of @math , etc.) in the domain of scientific and high-performance computing, but they do not take other FaaS qualities of interest into account. For example, FaaS environments offer a timeout parameter, which specifies the maximum time for code execution. After this time, any code execution stops and the FaaS platform returns an error status. The timeout might become crucial when it comes to Cloud Function pricing, since the Cloud Service Customer (CSC) is charged for the entire time that the Cloud Function executes. Therefore, a lack of precision can lead to early and misleading timeouts during code execution, which, in turn, can result in uncompleted tasks and high costs.
{ "cite_N": [ "@cite_9" ], "mid": [ "2777322518" ], "abstract": [ "The adoption of cloud computing facilities and programming models differs vastly between different application domains. Scalable web applications, low-latency mobile backends and on-demand provisioned databases are typical cases for which cloud services on the platform or infrastructure level exist and are convincing when considering technical and economical arguments. Applications with specific processing demands, including high-performance computing, high-throughput computing and certain flavours of scientific computing, have historically required special configurations such as compute- or memory-optimised virtual machine instances. With the rise of function-level compute instances through Function-as-a-Service (FaaS) models, the fitness of generic configurations needs to be re-evaluated for these applications. We analyse several demanding computing tasks with regards to how FaaS models compare against conventional monolithic algorithm execution. Beside the comparison, we contribute a refined FaaSification process for legacy software and provide a roadmap for future work." ] }
1905.11707
2946013997
Cloud Service Providers deliver their products in the form of "as-a-Service", which are typically categorized by the level of abstraction. This approach hides the implementation details and shows only functionality to the user. However, the problem is that it is hard to measure the performance of Cloud services, because they behave like black boxes. Especially with Function-as-a-Service it is even more difficult, because it completely hides server and infrastructure management from users by design. Cloud Service Providers usually restrict the maximum size of code, memory and runtime of Cloud Functions. Nevertheless, users need clarification whether more resources are needed to deliver services in high quality. In this regard, we present the architectural design of a new Function-as-a-Service benchmarking tool, which allows users to evaluate the performance of Cloud Functions. Furthermore, the capabilities of the framework are tested on an isolated platform with a specific workload. The results show that users are able to get insights into Function-as-a-Service environments. This, in turn, allows users to identify factors which may slow down or speed up the performance of Cloud Functions.
@cite_7 present a performance-oriented serverless computing platform to study serverless implementation considerations. Their prototype provides important insights into how functions are managed and executed in serverless environments. The authors also discuss several implementation challenges, such as function scaling and container discovery, in detail. Furthermore, they propose useful metrics to evaluate the performance of serverless platforms. The presented prototype, and its internal architecture in particular, helped us to design and implement the FaaS Benchmarking Framework.
{ "cite_N": [ "@cite_7" ], "mid": [ "2735574181" ], "abstract": [ "We present the design of a novel performance-oriented serverless computing platform implemented in. NET, deployed in Microsoft Azure, and utilizing Windows containers as function execution environments. Implementation challenges such as function scaling and container discovery, lifecycle, and reuse are discussed in detail. We propose metrics to evaluate the execution performance of serverless platforms and conduct tests on our prototype as well as AWS Lambda, Azure Functions, Google Cloud Functions, and IBM's deployment of Apache OpenWhisk. Our measurements show the prototype achieving greater throughput than other platforms at most concurrency levels, and we examine the scaling and instance expiration trends in the implementations. Additionally, we discuss the gaps and limitations in our current design, propose possible solutions, and highlight future research." ] }
1905.11707
2946013997
@cite_8 present an approach for the performance evaluation of Cloud Functions that takes heterogeneity aspects into account. For this purpose, the authors developed a framework with two suites of compute-intensive benchmarks for the performance evaluation of Cloud Functions. Their results show the heterogeneity of Cloud Function providers, the relation between function size and performance, and how providers interpret resource allocation policies differently. The authors conclude that further research is needed to analyse the impact of parallelism, delays, and warm-up on performance.
{ "cite_N": [ "@cite_8" ], "mid": [ "2786869547" ], "abstract": [ "Cloud Functions, often called Function-as-a-Service (FaaS), pioneered by AWS Lambda, are an increasingly popular method of running distributed applications. As in other cloud offerings, cloud functions are heterogeneous, due to different underlying hardware, runtime systems, as well as resource management and billing models. In this paper, we focus on performance evaluation of cloud functions, taking into account heterogeneity aspects. We developed a cloud function benchmarking framework, consisting of one suite based on Serverless Framework, and one based on HyperFlow. We deployed the CPU-intensive benchmarks: Mersenne Twister and Linpack, and evaluated all the major cloud function providers: AWS Lambda, Azure Functions, Google Cloud Functions and IBM OpenWhisk. We make our results available online and continuously updated. We report on the initial results of the performance evaluation and we discuss the discovered insights on the resource allocation policies." ] }
@cite_1 study the factors that influence the performance of microservices provided by serverless platforms. In detail, the authors focus on infrastructure elasticity, load balancing, provisioning, infrastructure retention, and memory reservation. For this purpose, @cite_1 implement two dedicated functions, which are executed on the platforms Azure Functions and AWS Lambda. This approach is useful when comparing the performance of different FaaS platforms, but it does not take the performance of business-related Cloud Functions into account. The FaaS Benchmarking Framework presented in this paper, in contrast, is able to benchmark Cloud Functions that are not specifically adapted for benchmarking; it is therefore also applicable to the performance evaluation of production-related Cloud Functions, including the underlying FaaS platforms.
{ "cite_N": [ "@cite_1" ], "mid": [ "2804874586" ], "abstract": [ "Serverless computing platforms provide function(s)-as-a-Service (FaaS) to end users while promising reduced hosting costs, high availability, fault tolerance, and dynamic elasticity for hosting individual functions known as microservices. Serverless Computing environments, unlike Infrastructure-as-a-Service (IaaS) cloud platforms, abstract infrastructure management including creation of virtual machines (VMs), operating system containers, and request load balancing from users. To conserve cloud server capacity and energy, cloud providers allow hosting infrastructure to go COLD, deprovisioning containers when service demand is low freeing infrastructure to be harnessed by others. In this paper, we present results from our comprehensive investigation into the factors which influence microservice performance afforded by serverless computing. We examine hosting implications related to infrastructure elasticity, load balancing, provisioning variation, infrastructure retention, and memory reservation size. We identify four states of serverless infrastructure including: provider cold, VM cold, container cold, and warm and demonstrate how microservice performance varies up to 15x based on these states." ] }
@cite_10 present a generic Cloud Performance Model and provide a summary of useful Cloud Performance Metrics (CPM) on three levels: basic performance metrics, Cloud capabilities, and Cloud productivity. The basic performance metrics include traditional metrics such as execution time or speed. The Cloud capabilities describe throughput, bandwidth, and network latency. Finally, Cloud productivity deals with productivity metrics such as Quality of Service (QoS), Service Level Agreements (SLAs), and security. The authors encourage the Cloud community to test Cloud capabilities in big-data analytics and machine learning; in particular, they argue that the Cloud community is short of benchmarking tests. In this regard, the authors motivated us to develop the FaaS Benchmarking Framework and to adapt existing tests for FaaS platforms. This also includes the analysis and feasibility assessment of appropriate FaaS performance metrics.
{ "cite_N": [ "@cite_10" ], "mid": [ "2038249652" ], "abstract": [ "In this paper, we present generic cloud performance models for evaluating Iaas, PaaS, SaaS, and mashup or hybrid clouds. We test clouds with real-life benchmark programs and propose some new performance metrics. Our benchmark experiments are conducted mainly on IaaS cloud platforms over scale-out and scale-up workloads. Cloud benchmarking results are analyzed with the efficiency, elasticity, QoS, productivity, and scalability of cloud performance. Five cloud benchmarks were tested on Amazon IaaS EC2 cloud: namely YCSB, CloudSuite, HiBench, BenchClouds, and TPC-W. To satisfy production services, the choice of scale-up or scale-out solutions should be made primarily by the workload patterns and resources utilization rates required. Scaling-out machine instances have much lower overhead than those experienced in scale-up experiments. However, scaling up is found more cost-effective in sustaining heavier workload. The cloud productivity is greatly attributed to system elasticity, efficiency, QoS and scalability. We find that auto-scaling is easy to implement but tends to over provision the resources. Lower resource utilization rate may result from auto-scaling, compared with using scale-out or scale-up strategies. We also demonstrate that the proposed cloud performance models are applicable to evaluate PaaS, SaaS and hybrid clouds as well." ] }
@cite_2 discuss the use of a microbenchmark to evaluate how different FaaS solutions behave in terms of performance and cost. For this purpose, the authors developed a microbenchmark to investigate the observable behaviour of different FaaS platforms with respect to the compute/memory relation, and the pricing models currently in use.
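The GB-second billing model that such microbenchmarks evaluate (X gigabytes of memory reserved for Y seconds of execution) can be sketched as follows. This is a minimal illustration, not code from the cited work; the default unit price is a placeholder, and real providers additionally round durations up to a billing granularity and charge per-request fees.

```python
def gb_second_cost(memory_mb: int, duration_ms: float,
                   price_per_gb_s: float = 0.0000166667) -> float:
    """Estimate the billed cost of one function invocation.

    Billing follows the GB-second model: reserved memory (in GB)
    multiplied by execution time (in seconds) multiplied by the
    provider's unit price per GB-second (placeholder value here).
    """
    gigabytes = memory_mb / 1024.0
    seconds = duration_ms / 1000.0
    return gigabytes * seconds * price_per_gb_s

# 512 MB reserved for 2 s amounts to exactly 1 GB-second.
print(gb_second_cost(512, 2000))
```

Note that under this model halving the memory reservation halves the cost only if the execution time stays constant; benchmarks like the one above are needed precisely because providers often scale CPU with memory, so smaller reservations can run longer.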
{ "cite_N": [ "@cite_2" ], "mid": [ "2889111823" ], "abstract": [ "The Function as a Service (FaaS) subtype of serverless computing provides the means for abstracting away from servers on which developed software is meant to be executed. It essentially offers an event-driven and scalable environment in which billing is based on the invocation of functions and not on the provisioning of resources. This makes it very attractive for many classes of applications with bursty workload. However, the terms under which FaaS services are structured and offered to consumers uses mechanisms like GB–seconds (that is, X GigaBytes of memory used for Y seconds of execution) that differ from the usual models for compute resources in cloud computing. Aiming to clarify these terms, in this work we develop a microbenchmark that we use to evaluate the performance and cost model of popular FaaS solutions using well known algorithmic tasks. The results of this process show a field still very much under development, and justify the need for further extensive benchmarking of these services." ] }
Finally, the FaaS Benchmarking Framework presented in this paper builds upon previous work by @cite_4 , who present an initial investigation into benchmarking FaaS and outline an architectural design. @cite_4 present a two-tier architecture of a benchmarking framework in which requests are invoked by a sender and processed by a Cloud Function on the CSP platform. This paper takes up the idea of the previous research and introduces an additional third component that enables testers to evaluate the performance of the FaaS platform more precisely. Details about this third component and its improvements for benchmarking FaaS are discussed in Section III.
{ "cite_N": [ "@cite_4" ], "mid": [ "2604298291" ], "abstract": [ "Snafu, or Snake Functions, is a modular system to host, execute and manage language-level functions offered as stateless (micro-)services to diverse external triggers. The system interfaces resemble those of commercial FaaS providers but its implementation provides distinct features which make it overall useful to research on FaaS and prototyping of FaaS-based applications. This paper argues about the system motivation in the presence of already existing alternatives, its design and architecture, the open source implementation and collected metrics which characterise the system." ] }
1905.11734
2946982613
In this paper, we aim at improving human motion prediction during human-robot collaboration in industrial facilities by exploiting contributions from both physical and physiological signals. Improved human-machine collaboration could prove useful in several areas, while it is crucial for interacting robots to understand human movement as soon as possible to avoid accidents and injuries. In this perspective, we propose a novel human-robot interface capable to anticipate the user intention while performing reaching movements on a working bench in order to plan the action of a collaborative robot. The proposed interface can find many applications in the Industry 4.0 framework, where autonomous and collaborative robots will be an essential part of innovative facilities. A motion intention prediction and a motion direction prediction levels have been developed to improve detection speed and accuracy. A Gaussian Mixture Model (GMM) has been trained with IMU and EMG data following an evidence accumulation approach to predict reaching direction. Novel dynamic stopping criteria have been proposed to flexibly adjust the trade-off between early anticipation and accuracy according to the application. The output of the two predictors has been used as external inputs to a Finite State Machine (FSM) to control the behaviour of a physical robot according to user's action or inaction. Results show that our system outperforms previous methods, achieving a real-time classification accuracy of @math after @math from movement onset.
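The evidence-accumulation idea mentioned above can be sketched as follows: per-frame class likelihoods are summed in log-space, and the decision is emitted as soon as the normalized posterior for one class crosses a confidence threshold (a simple fixed-threshold stopping rule). All probabilities, class names, and the threshold below are illustrative, not values from the paper.

```python
import math

def accumulate_evidence(frame_probs, threshold=0.95):
    """Accumulate per-frame class likelihoods as log-evidence and stop
    as soon as one class's normalized posterior exceeds the threshold.

    frame_probs: iterable of dicts mapping class name -> P(frame | class).
    Returns (predicted_class, frames_used), or (None, n) if undecided
    after all n frames.
    """
    evidence = {}
    n = 0
    for n, probs in enumerate(frame_probs, start=1):
        for cls, p in probs.items():
            evidence[cls] = evidence.get(cls, 0.0) + math.log(max(p, 1e-12))
        # Normalize accumulated log-evidence into a posterior (log-sum-exp).
        m = max(evidence.values())
        total = sum(math.exp(v - m) for v in evidence.values())
        posterior = {c: math.exp(v - m) / total for c, v in evidence.items()}
        best = max(posterior, key=posterior.get)
        if posterior[best] >= threshold:   # stopping criterion met
            return best, n
    return None, n

frames = [{"left": 0.6, "right": 0.4}] * 10   # weak but consistent evidence
print(accumulate_evidence(frames))            # -> ('left', 8)
```

A dynamic stopping criterion, as proposed in the paper, would additionally adapt the threshold (or the maximum number of frames) at run time to trade anticipation speed against accuracy; the fixed threshold here is the simplest special case.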
Our work contributes to the field of human motion prediction for manipulation robots in industrial facilities. Several works have focused on reconstructing human motion trajectories during reaching tasks @cite_2 @cite_23 . Most of these systems require knowledge of the end-point of the motion before predicting the whole trajectory. However, reliably predicting the target of a human motion from the early portion of the movement is still a challenging problem, and the performance in reconstructing trajectories strongly depends on the confidence of the target prediction @cite_3 . Mainprice and Berenson @cite_16 proposed a manipulation planning framework that predicts the motion target by means of a Gaussian Mixture Model (GMM) for human-robot collaboration. Once the target has been identified, Gaussian Mixture Regression (GMR) is used to extract the best-fitting motion. The algorithm has been trained with a library of motions built from real Kinect data, achieving 92% classification accuracy. Classification frameworks based on Probabilistic Flow Tubes (PFT) and Bayesian inference @cite_3 @cite_19 have also been proposed to improve prediction speed, using human joint angles from a Vicon motion capture system. However, reliable predictions required processing more than half of the trajectory.
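The likelihood-based target prediction described above can be sketched with one Gaussian per candidate target (the single-component special case of a GMM): fit each target's model on early-movement features, then pick the target whose model best explains a new observation. The features, targets, and numbers below are synthetic placeholders, not data from the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic early-movement features (e.g. initial wrist velocity) for two
# candidate reaching targets; real systems use IMU/EMG or joint-angle data.
samples = {
    "left":  rng.normal([-1.0, 0.5], 0.3, size=(200, 2)),
    "right": rng.normal([1.0, 0.4], 0.3, size=(200, 2)),
}

def fit_gaussian(data):
    """Fit mean and covariance (a one-component GMM) to one target's data."""
    return data.mean(axis=0), np.cov(data, rowvar=False)

def log_likelihood(x, mu, cov):
    """Multivariate normal log-density, up to the shared constant term."""
    diff = x - mu
    return -0.5 * (diff @ np.linalg.solve(cov, diff)
                   + np.log(np.linalg.det(cov)))

models = {name: fit_gaussian(data) for name, data in samples.items()}

def predict_target(x):
    """Pick the target whose model best explains the observed features."""
    return max(models, key=lambda name: log_likelihood(x, *models[name]))

print(predict_target(np.array([-0.9, 0.6])))  # -> left
```

In the full approach, each class model would be a multi-component GMM fitted with EM, and GMR would then regress the remaining trajectory conditioned on the identified target; the argmax-likelihood decision rule stays the same.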
{ "cite_N": [ "@cite_3", "@cite_19", "@cite_23", "@cite_2", "@cite_16" ], "mid": [ "1581800915", "2072084471", "1483019628", "2142846974", "1992461594" ], "abstract": [ "Interest in human-robot coexistence, in which humans and robots share a common work volume, is increasing in manufacturing environments. Efficient work coordination requires both awareness of the human pose and a plan of action for both human and robot agents in order to compute robot motion trajectories that synchronize naturally with human motion. In this paper, we present a data-driven approach that synthesizes anticipatory knowledge of both human motions and subsequent action steps in order to predict in real-time the intended target of a human performing a reaching motion. Motion-level anticipatory models are constructed using multiple demonstrations of human reaching motions. We produce a library of motions from human demonstrations, based on a statistical representation of the degrees of freedom of the human arm, using time series analysis, wherein each time step is encoded as a multivariate Gaussian distribution. We demonstrate the benefits of this approach through offline statistical analysis of human motion data. The results indicate a considerable improvement over prior techniques in early prediction, achieving 70 or higher correct classification on average for the first third of the trajectory (< 500msec). We also indicate proof-of-concept through the demonstration of a human-robot cooperative manipulation task performed with a PR2 robot. Finally, we analyze the quality of task-level anticipatory knowledge required to improve prediction performance early in the human motion trajectory.", "For robots to work effectively with humans, they must learn and recognize activities that humans perform. We enable a robot to learn a library of activities from user demonstrations and use it to recognize an action performed by an operator in real time. 
Our contributions are threefold: (1) a novel probabilistic flow tube representation that can intuitively capture a wide range of motions and can be used to support compliant execution; (2) a method to identify the relevant features of a motion, and ensure that the learned representation preserves these features in new and unforeseen situations; (3) a fast incremental algorithm for recognizing user-performed motions using this representation. Our approach provides several capabilities beyond those of existing algorithms. First, we leverage temporal information to model motions that may exhibit non-Markovian characteristics. Second, our approach can identify parameters of a motion not explicitly specified by the user. Third, we model hybrid continuous and discrete motions in a unified representation that avoids abstracting out the continuous details of the data. Experimental results show a 49 improvement over prior art in recognition rate for varying environments, and a 24 improvement for a static environment, while maintaining average computing times for incremental recognition of less than half of human reaction time. We also demonstrate motion learning and recognition capabilities on real-world robot platforms.", "An important aspect of human perception is anticipation, which we use extensively in our day-to-day activities when interacting with other humans as well as with our surroundings. Anticipating which activities will a human do next (and how) can enable an assistive robot to plan ahead for reactive responses. Furthermore, anticipation can even improve the detection accuracy of past activities. The challenge, however, is two-fold: We need to capture the rich context for modeling the activities and object affordances, and we need to anticipate the distribution over a large space of future human activities. 
In this work, we represent each possible future using an anticipatory temporal conditional random field (ATCRF) that models the rich spatial-temporal relations through object affordances. We then consider each ATCRF as a particle and represent the distribution over the potential futures using a set of particles. In extensive evaluation on CAD-120 human activity RGB-D dataset, we first show that anticipation improves the state-of-the-art detection results. We then show that for new subjects (not seen in the training set), we obtain an activity anticipation accuracy (defined as whether one of top three predictions actually happened) of 84.1, 74.4 and 62.2 percent for an anticipation time of 1, 3 and 10 seconds respectively. Finally, we also show a robot using our algorithm for performing a few reactive responses.", "Summary. A modelling approach for three-dimensional trajectories with particular application to hand reaching motions is described. Bezier curves are defined by control points which have a convenient geometrical interpretation. A fitting method for the control points to trajectory data is described. These fitted control points are then linked to covariates of interest by using a regression model. This allows the prediction of new trajectories and the ability to model the variability in trajectories. The methodology is illustrated with an application to hand trajectory modelling for ergonomics. Motion capture was used to collect a total of about 2000 hand trajectories performed by 20 subjects to a variety of targets. A simple model with strong predictive performance and interpretablility is developed.The use of hand trajectory models in the digital human models for virtual manufacturing applications is discussed.", "In this paper we present a framework that allows a human and a robot to perform simultaneous manipulation tasks safely in close proximity. The proposed framework is based on early prediction of the human's motion. 
The prediction system, which builds on previous work in the area of gesture recognition, generates a prediction of human workspace occupancy by computing the swept volume of learned human motion trajectories. The motion planner then plans robot trajectories that minimize a penetration cost in the human workspace occupancy while interleaving planning and execution. Multiple plans are computed in parallel, one for each robot task available at the current time, and the trajectory with the least cost is selected for execution. We test our framework in simulation using recorded human motions and a simulated PR2 robot. Our results show that our framework enables the robot to avoid the human while still accomplishing the robot's task, even in cases where the initial prediction of the human's motion is incorrect. We also show that taking into account the predicted human workspace occupancy in the robot's motion planner leads to safer and more efficient interactions between the user and the robot than only considering the human's current configuration." ] }
The aforementioned approaches observe and track the body by means of visual systems, for example by exploiting 3D camera networks or markers attached to the body. This solution has the drawback of being sensitive to camera occlusions, light variations, and motion blur @cite_20 . IMUs are probably the main alternative to cameras; they are effectively used to learn new behaviours @cite_27 and to control robots in industrial setups @cite_22 . In many cases, a multi-modal approach can be used to enrich the information and overcome the limitations of uni-modal systems @cite_7 . Many solutions propose the introduction of physiological signals, recorded directly from the human body. EMG signals have rarely been considered as the sole tool for motion prediction @cite_9 due to their non-stationarity and sensitivity to muscular fatigue and stress @cite_18 . Nevertheless, EMG signals are quite popular for controlling exoskeletons or prostheses @cite_10 and have proved to be a valuable source of information in cooperative tasks when used in conjunction with other measurement units @cite_28 .
{ "cite_N": [ "@cite_18", "@cite_22", "@cite_7", "@cite_28", "@cite_9", "@cite_27", "@cite_10", "@cite_20" ], "mid": [ "2488164446", "2530272142", "", "2564917287", "2101027380", "2566429685", "2497488250", "2165744315" ], "abstract": [ "This lecture explores the various uses of surface electromyography in the field of biomechanics. Three groups of applications are considered: those involving the activation timing of muscles, the force EMG signal relationship, and the use of the EMG signal as a fatigue index. Technical considerations for recording the EMG signal with maximal fidelity are reviewed, and a compendium of all known factors that affect the information contained in the EMG signal is presented. Questions are posed to guide the practitioner in the proper use of surface electromyography. Sixteen recommendations are made regarding the proper detection, analysis, and interpretation of the EMG signal and measured force. Sixteen outstanding problems that present the greatest challenges to the advancement of surface electromyography are put forward for consideration. Finally, a plea is made for arriving at an international agreement on procedures commonly used in electromyography and biomechanics.", "In this paper, we have developed a neural network (NN) control enhanced teleoperation strategy which has been implemented on the Baxter robot. The upper limb motion of the human operator is captured by the inertial measurement unit (IMU) embedded in a pair of MYO armbands which are worn on the operator's forearm and upper arm, respectively. They are used to detect and to reconstruct the physical motion of shoulder and elbow joints of the operator. Given human operator's motion as reference trajectories, the robot is controlled using NN technique to compensate for its unknown dynamics. Adaptive law has been synthesized based on Lyapunov theory to enable effective NN learning. 
Preliminary experiments have been carried out to test the proposed method, which results in satisfactory performance on the Baxter robot teleoperation.", "", "This paper presents a novel approach for human-robot cooperation in tasks with dynamic uncertainties. The essential element of the proposed method is a multi-modal interface that provides the robot with feedback about the human motor behaviour in real-time. The human muscle activity measurements and the arm force manipulability properties encode the information about the motion and impedance, and the intended configuration of the task frame, respectively. Through this human-in-the-loop framework, the developed hybrid controller of the robot can adapt its actions to provide the desired motion and impedance regulation in different phases of the cooperative task. We experimentally evaluate the proposed approach in a two-person sawing task that requires an appropriate complementary behaviour from the two agents.", "In this paper we describe and practically demonstrate a robotic arm/hand system that is controlled in real time in 6D Cartesian space through measured human muscular activity. The soft-robotics control architecture of the robotic system ensures safe physical human-robot interaction as well as stable behaviour while operating in an unstructured environment. Muscular control is realised via surface electromyography, a non-invasive and simple way to gather human muscular activity from the skin. A standard supervised machine learning system is used to create a map from muscle activity to hand position, orientation and grasping force, which can then be evaluated in real time—the existence of such a map is guaranteed by gravity compensation and low-speed movement. No kinematic or dynamic model of the human arm is necessary, which makes the system quickly adaptable to anyone. Numerical validation shows that the system achieves good movement precision. 
Live evaluation and demonstration of the system during a robotic trade fair is reported and confirms the validity of the approach, which has potential applications in muscle-disorder rehabilitation or in teleoperation where a close-range, safe master-slave interaction is required, and/or when optical/magnetic position tracking cannot be enforced.", "All over the world, people find joy and amusement in playing hand-clapping games such as “Pat-a-cake” and “Slide.” Thus, as robots enter everyday human spaces and work together with people, we see potential for them to entertain, engage, and assist humans through cooperative clapping games. This paper explores how data recorded from a pair of commonly available inertial measurement units (IMUs) worn on a human's hands can contribute to the teaching of a hand-clapping robot. We identified representative hand-clapping activities, considered approaches to classify games, and conducted a study to record hand-clapping motion data. Analysis of data from fifteen participants indicates that support vector machines and Markov chain analysis can correctly classify 95.5% of the demonstrated hand-clapping motions (from ten discrete actions) and 92.3% of the hand-clapping game demonstrations recorded in the study. These results were calculated by withholding a participant's entire dataset for testing, so these results should represent general system behavior for new users. Overall, this research lays the groundwork for a simple and efficient method that people could use to demonstrate hand-clapping games to robots.", "The interaction with robotic devices by means of physiological human signals has become of great interest in recent years because of the capability of catching the human intention of movement and translating it into a coherent action performed by a robotic platform. Due to the complexity of EMG signals, several studies have been carried out about models built on a single subject (subject-specific). 
However, the execution of a certain task presents a common underlying behaviour, even if it is performed by different people. This common behaviour leads to some constraints that could be extracted by looking at different interpretations of the task, obtaining a subject-independent model. The few attempts in the literature showed the possibility of creating a multiuser interface able to adapt to novel users (subject-independent). Nevertheless, the majority of the studies focused on classification problems, which are only able to determine the type of movement. We improved the state-of-the-art by introducing an online subject-independent framework able to compute the actual trajectory of the robot motion through a regression technique. The framework is based on a Gaussian Mixture Model (GMM) trained through Surface Electromyography (sEMG) signals coming from human subjects. Wavelet Transform has been used to process the sEMG signals in real time. The goodness of the proposed framework has been tested with two different datasets involving various joints for both upper and lower limbs. The achieved results show that our framework obtains high performance in both accuracy and computational time, reaching significant correlation (≥ 0.8). The whole procedure has been tested on two robots, a simulated hand and a humanoid, by remapping the human motion to the robotic platforms in order to verify the proper execution of the original movement.
The purpose of this study was to assess concurrent validity of the Kinect™ with Brekel Kinect software in comparison to Vicon Nexus during sagittal plane gait kinematics. Twenty healthy adults (nine male, 11 female) were tracked while walking and jogging at three velocities on a treadmill. Concurrent hip and knee peak flexion and extension and stride timing measurements were compared between Vicon and Kinect™. Although Kinect measurements were representative of normal gait, the Kinect™ generally under-estimated joint flexion and over-estimated extension. Kinect™ and Vicon hip angular displacement correlation was very low and error was l..." ] }
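The arm/hand teleoperation abstract above describes a standard supervised map from muscle activity to hand pose that is cheap enough to evaluate in real time. A minimal sketch of such a map, with synthetic data standing in for the sEMG features and hand positions (the channel count, sample count, noise level, and ridge penalty are our own assumptions, not taken from the paper):

```python
import numpy as np

# Synthetic stand-in for sEMG features (8 channels) and 3D hand positions.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                 # 200 samples of muscle activity
W_true = rng.normal(size=(8, 3))              # unknown activity -> pose map
Y = X @ W_true + 0.01 * rng.normal(size=(200, 3))

# Ridge-regularized least squares: one simple supervised choice for the map.
lam = 1e-3
W = np.linalg.solve(X.T @ X + lam * np.eye(8), X.T @ Y)

# Real-time evaluation is then a single matrix product per new sample.
pose = X[:1] @ W
```

At runtime the learned `W` is applied to each incoming feature vector, which matches the point that evaluating the map, once learned, is trivially fast.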
1905.11734
2946982613
In this paper, we aim at improving human motion prediction during human-robot collaboration in industrial facilities by exploiting contributions from both physical and physiological signals. Improved human-machine collaboration could prove useful in several areas, and it is crucial for interacting robots to understand human movement as soon as possible to avoid accidents and injuries. In this perspective, we propose a novel human-robot interface capable of anticipating the user's intention while performing reaching movements on a working bench in order to plan the action of a collaborative robot. The proposed interface can find many applications in the Industry 4.0 framework, where autonomous and collaborative robots will be an essential part of innovative facilities. Motion intention prediction and motion direction prediction levels have been developed to improve detection speed and accuracy. A Gaussian Mixture Model (GMM) has been trained with IMU and EMG data following an evidence accumulation approach to predict reaching direction. Novel dynamic stopping criteria have been proposed to flexibly adjust the trade-off between early anticipation and accuracy according to the application. The output of the two predictors has been used as external inputs to a Finite State Machine (FSM) to control the behaviour of a physical robot according to the user's action or inaction. Results show that our system outperforms previous methods, achieving a real-time classification accuracy of @math after @math from movement onset.
In our work, the system exploits the information registered through a pair of Myo armbands, by Thalmic Labs, that enables simultaneous acquisition of kinematic and muscle activity information, at a very affordable cost. This device has already shown good acceptability in healthcare environments @cite_25 . However, few attempts have been made to introduce it in industrial environments @cite_22 . In the following, we will take into consideration the Myo multi-modal interface in combination with confidence-based criteria for dynamic stopping for early predictions of human motion in cooperative industrial tasks, while measuring feasibility and efficiency of the proposed methods.
{ "cite_N": [ "@cite_22", "@cite_25" ], "mid": [ "2530272142", "2785342388" ], "abstract": [ "In this paper, we have developed a neural network (NN) control enhanced teleoperation strategy which has been implemented on the Baxter robot. The upper limb motion of the human operator is captured by the inertial measurement unit (IMU) embedded in a pair of MYO armbands which are worn on the operator's forearm and upper arm, respectively. They are used to detect and to reconstruct the physical motion of shoulder and elbow joints of the operator. Given the human operator's motion as reference trajectories, the robot is controlled using an NN technique to compensate for its unknown dynamics. An adaptive law has been synthesized based on Lyapunov theory to enable effective NN learning. Preliminary experiments have been carried out to test the proposed method, which results in satisfactory performance on the Baxter robot teleoperation.", "Abstract This paper proposes the use of a non-immersive virtual reality rehabilitation system \"ReHabgame\" developed using Microsoft Kinect™ and the Thalmic™ Labs Myo gesture control armband. The ReHabgame was developed based on two third-person video games that provide a feasible means of assessing postural control and functional reach tests. It accurately quantifies specific postural control mechanisms including timed standing balance, functional reach tests using real-time anatomical landmark orientation, joint velocity, and acceleration while end trajectories were calculated using an inverse kinematics algorithm. The game was designed to help patients with neurological impairment to engage in physiotherapy activity and practice postures of daily activities. The subjective experience of the ReHabgame was studied through the development of an Engagement Questionnaire (EQ) for qualitative, quantitative and Rasch model analyses. 
The Monte-Carlo Tree Search (MCTS) and Random object (ROG) generator algorithms were used to adapt the physical and gameplay intensity in the ReHabgame based on the Motor Assessment Scale (MAS) and Hierarchical Scoring System (HSS). Rasch analysis was conducted to assess the psychometric characteristics of the ReHabgame and to identify if there are any misfitting items in the game. A Rasch rating scale model (RSM) was used to assess the engagement of players in the ReHabgame and evaluate the effectiveness and attractiveness of the game. The results showed that the scales assessing the rehabilitation process met Rasch expectations of reliability and unidimensionality. Infit and outfit mean square values are in the range of (0.68–1.52) for all considered 16 items. The Root Mean Square Residual (RMSR) and the person separation reliability were acceptable. The item person map showed that the persons and items were clustered symmetrically." ] }
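The record above combines evidence accumulation over a stream of likelihoods with dynamic stopping criteria. A toy sketch of that idea follows; the class count, per-sample log-likelihoods, and the 0.95 threshold are invented for illustration, and the paper's trained GMM would supply the real likelihoods:

```python
import math

def accumulate(stream, n_classes, threshold=0.95, max_steps=50):
    """Sum per-class log-likelihoods over time; stop once the normalized
    posterior of the best class crosses the confidence threshold."""
    log_ev = [0.0] * n_classes
    best, probs, t = 0, [1.0 / n_classes] * n_classes, 0
    for t, loglikes in enumerate(stream, start=1):
        log_ev = [e + ll for e, ll in zip(log_ev, loglikes)]
        m = max(log_ev)                      # max-shifted softmax for stability
        probs = [math.exp(e - m) for e in log_ev]
        z = sum(probs)
        probs = [p / z for p in probs]
        best = max(range(n_classes), key=probs.__getitem__)
        if probs[best] >= threshold or t >= max_steps:
            break
    return best, probs[best], t              # decision, confidence, stop time

# A stream whose samples weakly favour class 1 at every step.
stream = [[-1.2, -1.0, -1.3]] * 20
decision, conf, t = accumulate(stream, n_classes=3)
```

Raising the threshold trades earlier anticipation for accuracy, which is exactly the knob the dynamic stopping criteria expose.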
1905.11775
2947882452
In this study, the importance of user inputs is studied in the context of personalizing human activity recognition models using incremental learning. Inertial sensor data from three body positions are used, and the classification is based on the Learn++ ensemble method. Three different approaches to update models are compared: non-supervised, semi-supervised and supervised. The non-supervised approach relies fully on predicted labels, the supervised fully on user-labeled data, and the proposed semi-supervised method is a combination of these two. In fact, our experiments show that by relying on predicted labels with high confidence, and asking the user to label only uncertain observations (from 12% to 26% of the observations, depending on the base classifier used), almost as low error rates can be achieved as by using the supervised approach: the difference was less than 2 %-units. Moreover, unlike the non-supervised approach, the semi-supervised approach does not suffer from drastic concept drift, and thus, the error rate of the non-supervised approach is over 5 %-units higher than that of the semi-supervised approach.
This study focuses on human activity recognition based on inertial sensor data collected using smartphone sensors. One of the main challenges of the field is that people are different: they are unique, for instance, in terms of physical characteristics, health state or gender. Due to this, it has been shown that a model that provides accurate results for one person does not necessarily work accurately with somebody else's data. For instance, user-independent models are not accurate if they are trained with healthy study subjects and tested with subjects who have difficulties moving @cite_8 . Personal recognition models provide better recognition rates, but the challenge is that they normally require personal training data, and therefore, a personal data gathering session @cite_1 .
{ "cite_N": [ "@cite_1", "@cite_8" ], "mid": [ "2023302299", "2154600790" ], "abstract": [ "The last 20 years have seen ever-increasing research activity in the field of human activity recognition. With activity recognition having considerably matured, so has the number of challenges in designing, implementing, and evaluating activity recognition systems. This tutorial aims to provide a comprehensive hands-on introduction for newcomers to the field of human activity recognition. It specifically focuses on activity recognition using on-body inertial sensors. We first discuss the key research challenges that human activity recognition shares with general pattern recognition and identify those challenges that are specific to human activity recognition. We then describe the concept of an Activity Recognition Chain (ARC) as a general-purpose framework for designing and evaluating activity recognition systems. We detail each component of the framework, provide references to related research, and introduce the best practice methods developed by the activity recognition research community. We conclude with the educational example problem of recognizing different hand gestures from inertial sensors attached to the upper and lower arm. We illustrate how each component of this framework can be implemented for this specific activity recognition problem and demonstrate how different implementations compare and how they impact overall recognition performance.", "Mobile phones with built-in accelerometers promise a convenient, objective way to quantify everyday movements and classify those movements into activities. Using accelerometer data we estimate the activities of 18 healthy subjects and 8 patients with Parkinson’s disease – e.g. walking, standing, sitting, or simply holding the phone. 
We use standard machine learning classifiers (support vector machines, regularized logistic regression) to automatically select, weigh, and combine a large set of standard features for time series analysis. Using cross-validation across all samples we are able to correctly identify 96.1% of the activities of healthy subjects and 92.2% of the activities of Parkinson’s patients. However, when applying the classification parameters derived from the set of healthy subjects to Parkinson’s patients, the percent correct lowers to 60.3%, due to different characteristics of movement. For a fairer comparison across populations we also applied subject-wise cross-validation, identifying healthy subject activities with 86.0% accuracy and 75.1% accuracy for patients. We discuss the key differences between these populations, and why algorithms designed for and trained with healthy subject data are not reliable for activity recognition in populations with motor disabilities." ] }
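The semi-supervised update rule described above, trust confident predictions and query the user only for uncertain observations, can be sketched as follows (the class probabilities, labels, and 0.8 threshold are toy values of our own, not from the paper):

```python
def choose_labels(proba, user_labels, threshold=0.8):
    """Keep model-predicted labels when confident; otherwise fall back to
    user-provided labels and count how often the user had to be asked."""
    labels, queries = [], 0
    for p, y in zip(proba, user_labels):
        conf = max(p)
        if conf >= threshold:
            labels.append(p.index(conf))   # trust the prediction
        else:
            labels.append(y)               # uncertain: ask the user
            queries += 1
    return labels, queries

proba = [[0.9, 0.1], [0.55, 0.45], [0.2, 0.8], [0.5, 0.5]]
truth = [0, 1, 1, 0]
labels, queries = choose_labels(proba, truth)
```

The `queries` count corresponds to the 12-26% of observations the study reports the user is actually asked to label; the incremental learner is then updated with the mixed label set.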
1905.11391
2948011695
Using different sources of information to support automated extracting of relations between biomedical concepts contributes to the development of our understanding of biological systems. The primary comprehensive source of these relations is biomedical literature. Several relation extraction approaches have been proposed to identify relations between concepts in biomedical literature, namely using neural networks algorithms. The use of multichannel architectures composed of multiple data representations, as in deep neural networks, is leading to state-of-the-art results. The right combination of data representations can eventually lead us to even higher evaluation scores in relation extraction tasks. Thus, biomedical ontologies play a fundamental role by providing semantic and ancestry information about an entity. The incorporation of biomedical ontologies has already been proved to enhance previous state-of-the-art results.
: has the purpose of breaking the text into tokens to be processed individually or as a sequence. These tokens are usually words but can also be phrases, numbers and other types of elements. The most straightforward form of tokenization is breaking the input text at whitespace or punctuation. However, with scientific biomedical literature, which is usually descriptive and formal, we have to account for complex entities like human phenotype terms (composed of multiple words), genes (represented by symbols), and other types of structured entities. These entities tend to be morphologically complex and need specialized tokenization pipelines. Some researchers use a compression algorithm @cite_46 , byte pair encoding (BPE), to account for biomedical vocabulary variability. BPE represents open vocabularies through a fixed-size vocabulary of variable-length character sequences, making it suitable for neural network models.
{ "cite_N": [ "@cite_46" ], "mid": [ "1816313093" ], "abstract": [ "Neural machine translation (NMT) models typically operate with a fixed vocabulary, but translation is an open-vocabulary problem. Previous work addresses the translation of out-of-vocabulary words by backing off to a dictionary. In this paper, we introduce a simpler and more effective approach, making the NMT model capable of open-vocabulary translation by encoding rare and unknown words as sequences of subword units. This is based on the intuition that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations). We discuss the suitability of different word segmentation techniques, including simple character n-gram models and a segmentation based on the byte pair encoding compression algorithm, and empirically show that subword models improve over a back-off dictionary baseline for the WMT 15 translation tasks English-German and English-Russian by 1.1 and 1.3 BLEU, respectively." ] }
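The BPE scheme cited via @cite_46 learns its fixed-size subword vocabulary by repeatedly merging the most frequent adjacent symbol pair. A minimal learner over a toy word-frequency vocabulary (the vocabulary is our own example, in the spirit of the cited paper):

```python
from collections import Counter

def learn_bpe(word_freqs, num_merges):
    """Return the ordered list of pair merges learned from a vocabulary
    mapping words to their corpus frequencies."""
    vocab = {tuple(word): freq for word, freq in word_freqs.items()}
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for word, freq in vocab.items():
            for a, b in zip(word, word[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)     # most frequent adjacent pair
        merges.append(best)
        new_vocab = {}
        for word, freq in vocab.items():     # apply the merge everywhere
            out, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                    out.append(word[i] + word[i + 1])
                    i += 2
                else:
                    out.append(word[i])
                    i += 1
            new_vocab[tuple(out)] = freq
        vocab = new_vocab
    return merges

merges = learn_bpe({"low": 5, "lower": 2, "newest": 6, "widest": 3}, 4)
```

A tokenizer then applies the learned merges greedily to unseen words, so a rare biomedical term decomposes into known subword units instead of a catch-all unknown token.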
1905.11391
2948011695
Using different sources of information to support automated extracting of relations between biomedical concepts contributes to the development of our understanding of biological systems. The primary comprehensive source of these relations is biomedical literature. Several relation extraction approaches have been proposed to identify relations between concepts in biomedical literature, namely using neural networks algorithms. The use of multichannel architectures composed of multiple data representations, as in deep neural networks, is leading to state-of-the-art results. The right combination of data representations can eventually lead us to even higher evaluation scores in relation extraction tasks. Thus, biomedical ontologies play a fundamental role by providing semantic and ancestry information about an entity. The incorporation of biomedical ontologies has already been proved to enhance previous state-of-the-art results.
: aims at reducing the variability of natural language by normalizing a token to its base form (stem) @cite_23 . It can also take into account the context of the token, along with vocabulary and morphological analysis to determine the canonical form of the word (lemma). The stem can correspond only to a fragment of a word, but the lemma is always a real word. For instance, the stem of the word is and the lemma is .
{ "cite_N": [ "@cite_23" ], "mid": [ "1532325895" ], "abstract": [ "Class-tested and coherent, this groundbreaking new textbook teaches web-era information retrieval, including web search and the related areas of text classification and text clustering from basic concepts. Written from a computer science perspective by three leading experts in the field, it gives an up-to-date treatment of all aspects of the design and implementation of systems for gathering, indexing, and searching documents; methods for evaluating systems; and an introduction to the use of machine learning methods on text collections. All the important ideas are explained using examples and figures, making it perfect for introductory courses in information retrieval for advanced undergraduates and graduate students in computer science. Based on feedback from extensive classroom experience, the book has been carefully structured in order to make teaching more natural and effective. Although originally designed as the primary text for a graduate or advanced undergraduate course in information retrieval, the book will also create a buzz for researchers and professionals alike." ] }
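The stem/lemma paragraph above lost its concrete example words, so as our own illustration (toy suffix rules and a tiny lookup table, not a real stemmer or lemmatizer): "studies" stems to the fragment "stud" while its lemma is the real word "study".

```python
def stem(token):
    """Blind suffix stripping: the result may be only a word fragment."""
    for suffix in ("ies", "ing", "ed", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

# A lookup table stands in for the vocabulary and morphological analysis
# a lemmatizer uses, which always yields a real word (the lemma).
LEMMAS = {"studies": "study", "ran": "run", "better": "good"}

def lemmatize(token):
    return LEMMAS.get(token, token)

stem_out, lemma_out = stem("studies"), lemmatize("studies")
```

The contrast shows the point in the text: the stem can be a mere fragment of a word, while the lemma is always a canonical dictionary form.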
1905.11391
2948011695
Using different sources of information to support automated extracting of relations between biomedical concepts contributes to the development of our understanding of biological systems. The primary comprehensive source of these relations is biomedical literature. Several relation extraction approaches have been proposed to identify relations between concepts in biomedical literature, namely using neural networks algorithms. The use of multichannel architectures composed of multiple data representations, as in deep neural networks, is leading to state-of-the-art results. The right combination of data representations can eventually lead us to even higher evaluation scores in relation extraction tasks. Thus, biomedical ontologies play a fundamental role by providing semantic and ancestry information about an entity. The incorporation of biomedical ontologies has already been proved to enhance previous state-of-the-art results.
Text mining has become a widespread approach to identify and extract information from unstructured or highly heterogeneous text @cite_27 . Text mining is used to extract facts and relationships in a structured form that can be used to annotate specialized databases and to transfer knowledge between domains @cite_0 . We may consider text mining a sub-field of data mining; thus, data mining algorithms can be applied if we transform text into a proper data representation, namely numeric vectors. Even though text mining tools have evolved considerably in number and quality in recent years, there are still many challenges in applying text mining to scientific biomedical literature. The main challenges are the complexity and heterogeneity of the written resources, which make the retrieval of relevant information, i.e., relations between entities, not a trivial task. Text mining tools can target different tasks together or separately. Some of the primary tasks are Named Entity Recognition (NER), Named-Entity Linking (NEL) and Relation Extraction (RE).
{ "cite_N": [ "@cite_0", "@cite_27" ], "mid": [ "1968761064", "2786693834" ], "abstract": [ "In recent years the amount of experimental data that is produced in biomedical research and the number of papers that are being published in this field have grown rapidly. In order to keep up to date with developments in their field of interest and to interpret the outcome of experiments in light of all available literature, researchers turn more and more to the use of automated literature mining. As a consequence, text mining tools have evolved considerably in number and quality and nowadays can be used to address a variety of research questions ranging from de novo drug target discovery to enhanced biological interpretation of the results from high throughput experiments. In this paper we introduce the most important techniques that are used for a text mining and give an overview of the text mining tools that are currently being used and the type of problems they are typically applied for.", "Across academia and industry, text mining has become a popular strategy for keeping up with the rapid growth of the scientific literature. Text mining of the scientific literature has mostly been carried out on collections of abstracts, due to their availability. Here we present an analysis of 15 million English scientific full-text articles published during the period 1823–2016. We describe the development in article length and publication sub-topics during these nearly 250 years. We showcase the potential of text mining by extracting published protein–protein, disease–gene, and protein subcellular associations using a named entity recognition system, and quantitatively report on their accuracy using gold standard benchmark data sets. We subsequently compare the findings to corresponding results obtained on 16.5 million abstracts included in MEDLINE and show that text mining of full-text articles consistently outperforms using abstracts only." ] }
1905.11391
2948011695
Using different sources of information to support automated extracting of relations between biomedical concepts contributes to the development of our understanding of biological systems. The primary comprehensive source of these relations is biomedical literature. Several relation extraction approaches have been proposed to identify relations between concepts in biomedical literature, namely using neural networks algorithms. The use of multichannel architectures composed of multiple data representations, as in deep neural networks, is leading to state-of-the-art results. The right combination of data representations can eventually lead us to even higher evaluation scores in relation extraction tasks. Thus, biomedical ontologies play a fundamental role by providing semantic and ancestry information about an entity. The incorporation of biomedical ontologies has already been proved to enhance previous state-of-the-art results.
: identifies relations between entities (recognized manually or by NER) in a text. Tools mainly consider relations by the co-occurrence of the entities in the same sentence, but some progress is being made to extend this task to the full document (taking into account a global context) @cite_43 .
{ "cite_N": [ "@cite_43" ], "mid": [ "2559034752" ], "abstract": [ "The practice of precision medicine will ultimately require databases of genes and mutations for healthcare providers to reference in order to understand the clinical implications of each patient’s genetic makeup. Although the highest quality databases require manual curation, text mining tools can facilitate the curation process, increasing accuracy, coverage, and productivity. However, to date there are no available text mining tools that offer high-accuracy performance for extracting such triplets from biomedical literature. In this paper we propose a high-performance machine learning approach to automate the extraction of disease-gene-variant triplets from biomedical literature. Our approach is unique because we identify the genes and protein products associated with each mutation from not just the local text content, but from a global context as well (from the Internet and from all literature in PubMed). Our approach also incorporates protein sequence validation and disease association using a novel text-mining-based machine learning approach. We extract disease-gene-variant triplets from all abstracts in PubMed related to a set of ten important diseases (breast cancer, prostate cancer, pancreatic cancer, lung cancer, acute myeloid leukemia, Alzheimer’s disease, hemochromatosis, age-related macular degeneration (AMD), diabetes mellitus, and cystic fibrosis). We then evaluate our approach in two ways: (1) a direct comparison with the state of the art using benchmark datasets; (2) a validation study comparing the results of our approach with entries in a popular human-curated database (UniProt) for each of the previously mentioned diseases. In the benchmark comparison, our full approach achieves a 28% improvement in F1-measure (from 0.62 to 0.79) over the state-of-the-art results. For the validation study with UniProt Knowledgebase (KB), we present a thorough analysis of the results and errors. 
Across all diseases, our approach returned 272 triplets (disease-gene-variant) that overlapped with entries in UniProt and 5,384 triplets without overlap in UniProt. Analysis of the overlapping triplets and of a stratified sample of the non-overlapping triplets revealed accuracies of 93% and 80% for the respective categories (cumulative accuracy, 77%). We conclude that our process represents an important and broadly applicable improvement to the state of the art for curation of disease-gene-variant relationships." ] }
1905.11391
2948011695
Using different sources of information to support automated extracting of relations between biomedical concepts contributes to the development of our understanding of biological systems. The primary comprehensive source of these relations is biomedical literature. Several relation extraction approaches have been proposed to identify relations between concepts in biomedical literature, namely using neural networks algorithms. The use of multichannel architectures composed of multiple data representations, as in deep neural networks, is leading to state-of-the-art results. The right combination of data representations can eventually lead us to even higher evaluation scores in relation extraction tasks. Thus, biomedical ontologies play a fundamental role by providing semantic and ancestry information about an entity. The incorporation of biomedical ontologies has already been proved to enhance previous state-of-the-art results.
Through the years, several approaches have been proposed to extract relations from biomedical literature @cite_12 . Most of these approaches work on a sentence level to perform RE, due to the inherent complexity of biomedical literature.
{ "cite_N": [ "@cite_12" ], "mid": [ "2594818524" ], "abstract": [ "Many biomedical relation extraction approaches are based on supervised machine learning, requiring an annotated corpus. Distant supervision aims at training a classifier by combining a knowledge base with a corpus, reducing the amount of manual effort necessary. This is particularly useful for biomedicine because many databases and ontologies have been made available for many biological processes, while the availability of annotated corpora is still limited. We studied the extraction of microRNA-gene relations from text. MicroRNA regulation is an important biological process due to its close association with human diseases. The proposed method, IBRel, is based on distantly supervised multi-instance learning. We evaluated IBRel on three datasets, and the results were compared with a co-occurrence approach as well as a supervised machine learning algorithm. While supervised learning outperformed on two of those datasets, IBRel obtained an F-score 28.3 percentage points higher on the dataset for which there was no training set developed specifically. To demonstrate the applicability of IBRel, we used it to extract 27 miRNA-gene relations from recently published papers about cystic fibrosis. Our results demonstrate that our method can be successfully used to extract relations from literature about a biological process without an annotated corpus. The source code and data used in this study are available at https: github.com AndreLamurias IBRel." ] }
1905.11391
2948011695
Using different sources of information to support automated extracting of relations between biomedical concepts contributes to the development of our understanding of biological systems. The primary comprehensive source of these relations is biomedical literature. Several relation extraction approaches have been proposed to identify relations between concepts in biomedical literature, namely using neural networks algorithms. The use of multichannel architectures composed of multiple data representations, as in deep neural networks, is leading to state-of-the-art results. The right combination of data representations can eventually lead us to even higher evaluation scores in relation extraction tasks. Thus, biomedical ontologies play a fundamental role by providing semantic and ancestry information about an entity. The incorporation of biomedical ontologies has already been proved to enhance previous state-of-the-art results.
: assumes that if two entities are mentioned in the same sentence (co-occur), it is likely that they are related. The application of this approach usually results in higher recall (most entity pairs that truly participate in a relation do co-occur in some sentence) but lower precision (not every co-occurring pair is actually related). Some methods use frequency-based scoring schemes to eliminate relations identified by chance @cite_47 . Nowadays, most applications use co-occurrence as a baseline against more complex approaches @cite_45 .
{ "cite_N": [ "@cite_47", "@cite_45" ], "mid": [ "2099369363", "2611774767" ], "abstract": [ "It is now almost 15 years since the publication of the first paper on text mining in the genomics domain, and decades since the first paper on text mining in the medical domain. Enormous progress has been made in the areas of information retrieval, evaluation methodologies and resource construction. Some problems, such as abbreviation-handling, can essentially be considered solved problems, and others, such as identification of gene mentions in text, seem likely to be solved soon. However, a number of problems at the frontiers of biomedical text mining continue to present interesting challenges and opportunities for great improvements and interesting research. In this article we review the current state of the art in biomedical text mining or ‘BioNLP’ in general, focusing primarily on papers published within the past year.", "PCT No. PCT NL83 00020 Sec. 371 Date Jan. 17, 1984 Sec. 102(e) Date Jan. 17, 1984 PCT Filed May 19, 1983 PCT Pub. No. WO83 04106 PCT Pub. Date Nov. 24, 1983.Method and apparatus for the measurement of fluid flow velocities for instance in a slurry transport conduit, in which pulse signals are sent through the fluid by pairs of transmitting and receiving transducers, the pulses having such time separation that standing waves caused by reflection fade out within the time separation, and using side band techniques and correlation techniques." ] }
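The co-occurrence baseline described above reduces to counting entity pairs that share a sentence; the resulting frequencies can then feed a scoring scheme such as those referenced via @cite_47 . A minimal sketch with invented sentences and an invented entity list:

```python
from collections import Counter
from itertools import combinations

def cooccurrence(sentences, entities):
    """Count candidate relations: every entity pair sharing a sentence."""
    counts = Counter()
    for sent in sentences:
        found = sorted(e for e in entities if e in sent)
        for pair in combinations(found, 2):
            counts[pair] += 1
    return counts

sentences = [
    "BRCA1 mutations raise breast cancer risk.",
    "BRCA1 interacts with TP53 in tumour cells.",
    "TP53 is frequently mutated in breast cancer.",
]
rel = cooccurrence(sentences, {"BRCA1", "TP53", "breast cancer"})
```

As the text notes, recall is high because any related pair that is ever mentioned together is caught, while precision is low because nothing checks that a co-occurring pair is actually related.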
1905.11391
2948011695
Using different sources of information to support automated extracting of relations between biomedical concepts contributes to the development of our understanding of biological systems. The primary comprehensive source of these relations is biomedical literature. Several relation extraction approaches have been proposed to identify relations between concepts in biomedical literature, namely using neural networks algorithms. The use of multichannel architectures composed of multiple data representations, as in deep neural networks, is leading to state-of-the-art results. The right combination of data representations can eventually lead us to even higher evaluation scores in relation extraction tasks. Thus, biomedical ontologies play a fundamental role by providing semantic and ancestry information about an entity. The incorporation of biomedical ontologies has already been proved to enhance previous state-of-the-art results.
: also uses manually defined and automatically generated rules from the training data to extract relations. Depending on the system, the differences between pattern-based and rule-based approaches can be minor. Rule-based approaches not only use patterns but also additional constraints to cover issues that are difficult to express by patterns, such as checking for the negation of relations @cite_8 . Some rule-based systems distance themselves from pattern-based approaches by replacing regular expressions with heuristic algorithms and sets of procedures @cite_14 . Similarly to pattern-based approaches, rule-based approaches tend to have poor recall, even though rules tend to be more flexible. The recall-precision trade-off can be improved using automatic methods for rule creation @cite_37 .
{ "cite_N": [ "@cite_37", "@cite_14", "@cite_8" ], "mid": [ "2126975755", "2123112337", "2167289895" ], "abstract": [ "Objective A system that translates narrative text in the medical domain into structured representation is in great demand. The system performs three sub-tasks: concept extraction, assertion classification, and relation identification. @PARASPLIT Design The overall system consists of five steps: (1) pre-processing sentences, (2) marking noun phrases (NPs) and adjective phrases (APs), (3) extracting concepts that use a dosage-unit dictionary to dynamically switch two models based on Conditional Random Fields (CRF), (4) classifying assertions based on voting of five classifiers, and (5) identifying relations using normalized sentences with a set of effective discriminating features. @PARASPLIT Measurements Macro-averaged and micro-averaged precision, recall and F-measure were used to evaluate results. @PARASPLIT Results The performance is competitive with the state-of-the-art systems with micro-averaged F-measure of 0.8489 for concept extraction, 0.9392 for assertion classification and 0.7326 for relation identification. @PARASPLIT Conclusions The system exploits an array of common features and achieves state-of-the-art performance. Prudent feature engineering sets the foundation of our systems. In concept extraction, we demonstrated that switching models, one of which is especially designed for telegraphic sentences, improved extraction of the treatment concept significantly. In assertion classification, a set of features derived from a rule-based classifier were proven to be effective for the classes such as conditional and possible. These classes would suffer from data scarcity in conventional machine-learning methods. In relation identification, we use two-staged architecture, the second of which applies pairwise classifiers to possible candidate classes. 
This architecture significantly improves performance.", "Objective: The amount of new discoveries (as published in the scientific literature) in the biomedical area is growing at an exponential rate. This growth makes it very difficult to filter the most relevant results, and thus the extraction of the core information becomes very expensive. Therefore, there is a growing interest in text processing approaches that can deliver selected information from scientific publications, which can limit the amount of human intervention normally needed to gather those results. Materials and methods: This paper presents and evaluates an approach aimed at automating the process of extracting functional relations (e.g. interactions between genes and proteins) from scientific literature in the biomedical domain. The approach, using a novel dependency-based parser, is based on a complete syntactic analysis of the corpus. Results: We have implemented a state-of-the-art text mining system for biomedical literature, based on a deep-linguistic, full-parsing approach. The results are validated on two different corpora: the manually annotated genomics information access (GENIA) corpus and the automatically annotated arabidopsis thaliana circadian rhythms (ATCR) corpus. Conclusion: We show how a deep-linguistic approach (contrary to common belief) can be used in a real world text mining application, offering high-precision relation extraction, while at the same time retaining a sufficient recall.", "Motivation: With the rapid advancement of biomedical science and the development of high-throughput analysis methods, the extraction of various types of information from biomedical text has become critical. Since automatic functional annotations of genes are quite useful for interpreting large amounts of high-throughput data efficiently, the demand for automatic extraction of information related to gene functions from text has been increasing. 
Results: We have developed a method for automatically extracting the biological process functions of genes protein families based on Gene Ontology (GO) from text using a shallow parser and sentence structure analysis techniques. When the gene protein family names and their functions are described in ACTOR (doer of action) and OBJECT (receiver of action) relationships, the corresponding GO-IDs are assigned to the genes proteins families. The gene protein family names are recognized using the gene protein family name dictionaries developed by our group. To achieve wide recognition of the gene protein family functions, we semi-automatically gather functional terms based on GO using co-occurrence, collocation similarities and rule-based techniques. A preliminary experiment demonstrated that our method has an estimated recall of 54--64 with a precision of 91--94 for actually described functions in abstracts. When applied to the PUBMED, it extracted over 190 000 gene--GO relationships and 150 000 family--GO relationships for major eukaryotes. Availability: The extracted gene functions are available at http: prime.ontology.ims.u-tokyo.ac.jp Contact: [email protected]" ] }
1905.11391
2948011695
Using different sources of information to support automated extracting of relations between biomedical concepts contributes to the development of our understanding of biological systems. The primary comprehensive source of these relations is biomedical literature. Several relation extraction approaches have been proposed to identify relations between concepts in biomedical literature, namely using neural networks algorithms. The use of multichannel architectures composed of multiple data representations, as in deep neural networks, is leading to state-of-the-art results. The right combination of data representations can eventually lead us to even higher evaluation scores in relation extraction tasks. Thus, biomedical ontologies play a fundamental role by providing semantic and ancestry information about an entity. The incorporation of biomedical ontologies has already been proved to enhance previous state-of-the-art results.
: usually makes use of large annotated biomedical corpora (supervised learning) to perform RE. These corpora are pre-processed using NLP tools and then used to train classification models. Beyond Neural Networks, described in detail in Section 3, it is possible to categorize ML methods into two main approaches, feature-based and kernel-based. Feature-based approaches represent each instance (e.g., a sentence) as a vector in an n-dimensional space. Support Vector Machine (SVM) classifiers tend to be used to solve binary classification problems, and are favored because there is no interference from the user in the classification process. These classifiers can use different features that are meant to represent the data characteristics (e.g., shortest path, bag-of-words (BOW), and POS tagging) @cite_13 . The main idea of kernel-based approaches is to quantify the similarity between the different instances in a dataset by computing the similarities of their representations @cite_48 . Kernel-based approaches add the structural representation of instances (e.g., by using parse trees). These methods can use one kernel or a combination of kernels (e.g., graph, sub-tree (ST), and shallow linguistic (SL)).
{ "cite_N": [ "@cite_48", "@cite_13" ], "mid": [ "1493270114", "2082683263" ], "abstract": [ "We propose an approach for extracting relations between entities from biomedical literature based solely on shallow linguistic information. We use a combination of kernel functions to integrate two different information sources: (i) the whole sentence where the relation appears, and (ii) the local contexts around the interacting entities. We performed experiments on extracting gene and protein interactions from two different data sets. The results show that our approach outperforms most of the previous methods based on syntactic and semantic information.", "Interactions between proteins and genes are considered essential in the description of biomolecular phenomena, and networks of interactions are applied in a system's biology approach. Recently, many studies have sought to extract information from biomolecular text using natural language processing technology. Previous studies have asserted that linguistic information is useful for improving the detection of gene interactions. In particular, syntactic relations among linguistic information are good for detecting gene interactions. However, previous systems give a reasonably good precision but poor recall. To improve recall without sacrificing precision, this paper proposes a three-phase method for detecting gene interactions based on syntactic relations. In the first phase, we retrieve syntactic encapsulation categories for each candidate agent and target. In the second phase, we construct a verb list that indicates the nature of the interaction between pairs of genes. In the last phase, we determine direction rules to detect which of two genes is the agent or target. Even without biomolecular knowledge, our method performs reasonably well using a small training dataset. While the first phase contributes to improve recall, the second and third phases contribute to improve precision. 
In the experimental results using ICML 05 Workshop on Learning Language in Logic (LLL05) data, our proposed method gave an F-measure of 67.2 for the test data, significantly outperforming previous methods. We also describe the contribution of each phase to the performance." ] }
1905.11381
2947792938
Deep-learning based classification algorithms have been shown to be susceptible to adversarial attacks: minor changes to the input of classifiers can dramatically change their outputs, while being imperceptible to humans. In this paper, we present a simple hypothesis about a feature compression property of artificial intelligence (AI) classifiers and present theoretical arguments to show that this hypothesis successfully accounts for the observed fragility of AI classifiers to small adversarial perturbations. Drawing on ideas from information and coding theory, we propose a general class of defenses for detecting classifier errors caused by abnormally small input perturbations. We further show theoretical guarantees for the performance of this detection method. We present experimental results with (a) a voice recognition system, and (b) a digit recognition system using the MNIST database, to demonstrate the effectiveness of the proposed defense methods. The ideas in this paper are motivated by a simple analogy between AI classifiers and the standard Shannon model of a communication system.
Ever since the vulnerability of deep learning models was pointed out in @cite_18 , the community has witnessed a large volume of work on this topic, from the angle of either attackers or defenders. From the attackers' side, various types of attacking methods have been proposed in different scenarios, ranging from white-box attacks, where the attackers know everything about the deep learning system, such as the system structure and weights, hyperparameters, and training data @cite_18 @cite_14 @cite_47 @cite_30 @cite_17 @cite_19 @cite_32 @cite_49 , to black-box attacks, where the attackers know nothing about the system structure or parameters and only have access to queries of the system @cite_35 @cite_39 @cite_49 @cite_33 @cite_16 @cite_54 . Although the existence of adversarial samples was originally pointed out in image classification tasks, the attacking methods for generating adversarial samples have been applied to various applications such as text classification @cite_53 , object detection @cite_50 @cite_42 , speech recognition @cite_11 , and autonomous driving @cite_13 .
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_18", "@cite_14", "@cite_11", "@cite_33", "@cite_54", "@cite_53", "@cite_42", "@cite_32", "@cite_39", "@cite_19", "@cite_49", "@cite_50", "@cite_47", "@cite_16", "@cite_13", "@cite_17" ], "mid": [ "2460937040", "2603766943", "1673923490", "1945616565", "2783113218", "2939328166", "2949358371", "2902901209", "2906586812", "2963857521", "2906208681", "2180612164", "2902364018", "2950774971", "2179402106", "2950782995", "2921029156", "2950159395" ], "abstract": [ "Most existing machine learning classifiers are highly vulnerable to adversarial examples. An adversarial example is a sample of input data which has been modified very slightly in a way that is intended to cause a machine learning classifier to misclassify it. In many cases, these modifications can be so subtle that a human observer does not even notice the modification at all, yet the classifier still makes a mistake. Adversarial examples pose security concerns because they could be used to perform an attack on machine learning systems, even if the adversary has no access to the underlying model. Up to now, all previous work have assumed a threat model in which the adversary can feed data directly into the machine learning classifier. This is not always the case for systems operating in the physical world, for example those which are using signals from cameras and other sensors as an input. This paper shows that even in such physical world scenarios, machine learning systems are vulnerable to adversarial examples. We demonstrate this by feeding adversarial images obtained from cell-phone camera to an ImageNet Inception classifier and measuring the classification accuracy of the system. 
We find that a large fraction of adversarial examples are classified incorrectly even when perceived through the camera.", "Machine learning (ML) models, e.g., deep neural networks (DNNs), are vulnerable to adversarial examples: malicious inputs modified to yield erroneous model outputs, while appearing unmodified to human observers. Potential attacks include having malicious content like malware identified as legitimate or controlling vehicle behavior. Yet, all existing adversarial example attacks require knowledge of either the model internals or its training data. We introduce the first practical demonstration of an attacker controlling a remotely hosted DNN with no such knowledge. Indeed, the only capability of our black-box adversary is to observe labels given by the DNN to chosen inputs. Our attack strategy consists in training a local model to substitute for the target DNN, using inputs synthetically generated by an adversary and labeled by the target DNN. We use the local substitute to craft adversarial examples, and find that they are misclassified by the targeted DNN. To perform a real-world and properly-blinded evaluation, we attack a DNN hosted by MetaMind, an online deep learning API. We find that their DNN misclassifies 84.24 of the adversarial examples crafted with our substitute. We demonstrate the general applicability of our strategy to many ML techniques by conducting the same attack against models hosted by Amazon and Google, using logistic regression substitutes. They yield adversarial examples misclassified by Amazon and Google at rates of 96.19 and 88.94 . We also find that this black-box attack strategy is capable of evading defense strategies previously found to make adversarial example crafting harder.", "Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks. 
While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high level units and random linear combinations of high level units, according to various methods of unit analysis. It suggests that it is the space, rather than the individual units, that contains of the semantic information in the high layers of neural networks. Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extend. We can cause the network to misclassify an image by applying a certain imperceptible perturbation, which is found by maximizing the network's prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, that was trained on a different subset of the dataset, to misclassify the same input.", "Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. 
Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.", "We construct targeted audio adversarial examples on automatic speech recognition. Given any audio waveform, we can produce another that is over 99.9 similar, but transcribes as any phrase we choose (at a rate of up to 50 characters per second). We apply our iterative optimization-based attack to Mozilla's implementation DeepSpeech end-to-end, and show it has a 100 success rate. The feasibility of this attack introduce a new domain to study adversarial examples.", "Face recognition has obtained remarkable progress in recent years due to the great improvement of deep convolutional neural networks (CNNs). However, deep CNNs are vulnerable to adversarial examples, which can cause fateful consequences in real-world face recognition applications with security-sensitive purposes. Adversarial attacks are widely studied as they can identify the vulnerability of the models before they are deployed. In this paper, we evaluate the robustness of state-of-the-art face recognition models in the decision-based black-box attack setting, where the attackers have no access to the model parameters and gradients, but can only acquire hard-label predictions by sending queries to the target model. This attack setting is more practical in real-world face recognition systems. To improve the efficiency of previous methods, we propose an evolutionary attack algorithm, which can model the local geometries of the search directions and reduce the dimension of the search space. Extensive experiments demonstrate the effectiveness of the proposed method that induces a minimum perturbation to an input face image with fewer queries. 
We also apply the proposed method to attack a real-world face recognition system successfully.", "Solving for adversarial examples with projected gradient descent has been demonstrated to be highly effective in fooling the neural network based classifiers. However, in the black-box setting, the attacker is limited only to the query access to the network and solving for a successful adversarial example becomes much more difficult. To this end, recent methods aim at estimating the true gradient signal based on the input queries but at the cost of excessive queries. We propose an efficient discrete surrogate to the optimization problem which does not require estimating the gradient and consequently becomes free of the first order update hyperparameters to tune. Our experiments on Cifar-10 and ImageNet show the state of the art black-box attack performance with significant reduction in the required queries compared to a number of recently proposed methods. The source code is available at this https URL.", "Adversarial examples are carefully constructed modifications to an input that completely change the output of a classifier but are imperceptible to humans. Despite these successful attacks for continuous data (such as image and audio samples), generating adversarial examples for discrete structures such as text has proven significantly more challenging. In this paper we formulate the attacks with discrete input on a set function as an optimization task. We prove that this set function is submodular for some popular neural network text classifiers under simplifying assumption. This finding guarantees a @math approximation factor for attacks that use the greedy algorithm. Meanwhile, we show how to use the gradient of the attacked classifier to guide the greedy search. Empirical studies with our proposed optimization scheme show significantly improved attack ability and efficiency, on three different text classification tasks over various baselines. 
We also use a joint sentence and word paraphrasing technique to maintain the original semantics and syntax of the text. This is validated by a human subject evaluation in subjective metrics on the quality and semantic coherence of our generated adversarial text.", "In this paper, we proposed the first practical adversarial attacks against object detectors in realistic situations: the adversarial examples are placed in different angles and distances, especially in the long distance (over 20m) and wide angles 120 degree. To improve the robustness of adversarial examples, we proposed the nested adversarial examples and introduced the image transformation techniques. Transformation methods aim to simulate the variance factors such as distances, angles, illuminations, etc., in the physical world. Two kinds of attacks were implemented on YOLO V3, a state-of-the-art real-time object detector: hiding attack that fools the detector unable to recognize the object, and appearing attack that fools the detector to recognize the non-existent object. The adversarial examples are evaluated in three environments: indoor lab, outdoor environment, and the real road, and demonstrated to achieve the success rate up to 92.4 based on the distance range from 1m to 25m. In particular, the real road testing of hiding attack on a straight road and a crossing road produced the success rate of 75 and 64 respectively, and the appearing attack obtained the success rates of 63 and 81 respectively, which we believe, should catch the attention of the autonomous driving community.", "Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to adversarial examples: given an input x and any target classification t, it is possible to find a new input x' that is similar to x but classified as t. This makes it difficult to apply neural networks in security-critical areas. 
Defensive distillation is a recently proposed approach that can take an arbitrary neural network, and increase its robustness, reducing the success rate of current attacks' ability to find adversarial examples from 95 to 0.5 .In this paper, we demonstrate that defensive distillation does not significantly increase the robustness of neural networks by introducing three new attack algorithms that are successful on both distilled and undistilled neural networks with 100 probability. Our attacks are tailored to three distance metrics used previously in the literature, and when compared to previous adversarial example generation algorithms, our attacks are often much more effective (and never worse). Furthermore, we propose using high-confidence adversarial examples in a simple transferability test we show can also be used to break defensive distillation. We hope our attacks will be used as a benchmark in future defense attempts to create neural networks that resist adversarial examples.", "We consider adversarial examples in the black-box decision-based scenario. Here, an attacker has access to the final classification of a model, but not its parameters or softmax outputs. Most attacks for this scenario are based either on transferability, which is unreliable, or random sampling, which is often slow. Focusing on the latter, we propose to improve the efficiency of sampling-based attacks with prior beliefs about the target domain. We identify two such priors, image frequency and surrogate gradients, and discuss how to integrate them into a unified sampling procedure. We then formulate the Biased Boundary Attack, which achieves a drastic speedup over the original Boundary Attack. We demonstrate the effectiveness of our approach against an ImageNet classifier. We also showcase a targeted attack for the Google Cloud Vision API, where we craft convincing examples with just a few hundred queries. 
Finally, we demonstrate that our approach outperforms the state of the art when facing strong defenses: Our attack scored second place in the targeted attack track of the NeurIPS 2018 Adversarial Vision Challenge.", "Deep learning takes advantage of large datasets and computationally efficient training algorithms to outperform other approaches at various machine learning tasks. However, imperfections in the training phase of deep neural networks make them vulnerable to adversarial samples: inputs crafted by adversaries with the intent of causing deep neural networks to misclassify. In this work, we formalize the space of adversaries against deep neural networks (DNNs) and introduce a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs. In an application to computer vision, we show that our algorithms can reliably produce samples correctly classified by human subjects but misclassified in specific targets by a DNN with a 97 adversarial success rate while only modifying on average 4.02 of the input features per sample. We then evaluate the vulnerability of different sample classes to adversarial perturbations by defining a hardness measure. Finally, we describe preliminary work outlining defenses against adversarial samples by defining a predictive measure of distance between a benign input and a target classification.", "Depending on how much information an adversary can access to, adversarial attacks can be classified as white-box attack and black-box attack. In both cases, optimization-based attack algorithms can achieve relatively low distortions and high attack success rates. However, they usually suffer from poor time and query complexities, thereby limiting their practical usefulness. In this work, we focus on the problem of developing efficient and effective optimization-based adversarial attack algorithms. 
In particular, we propose a novel adversarial attack framework for both white-box and black-box settings based on the non-convex Frank-Wolfe algorithm. We show in theory that the proposed attack algorithms are efficient with an @math convergence rate. The empirical results of attacking Inception V3 model and ResNet V2 model on the ImageNet dataset also verify the efficiency and effectiveness of the proposed algorithms. More specific, our proposed algorithms attain the highest attack success rate in both white-box and black-box attacks among all baselines, and are more time and query efficient than the state-of-the-art.", "It has been well demonstrated that adversarial examples, i.e., natural images with visually imperceptible perturbations added, generally exist for deep networks to fail on image classification. In this paper, we extend adversarial examples to semantic segmentation and object detection which are much more difficult. Our observation is that both segmentation and detection are based on classifying multiple targets on an image (e.g., the basic target is a pixel or a receptive field in segmentation, and an object proposal in detection), which inspires us to optimize a loss function over a set of pixels proposals for generating adversarial perturbations. Based on this idea, we propose a novel algorithm named Dense Adversary Generation (DAG), which generates a large family of adversarial examples, and applies to a wide range of state-of-the-art deep networks for segmentation and detection. We also find that the adversarial perturbations can be transferred across networks with different training data, based on different architectures, and even for different recognition tasks. In particular, the transferability across networks with the same architecture is more significant than in other cases. 
Besides, summing up heterogeneous perturbations often leads to better transfer performance, which provides an effective method of black-box adversarial attack.", "We show that the representation of an image in a deep neural network (DNN) can be manipulated to mimic those of other natural images, with only minor, imperceptible perturbations to the original image. Previous methods for generating adversarial images focused on image perturbations designed to produce erroneous class labels, while we concentrate on the internal layers of DNN representations. In this way our new class of adversarial images differs qualitatively from others. While the adversary is perceptually similar to one image, its internal representation appears remarkably similar to a different image, one from a different class, bearing little if any apparent similarity to the input; they appear generic and consistent with the space of natural images. This phenomenon raises questions about DNN representations, as well as the properties of natural images themselves.", "We propose an intriguingly simple method for the construction of adversarial images in the black-box setting. In constrast to the white-box scenario, constructing black-box adversarial images has the additional constraint on query budget, and efficient attacks remain an open problem to date. With only the mild assumption of continuous-valued confidence scores, our highly query-efficient algorithm utilizes the following simple iterative principle: we randomly sample a vector from a predefined orthonormal basis and either add or subtract it to the target image. Despite its simplicity, the proposed method can be used for both untargeted and targeted attacks -- resulting in previously unprecedented query efficiency in both settings. We demonstrate the efficacy and efficiency of our algorithm on several real world settings including the Google Cloud Vision API. 
We argue that our proposed algorithm should serve as a strong baseline for future black-box attacks, in particular because it is extremely fast and its implementation requires less than 20 lines of PyTorch code.", "Recent advances in machine learning, especially techniques such as deep neural networks, are promoting a range of high-stakes applications, including autonomous driving, which often relies on deep learning for perception. While deep learning for perception has been shown to be vulnerable to a host of subtle adversarial manipulations of images, end-to-end demonstrations of successful attacks, which manipulate the physical environment and result in physical consequences, are scarce. Moreover, attacks typically involve carefully constructed adversarial examples at the level of pixels. We demonstrate the first end-to-end attacks on autonomous driving in simulation, using simple physically realizable attacks: the painting of black lines on the road. These attacks target deep neural network models for end-to-end autonomous driving control. A systematic investigation shows that such attacks are surprisingly easy to engineer, and we describe scenarios (e.g., right turns) in which they are highly effective, and others that are less vulnerable (e.g., driving straight). Further, we use network deconvolution to demonstrate that the attacks succeed by inducing activation patterns similar to entirely different scenarios used in training.", "State-of-the-art deep neural networks have achieved impressive results on many image classification tasks. However, these same architectures have been shown to be unstable to small, well sought, perturbations of the images. Despite the importance of this phenomenon, no effective methods have been proposed to accurately compute the robustness of state-of-the-art deep classifiers to such perturbations on large-scale datasets. 
In this paper, we fill this gap and propose the DeepFool algorithm to efficiently compute perturbations that fool deep networks, and thus reliably quantify the robustness of these classifiers. Extensive experimental results show that our approach outperforms recent methods in the task of computing adversarial perturbations and making classifiers more robust." ] }
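The iterative principle described in the query-efficient black-box attack abstract above is compact enough to sketch directly. The following is a minimal, illustrative implementation against a toy softmax "model"; the model, step size, and choice of the pixel basis are assumptions for demonstration (the paper also considers other orthonormal bases), not the authors' actual code:

```python
import numpy as np

def simba(prob_fn, x, true_label, eps=0.2, max_iters=200, seed=0):
    """Minimal sketch of the simple iterative black-box principle:
    walk a random permutation of an orthonormal basis (here the pixel
    basis) and add or subtract eps along each direction whenever doing
    so lowers the model's confidence in the true label."""
    rng = np.random.default_rng(seed)
    x = x.astype(float).copy()
    perm = rng.permutation(x.size)
    p = prob_fn(x)[true_label]
    for i in perm[:max_iters]:
        q = np.zeros_like(x)
        q.flat[i] = eps
        for delta in (q, -q):
            p_new = prob_fn(x + delta)[true_label]
            if p_new < p:          # keep the step only if confidence drops
                x = x + delta
                p = p_new
                break
    return x, p

# Toy "black-box" model: softmax over fixed linear logits (illustrative).
W = np.array([[1.0, -1.0], [-1.0, 1.0]])
def prob_fn(x):
    z = W @ x
    e = np.exp(z - z.max())
    return e / e.sum()

x0 = np.array([1.0, 0.0])          # confidently classified as class 0
adv, p_adv = simba(prob_fn, x0, true_label=0, eps=0.5, max_iters=2)
```

Note that the attacker only needs score-based access: each query returns class probabilities, matching the "continuous-valued confidence scores" assumption in the abstract.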
1905.11381
2947792938
Deep-learning based classification algorithms have been shown to be susceptible to adversarial attacks: minor changes to the input of classifiers can dramatically change their outputs, while being imperceptible to humans. In this paper, we present a simple hypothesis about a feature compression property of artificial intelligence (AI) classifiers and present theoretical arguments to show that this hypothesis successfully accounts for the observed fragility of AI classifiers to small adversarial perturbations. Drawing on ideas from information and coding theory, we propose a general class of defenses for detecting classifier errors caused by abnormally small input perturbations. We further show theoretical guarantees for the performance of this detection method. We present experimental results with (a) a voice recognition system, and (b) a digit recognition system using the MNIST database, to demonstrate the effectiveness of the proposed defense methods. The ideas in this paper are motivated by a simple analogy between AI classifiers and the standard Shannon model of a communication system.
In this paper, based on the feature compression properties of deep learning systems, we propose a new, rigorous theoretical understanding of adversarial phenomena. Our explanation is distinct from previous work. Compared with @cite_18 @cite_14 , which are empirical, our results are more rigorous. The results in @cite_7 apply only to linear classifiers, while our explanation holds for both linear and nonlinear classifiers. In @cite_20 , the authors exploited the curvature of the classifiers' decision boundary, while we only utilize the fact that classifiers always compress high-dimensional inputs to low-dimensional latent codes before making any decision. Our results also differ from @cite_3 , where the inputs were required to satisfy a sparse representation model, an assumption we do not need. Our theoretical explanation applies to both targeted and untargeted attacks, and is based on a very intuitive and ubiquitous assumption, namely the feature compression property.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_7", "@cite_3", "@cite_20" ], "mid": [ "1673923490", "1945616565", "2513314332", "", "2963467071" ], "abstract": [ "Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high level units and random linear combinations of high level units, according to various methods of unit analysis. It suggests that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks. Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extent. We can cause the network to misclassify an image by applying a certain imperceptible perturbation, which is found by maximizing the network's prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, that was trained on a different subset of the dataset, to misclassify the same input.", "Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. 
This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.", "Deep neural networks have been shown to suffer from a surprising weakness: their classification outputs can be changed by small, non-random perturbations of their inputs. This adversarial example phenomenon has been explained as originating from deep networks being \"too linear\" (Goodfellow et al., 2014). We show here that the linear explanation of adversarial examples presents a number of limitations: the formal argument is not convincing, linear classifiers do not always suffer from the phenomenon, and when they do their adversarial examples are different from the ones affecting deep networks. We propose a new perspective on the phenomenon. We argue that adversarial examples exist when the classification boundary lies close to the submanifold of sampled data, and present a mathematical analysis of this new perspective in the linear case. We define the notion of adversarial strength and show that it can be reduced to the deviation angle between the classifier considered and the nearest centroid classifier. Then, we show that the adversarial strength can be made arbitrarily high independently of the classification performance due to a mechanism that we call boundary tilting. This result leads us to defining a new taxonomy of adversarial examples. 
Finally, we show that the adversarial strength observed in practice is directly dependent on the level of regularisation used and the strongest adversarial examples, symptomatic of overfitting, can be avoided by using a proper level of regularisation.", "", "Several recent works have shown that state-of-the-art classifiers are vulnerable to worst-case (i.e., adversarial) perturbations of the datapoints. On the other hand, it has been empirically observed that these same classifiers are relatively robust to random noise. In this paper, we propose to study a semi-random noise regime that generalizes both the random and worst-case noise regimes. We propose the first quantitative analysis of the robustness of nonlinear classifiers in this general noise regime. We establish precise theoretical bounds on the robustness of classifiers in this general regime, which depend on the curvature of the classifier's decision boundary. Our bounds confirm and quantify the empirical observations that classifiers satisfying curvature constraints are robust to random noise. Moreover, we quantify the robustness of classifiers in terms of the subspace dimension in the semi-random noise regime, and show that our bounds remarkably interpolate between the worst-case and random noise regimes. We perform experiments and show that the derived bounds provide very accurate estimates when applied to various state-of-the-art deep neural networks and datasets. This result suggests bounds on the curvature of the classifiers' decision boundaries that we support experimentally, and more generally offers important insights onto the geometry of high dimensional classification problems." ] }
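The "simple and fast method of generating adversarial examples" mentioned in the linearity abstract above is the fast gradient sign method. For a binary logistic-regression classifier the input gradient is available in closed form, so the one-step attack can be sketched without autodiff; the weights, input, and step size below are illustrative assumptions, not values from any of the cited papers:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_logistic(w, b, x, y, eps):
    """One fast-gradient-sign step for binary logistic regression.
    For the loss L = -log p(y | x), the input gradient is closed-form:
    dL/dx = (sigmoid(w.x + b) - y) * w, and the attack moves eps in
    its sign direction -- a direct consequence of the model's linearity."""
    grad = (sigmoid(w @ x + b) - y) * w
    return x + eps * np.sign(grad)

# Illustrative weights and a clean input that is correctly classified.
w = np.array([2.0, -1.0]); b = 0.0
x = np.array([1.0, 1.0]);  y = 1.0     # p(y=1 | x) > 0.5 before the attack
x_adv = fgsm_logistic(w, b, x, y, eps=0.3)
```

Because the perturbation is eps times a sign vector, its L-infinity norm is exactly eps, yet for a linear model the induced change in the logit grows with the input dimension, which is the core of the linearity explanation.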
1905.11353
2947080931
How to optimally dispatch orders to vehicles and how to trade off between immediate and future returns are fundamental questions for a typical ride-hailing platform. We model ride-hailing as a large-scale parallel ranking problem and study the joint decision-making task of order dispatching and fleet management in online ride-hailing platforms. This task brings unique challenges in four aspects. First, to enable a huge number of vehicles to act and learn efficiently and robustly, we treat each region cell as an agent and build a multi-agent reinforcement learning framework. Second, to coordinate the agents toward long-term benefits, we leverage the geographical hierarchy of the region grids to perform hierarchical reinforcement learning. Third, to deal with the heterogeneous and varying action space of joint order dispatching and fleet management, we design the action as a ranking weight vector that ranks and selects a specific order or fleet-management destination in a unified formulation. Fourth, to model the multi-scale nature of the ride-hailing platform, we conduct the decision-making process in a hierarchical way, where a multi-head attention mechanism is utilized to incorporate the impacts of neighboring agents and capture the key agent at each scale. The whole novel framework is named CoRide. Extensive experiments based on real-world data from multiple cities, as well as analytic synthetic data, demonstrate that CoRide outperforms strong baselines in terms of platform revenue and user experience on the task of city-wide hybrid order dispatching and fleet management. This work provides not only a solution for current online ride-hailing platforms, but also an advanced artificial intelligence technique for the future, especially when large-scale unmanned ground vehicles go into service.
Hierarchical Reinforcement Learning Hierarchical reinforcement learning (HRL) is a promising approach for extending traditional reinforcement learning (RL) methods to tasks with long-term dependencies or multi-level interaction patterns @cite_32 @cite_6 . Recent works have shown that several interesting and standout results can be obtained by training multi-level hierarchical policies in a multi-task setup @cite_28 @cite_10 or by applying hierarchical settings to sparse-reward problems @cite_43 @cite_41 .
{ "cite_N": [ "@cite_28", "@cite_41", "@cite_32", "@cite_6", "@cite_43", "@cite_10" ], "mid": [ "2765602917", "2949267040", "", "2121517924", "2788781499", "2793351326" ], "abstract": [ "We develop a meta-learning approach for learning hierarchically structured policies, improving sample efficiency on unseen tasks through the use of shared primitives---policies that are executed for large numbers of timesteps. Specifically, a set of primitives are shared within a distribution of tasks, and are switched between by task-specific policies. We provide a concrete metric for measuring the strength of such hierarchies, leading to an optimization problem for quickly reaching high reward on unseen tasks. We then present an algorithm to solve this problem end-to-end through the use of any off-the-shelf reinforcement learning method, by repeatedly sampling new tasks and resetting task-specific policies. We successfully discover meaningful motor primitives for the directional movement of four-legged robots, solely by interacting with distributions of mazes. We also demonstrate the transferability of primitives to solve long-timescale sparse-reward obstacle courses, and we enable 3D humanoid robots to robustly walk and crawl with the same policy.", "We introduce FeUdal Networks (FuNs): a novel architecture for hierarchical reinforcement learning. Our approach is inspired by the feudal reinforcement learning proposal of Dayan and Hinton, and gains power and efficacy by decoupling end-to-end learning across multiple levels -- allowing it to utilise different resolutions of time. Our framework employs a Manager module and a Worker module. The Manager operates at a lower temporal resolution and sets abstract goals which are conveyed to and enacted by the Worker. The Worker generates primitive actions at every tick of the environment. 
The decoupled structure of FuN conveys several benefits -- in addition to facilitating very long timescale credit assignment it also encourages the emergence of sub-policies associated with different goals set by the Manager. These properties allow FuN to dramatically outperform a strong baseline agent on tasks that involve long-term credit assignment or memorisation. We demonstrate the performance of our proposed system on a range of tasks from the ATARI suite and also from a 3D DeepMind Lab environment.", "", "This paper presents a new approach to hierarchical reinforcement learning based on decomposing the target Markov decision process (MDP) into a hierarchy of smaller MDPs and decomposing the value function of the target MDP into an additive combination of the value functions of the smaller MDPs. The decomposition, known as the MAXQ decomposition, has both a procedural semantics--as a subroutine hierarchy--and a declarative semantics--as a representation of the value function of a hierarchical policy. MAXQ unifies and extends previous work on hierarchical reinforcement learning by Singh, Kaelbling, and Dayan and Hinton. It is based on the assumption that the programmer can identify useful subgoals and define subtasks that achieve these subgoals. By defining such subgoals, the programmer constrains the set of policies that need to be considered during reinforcement learning. The MAXQ value function decomposition can represent the value function of any policy that is consistent with the given hierarchy. The decomposition also creates opportunities to exploit state abstractions, so that individual MDPs within the hierarchy can ignore large parts of the state space. This is important for the practical application of the method. This paper defines the MAXQ hierarchy, proves formal results on its representational power, and establishes five conditions for the safe use of state abstractions. 
The paper presents an online model-free learning algorithm, MAXQ-Q, and proves that it converges with probability 1 to a kind of locally-optimal policy known as a recursively optimal policy, even in the presence of the five kinds of state abstraction. The paper evaluates the MAXQ representation and MAXQ-Q through a series of experiments in three domains and shows experimentally that MAXQ-Q (with state abstractions) converges to a recursively optimal policy much faster than flat Q learning. The fact that MAXQ learns a representation of the value function has an important benefit: it makes it possible to compute and execute an improved, non-hierarchical policy via a procedure similar to the policy improvement step of policy iteration. The paper demonstrates the effectiveness of this nonhierarchical execution experimentally. Finally, the paper concludes with a comparison to related work and a discussion of the design tradeoffs in hierarchical reinforcement learning.", "We propose Scheduled Auxiliary Control (SAC-X), a new learning paradigm in the context of Reinforcement Learning (RL). SAC-X enables learning of complex behaviors - from scratch - in the presence of multiple sparse reward signals. To this end, the agent is equipped with a set of general auxiliary tasks, that it attempts to learn simultaneously via off-policy RL. The key idea behind our method is that active (learned) scheduling and execution of auxiliary policies allows the agent to efficiently explore its environment - enabling it to excel at sparse reward RL. Our experiments in several challenging robotic manipulation settings demonstrate the power of our approach.", "Continuous action policy search, the search for efficient policies in continuous control tasks, is currently the focus of intensive research driven both by the recent success of deep reinforcement learning algorithms and by the emergence of competitors based on evolutionary algorithms. 
In this paper, we present a broad survey of policy search methods, incorporating into a common big picture these very different approaches as well as alternatives such as Bayesian Optimization and directed exploration methods. The main message of this overview is in the relationship between the families of methods, but we also outline some factors underlying sample efficiency properties of the various approaches. Besides, to keep this survey as short and didactic as possible, we do not go into the details of mathematical derivations of the elementary algorithms." ] }
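The unified ranking-weight action described in the CoRide abstract above can be sketched in a few lines: the agent outputs a weight vector, every candidate (an open order or a fleet-management destination) is scored by a dot product, and the top-ranked candidate is executed. The candidate features and weights here are invented for illustration and are not the paper's actual state or action design:

```python
import numpy as np

def rank_and_select(weight, candidate_feats):
    """Score each candidate by a dot product with the agent's ranking
    weight vector and select the top-ranked one. Because orders and
    fleet-management destinations share one feature space, this gives a
    single formulation for a heterogeneous, varying-size action set."""
    scores = candidate_feats @ weight
    return int(np.argmax(scores)), scores

# Hypothetical 2-D candidate features (e.g. immediate reward, demand signal).
feats = np.array([[1.0, 0.0],    # a nearby short order
                  [0.0, 1.0],    # repositioning toward a hot region
                  [0.5, 0.5]])   # a mixed option
idx, scores = rank_and_select(np.array([0.2, 0.8]), feats)
```

With this weighting, the agent prefers the repositioning candidate (index 1), showing how a single weight vector trades off immediate returns against future demand.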