aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1711.00092 | 2741280230 | Online argumentative dialog is a rich source of information on popular beliefs and opinions that could be useful to companies as well as governmental or public policy agencies. Compact, easy to read, summaries of these dialogues would thus be highly valuable. A priori, it is not even clear what form such a summary should take. Previous work on summarization has primarily focused on summarizing written texts, where the notion of an abstract of the text is well defined. We collect gold standard training data consisting of five human summaries for each of 161 dialogues on the topics of Gay Marriage, Gun Control and Abortion. We present several different computational models aimed at identifying segments of the dialogues whose content should be used for the summary, using linguistic features and Word2vec features with both SVMs and Bidirectional LSTMs. We show that we can identify the most important arguments by using the dialog context with a best F-measure of 0.74 for gun control, 0.71 for gay marriage, and 0.67 for abortion. | Dialog Summarization. To the best of our knowledge, none of the previous approaches have focused on debate dialog summarization. Prior research on spoken dialog summarization has explored lexical features and information specific to meetings such as action items, speaker status, and structural discourse features @cite_8 @cite_46 @cite_47 @cite_18 @cite_22 . In contrast to information content, other work examines how social phenomena such as politeness level affect summarization. Emotional information has also been observed in summaries of professional chats discussing technology @cite_5 . Other approaches use semantic similarity metrics to identify the most central or important utterances of a spoken dialog using the Switchboard corpus @cite_32 . Dialog structure and prosodic features have been studied for finding patterns of importance and opinion summarization on Switchboard conversations @cite_25 @cite_15 . 
Additional parallel work summarizes email thread conversations using conversational features and dialog acts specific to the email domain @cite_16 @cite_34 . | {
"cite_N": [
"@cite_47",
"@cite_18",
"@cite_22",
"@cite_8",
"@cite_15",
"@cite_32",
"@cite_34",
"@cite_5",
"@cite_46",
"@cite_16",
"@cite_25"
],
"mid": [
"",
"1558643924",
"",
"2015895495",
"",
"2014571624",
"2250749132",
"2096452218",
"2044120473",
"2033839684",
"2168248941"
],
"abstract": [
"",
"This paper provides a progress report on ICSI's Meeting Project, including both the data collected and annotated as part of the project, as well as the research lines such materials support. We include a general description of the official ICSI Meeting Corpus, as currently available through the Linguistic Data Consortium, discuss some of the existing and planned annotations which augment the basic transcripts provided there, and describe several research efforts that make use of these materials. The corpus supports wide-ranging efforts, from low-level processing of the audio signal (including automatic speech transcription, speaker tracking, and work on far-field acoustics) to higher-level analyses of meeting structure, content, and interactions (such as topic and sentence segmentation, and automatic detection of dialogue acts and meeting hot spots).",
"",
"Automatic summarization of open domain spoken dialogues is a new research area. This paper introduces the task, the challenges involved, and presents an approach to obtain automatic extract summaries for multi-party dialogues of four different genres, without any restriction on domain. We address the following issues which are intrinsic to spoken dialogue summarization and typically can be ignored when summarizing written text such as newswire data: (i) detection and removal of speech disfluencies; (ii) detection and insertion of sentence boundaries; (iii) detection and linking of cross-speaker information units (question-answer pairs). A global system evaluation using a corpus of 23 relevance annotated dialogues containing 80 topical segments shows that for the two more informal genres, our summarization system using dialogue specific components significantly outperforms a baseline using TFIDF term weighting with maximum marginal relevance ranking (MMR).",
"",
"We present a novel approach to spoken dialogue summarization. Our system employs a set of semantic similarity metrics using the noun portion of WordNet as a knowledge source. So far, the noun senses have been disambiguated manually. The algorithm aims to extract utterances carrying the essential content of dialogues. We evaluate the system on 20 Switchboard dialogues. The results show that our system outperforms LEAD, RANDOM and TF*IDF baselines.",
"In this paper, we present a novel supervised approach to the problem of summarizing email conversations and modeling dialogue acts. We assume that there is a relationship between dialogue acts and important sentences. Based on this assumption, we introduce a sequential graphical model approach which simultaneously summarizes email conversation and models dialogue acts. We compare our model with sequential and non-sequential models, which independently conduct the tasks of extractive summarization and dialogue act modeling. An empirical evaluation shows that our approach significantly outperforms all baselines in classifying correct summary sentences without losing performance on dialogue act modeling task.",
"This paper describes a summarization system for technical chats and emails on the Linux kernel. To reflect the complexity and sophistication of the discussions, they are clustered according to subtopic structure on the sub-message level, and immediate responding pairs are identified through machine learning methods. A resulting summary consists of one or more mini-summaries, each on a subtopic from the discussion.",
"We have explored the usefulness of incorporating speech and discourse features in an automatic speech summarization system applied to meeting recordings from the ICSI Meetings corpus. By analyzing speaker activity, turn-taking and discourse cues, we hypothesize that such a system can outperform solely text-based methods inherited from the field of text summarization. The summarization methods are described, two evaluation methods are applied and compared, and the results clearly show that utilizing such features is advantageous and efficient. Even simple methods relying on discourse cues and speaker activity can outperform text summarization approaches.",
"In this paper we describe research on summarizing conversations in the meetings and emails domains. We introduce a conversation summarization system that works in multiple domains utilizing general conversational features, and compare our results with domain-dependent systems for meeting and email data. We find that by treating meetings and emails as conversations with general conversational features in common, we can achieve competitive results with state-of-the-art systems that rely on more domain-specific features.",
"This paper presents a pilot study of opinion summarization on conversations. We create a corpus containing extractive and abstractive summaries of speaker's opinion towards a given topic using 88 telephone conversations. We adopt two methods to perform extractive summarization. The first one is a sentence-ranking method that linearly combines scores measured from different aspects including topic relevance, subjectivity, and sentence importance. The second one is a graph-based method, which incorporates topic and sentiment information, as well as additional information about sentence-to-sentence relations extracted based on dialogue structure. Our evaluation results show that both methods significantly outperform the baseline approach that extracts the longest utterances. In particular, we find that incorporating dialogue structure in the graph-based method contributes to the improved system performance."
]
} |
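The semantic-similarity approach mentioned in the related work above (identifying the most central utterances of a dialog) can be sketched as follows. This is a minimal illustration, not the cited systems' method: it uses a bag-of-words cosine measure in place of the WordNet-based metrics described there, and the toy dialog is an invented example.

```python
from collections import Counter
import math

def cosine(a, b):
    # Cosine similarity between two bag-of-words Counters.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def most_central(utterances):
    # Score each utterance by its summed similarity to all other
    # utterances and return the index of the most central one.
    bows = [Counter(u.lower().split()) for u in utterances]
    def centrality(i):
        return sum(cosine(bows[i], bows[j]) for j in range(len(bows)) if j != i)
    return max(range(len(bows)), key=centrality)

dialog = [
    "gun control laws reduce crime",
    "strict gun laws do not reduce gun crime",
    "the weather is nice today",
]
print(most_central(dialog))  # picks an on-topic utterance, not the off-topic one
```

The off-topic third utterance shares no vocabulary with the others, so it can never be selected as the dialog's central content.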
1711.00092 | 2741280230 | Online argumentative dialog is a rich source of information on popular beliefs and opinions that could be useful to companies as well as governmental or public policy agencies. Compact, easy to read, summaries of these dialogues would thus be highly valuable. A priori, it is not even clear what form such a summary should take. Previous work on summarization has primarily focused on summarizing written texts, where the notion of an abstract of the text is well defined. We collect gold standard training data consisting of five human summaries for each of 161 dialogues on the topics of Gay Marriage, Gun Control and Abortion. We present several different computational models aimed at identifying segments of the dialogues whose content should be used for the summary, using linguistic features and Word2vec features with both SVMs and Bidirectional LSTMs. We show that we can identify the most important arguments by using the dialog context with a best F-measure of 0.74 for gun control, 0.71 for gay marriage, and 0.67 for abortion. | Summarization. Document summarization is a mature area of NLP and hence spans a vast range of approaches. Graph- and clustering-based systems compute sentence importance from inter- and intra-document sentence similarities @cite_19 @cite_35 @cite_28 . @cite_48 use a greedy approach based on Maximal Marginal Relevance. @cite_11 reformulate this as a dynamic programming problem, providing a knapsack-based solution. The submodular approach of @cite_12 produces a summary by maximizing an objective function that includes coverage and diversity. | {
"cite_N": [
"@cite_35",
"@cite_28",
"@cite_48",
"@cite_19",
"@cite_12",
"@cite_11"
],
"mid": [
"",
"",
"2083305840",
"1525595230",
"2144933361",
"2152992673"
],
"abstract": [
"",
"",
"This paper presents a method for combining query-relevance with information-novelty in the context of text retrieval and summarization. The Maximal Marginal Relevance (MMR) criterion strives to reduce redundancy while maintaining query relevance in re-ranking retrieved documents and in selecting appropriate passages for text summarization. Preliminary results indicate some benefits for MMR diversity ranking in document retrieval and in single document summarization. The latter are borne out by the recent results of the SUMMAC conference in the evaluation of summarization systems. However, the clearest advantage is demonstrated in constructing non-redundant multi-document summaries, where MMR results are clearly superior to non-MMR passage selection.",
"In this paper, the authors introduce TextRank, a graph-based ranking model for text processing, and show how this model can be successfully used in natural language applications.",
"We design a class of submodular functions meant for document summarization tasks. These functions each combine two terms, one which encourages the summary to be representative of the corpus, and the other which positively rewards diversity. Critically, our functions are monotone nondecreasing and submodular, which means that an efficient scalable greedy optimization scheme has a constant factor guarantee of optimality. When evaluated on DUC 2004-2007 corpora, we obtain better than existing state-of-art results in both generic and query-focused document summarization. Lastly, we show that several well-established methods for document summarization correspond, in fact, to submodular function optimization, adding further evidence that submodular functions are a natural fit for document summarization.",
"In this work we study the theoretical and empirical properties of various global inference algorithms for multi-document summarization. We start by defining a general framework for inference in summarization. We then present three algorithms: The first is a greedy approximate method, the second a dynamic programming approach based on solutions to the knapsack problem, and the third is an exact algorithm that uses an Integer Linear Programming formulation of the problem. We empirically evaluate all three algorithms and show that, relative to the exact solution, the dynamic programming algorithm provides near optimal results with preferable scaling properties."
]
} |
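The greedy MMR criterion surveyed in the row above can be sketched in a few lines: at each step, pick the sentence that maximizes a weighted trade-off between relevance to the query and redundancy with the summary built so far. This is a generic sketch under assumed toy inputs (Jaccard similarity over word sets), not any cited system's implementation.

```python
def mmr_summarize(sentences, sim, query_sim, k, lam=0.7):
    """Greedy Maximal Marginal Relevance: repeatedly select the sentence that
    balances query relevance against redundancy with already-selected ones."""
    selected, remaining = [], list(range(len(sentences)))
    while remaining and len(selected) < k:
        def score(i):
            redundancy = max((sim(i, j) for j in selected), default=0.0)
            return lam * query_sim(i) - (1 - lam) * redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy example: two near-duplicate relevant sentences plus one distinct sentence.
docs = [
    {"gun", "control", "laws", "reduce", "crime"},
    {"gun", "control", "laws", "reduce", "deaths"},
    {"background", "checks", "gun"},
]
query = {"gun", "laws"}
jac = lambda a, b: len(a & b) / len(a | b)
order = mmr_summarize(docs, lambda i, j: jac(docs[i], docs[j]),
                      lambda i: jac(docs[i], query), k=2)
```

With the redundancy penalty active, the second pick skips the near-duplicate of the first sentence in favor of the more diverse third one, which is exactly the behavior MMR is designed to produce.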
1711.00138 | 2765615734 | Deep reinforcement learning (deep RL) agents have achieved remarkable success in a broad range of game-playing and continuous control tasks. While these agents are effective at maximizing rewards, it is often unclear what strategies they use to do so. In this paper, we take a step toward explaining deep RL agents through a case study in three Atari 2600 environments. In particular, we focus on understanding agents in terms of their visual attentional patterns during decision making. To this end, we introduce a method for generating rich saliency maps and use it to explain 1) what strong agents attend to 2) whether agents are making decisions for the right or wrong reasons, and 3) how agents evolve during the learning phase. We also test our method on non-expert human subjects and find that it improves their ability to reason about these agents. Our techniques are general and, though we focus on Atari, our long-term objective is to produce tools that explain any deep RL policy. | More recently, there has been work on analyzing execution traces of an RL agent in order to extract explanations @cite_21 . A problem with this approach is that it relies heavily on hand-crafted state features which are semantically meaningful to humans. This is impractical for vision-based applications, where agents must learn directly from pixels. | {
"cite_N": [
"@cite_21"
],
"mid": [
"2594336441"
],
"abstract": [
"Shared expectations and mutual understanding are critical facets of teamwork. Achieving these in human-robot collaborative contexts can be especially challenging, as humans and robots are unlikely to share a common language to convey intentions, plans, or justifications. Even in cases where human co-workers can inspect a robot's control code, and particularly when statistical methods are used to encode control policies, there is no guarantee that meaningful insights into a robot's behavior can be derived or that a human will be able to efficiently isolate the behaviors relevant to the interaction. We present a series of algorithms and an accompanying system that enables robots to autonomously synthesize policy descriptions and respond to both general and targeted queries by human collaborators. We demonstrate applicability to a variety of robot controller types including those that utilize conditional logic, tabular reinforcement learning, and deep reinforcement learning, synthesizing informative policy descriptions for collaborators and facilitating fault diagnosis by non-experts."
]
} |
1711.00138 | 2765615734 | Deep reinforcement learning (deep RL) agents have achieved remarkable success in a broad range of game-playing and continuous control tasks. While these agents are effective at maximizing rewards, it is often unclear what strategies they use to do so. In this paper, we take a step toward explaining deep RL agents through a case study in three Atari 2600 environments. In particular, we focus on understanding agents in terms of their visual attentional patterns during decision making. To this end, we introduce a method for generating rich saliency maps and use it to explain 1) what strong agents attend to 2) whether agents are making decisions for the right or wrong reasons, and 3) how agents evolve during the learning phase. We also test our method on non-expert human subjects and find that it improves their ability to reason about these agents. Our techniques are general and, though we focus on Atari, our long-term objective is to produce tools that explain any deep RL policy. | Recent work by @cite_16 has developed tools for explaining deep RL policies in visual domains. Similar to our work, the authors use the Atari 2600 environments as interpretable testbeds. Their key contribution is a method of approximating the behavior of deep RL policies via Semi-Aggregated Markov Decision Processes (SAMDPs). They use the more interpretable SAMDPs to gain insights about the higher-level temporal structure of the policy. | {
"cite_N": [
"@cite_16"
],
"mid": [
"2950708852"
],
"abstract": [
"In recent years there is a growing interest in using deep representations for reinforcement learning. In this paper, we present a methodology and tools to analyze Deep Q-networks (DQNs) in a non-blind matter. Using our tools we reveal that the features learned by DQNs aggregate the state space in a hierarchical fashion, explaining its success. Moreover we are able to understand and describe the policies learned by DQNs for three different Atari2600 games and suggest ways to interpret, debug and optimize of deep neural networks in Reinforcement Learning."
]
} |
1711.00138 | 2765615734 | Deep reinforcement learning (deep RL) agents have achieved remarkable success in a broad range of game-playing and continuous control tasks. While these agents are effective at maximizing rewards, it is often unclear what strategies they use to do so. In this paper, we take a step toward explaining deep RL agents through a case study in three Atari 2600 environments. In particular, we focus on understanding agents in terms of their visual attentional patterns during decision making. To this end, we introduce a method for generating rich saliency maps and use it to explain 1) what strong agents attend to 2) whether agents are making decisions for the right or wrong reasons, and 3) how agents evolve during the learning phase. We also test our method on non-expert human subjects and find that it improves their ability to reason about these agents. Our techniques are general and, though we focus on Atari, our long-term objective is to produce tools that explain any deep RL policy. | Gradient methods aim to understand what features of a DNN's input are most salient to its output by using variants of the chain rule. The simplest approach is to take the Jacobian with respect to the output of interest @cite_2 . Unfortunately, the Jacobian does not usually produce human-interpretable saliency maps. Thus several variants have emerged, aimed at modifying gradients to obtain more meaningful saliency. These variants include Guided Backpropagation @cite_12 , Excitation Backpropagation @cite_4 , and DeepLIFT @cite_0 . | {
"cite_N": [
"@cite_0",
"@cite_4",
"@cite_12",
"@cite_2"
],
"mid": [
"2952688545",
"2951260882",
"2123045220",
"2962851944"
],
"abstract": [
"The purported \"black box\" nature of neural networks is a barrier to adoption in applications where interpretability is essential. Here we present DeepLIFT (Deep Learning Important FeaTures), a method for decomposing the output prediction of a neural network on a specific input by backpropagating the contributions of all neurons in the network to every feature of the input. DeepLIFT compares the activation of each neuron to its 'reference activation' and assigns contribution scores according to the difference. By optionally giving separate consideration to positive and negative contributions, DeepLIFT can also reveal dependencies which are missed by other approaches. Scores can be computed efficiently in a single backward pass. We apply DeepLIFT to models trained on MNIST and simulated genomic data, and show significant advantages over gradient-based methods. A detailed video tutorial on the method is at this http URL and code is at this http URL.",
"We aim to model the top-down attention of a Convolutional Neural Network (CNN) classifier for generating task-specific attention maps. Inspired by a top-down human visual attention model, we propose a new backpropagation scheme, called Excitation Backprop, to pass along top-down signals downwards in the network hierarchy via a probabilistic Winner-Take-All process. Furthermore, we introduce the concept of contrastive attention to make the top-down attention maps more discriminative. In experiments, we demonstrate the accuracy and generalizability of our method in weakly supervised localization tasks on the MS COCO, PASCAL VOC07 and ImageNet datasets. The usefulness of our method is further validated in the text-to-region association task. On the Flickr30k Entities dataset, we achieve promising performance in phrase localization by leveraging the top-down attention of a CNN model that has been trained on weakly labeled web images.",
"Most modern convolutional neural networks (CNNs) used for object recognition are built using the same principles: Alternating convolution and max-pooling layers followed by a small number of fully connected layers. We re-evaluate the state of the art for object recognition from small images with convolutional networks, questioning the necessity of different components in the pipeline. We find that max-pooling can simply be replaced by a convolutional layer with increased stride without loss in accuracy on several image recognition benchmarks. Following this finding -- and building on other recent work for finding simple network structures -- we propose a new architecture that consists solely of convolutional layers and yields competitive or state of the art performance on several object recognition datasets (CIFAR-10, CIFAR-100, ImageNet). To analyze the network we introduce a new variant of the \"deconvolution approach\" for visualizing features learned by CNNs, which can be applied to a broader range of network structures than existing approaches.",
"This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets). We consider two visualisation techniques, based on computing the gradient of the class score with respect to the input image. The first one generates an image, which maximises the class score [5], thus visualising the notion of the class, captured by a ConvNet. The second technique computes a class saliency map, specific to a given image and class. We show that such maps can be employed for weakly supervised object segmentation using classification ConvNets. Finally, we establish the connection between the gradient-based ConvNet visualisation methods and deconvolutional networks [13]."
]
} |
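The simplest gradient method described above, taking the Jacobian of the output with respect to the input, can be illustrated on a toy two-layer network where the chain rule is written out by hand. The network, its weights, and the random input are all invented for illustration; real saliency work applies this to trained deep models.

```python
import numpy as np

# Toy two-layer network: f(x) = w2 . relu(W1 x).
rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 16))
w2 = rng.normal(size=8)

def score(x):
    return w2 @ np.maximum(W1 @ x, 0.0)

def jacobian_saliency(x):
    # Chain rule through the ReLU: df/dx = W1^T (w2 * 1[W1 x > 0]).
    h = W1 @ x
    grad = W1.T @ (w2 * (h > 0))
    return np.abs(grad)  # saliency = magnitude of the input gradient

x = rng.normal(size=16)
sal = jacobian_saliency(x)
```

Because the gradient is exact away from ReLU kinks, the analytic saliency can be checked against central finite differences of `score`, which is a useful sanity test before trusting any saliency visualization.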
1711.00138 | 2765615734 | Deep reinforcement learning (deep RL) agents have achieved remarkable success in a broad range of game-playing and continuous control tasks. While these agents are effective at maximizing rewards, it is often unclear what strategies they use to do so. In this paper, we take a step toward explaining deep RL agents through a case study in three Atari 2600 environments. In particular, we focus on understanding agents in terms of their visual attentional patterns during decision making. To this end, we introduce a method for generating rich saliency maps and use it to explain 1) what strong agents attend to 2) whether agents are making decisions for the right or wrong reasons, and 3) how agents evolve during the learning phase. We also test our method on non-expert human subjects and find that it improves their ability to reason about these agents. Our techniques are general and, though we focus on Atari, our long-term objective is to produce tools that explain any deep RL policy. | The idea behind perturbation-based methods is to measure how a model's output changes when some of the input information is altered. For a simple example, borrowed from @cite_1 , consider a classifier which predicts +1 if the image contains a robin and -1 otherwise. Removing information from the part of the image which contains the robin should change the model's output, whereas doing so for other areas should not. However, choosing a perturbation which removes information without introducing any information can be difficult. | {
"cite_N": [
"@cite_1"
],
"mid": [
"2962981568"
],
"abstract": [
"As machine learning algorithms are increasingly applied to high impact yet high risk tasks, such as medical diagnosis or autonomous driving, it is critical that researchers can explain how such algorithms arrived at their predictions. In recent years, a number of image saliency methods have been developed to summarize where highly complex neural networks “look” in an image for evidence for their predictions. However, these techniques are limited by their heuristic nature and architectural constraints. In this paper, we make two main contributions: First, we propose a general framework for learning different kinds of explanations for any black box algorithm. Second, we specialise the framework to find the part of an image most responsible for a classifier decision. Unlike previous works, our method is model-agnostic and testable because it is grounded in explicit and interpretable image perturbations."
]
} |
1711.00138 | 2765615734 | Deep reinforcement learning (deep RL) agents have achieved remarkable success in a broad range of game-playing and continuous control tasks. While these agents are effective at maximizing rewards, it is often unclear what strategies they use to do so. In this paper, we take a step toward explaining deep RL agents through a case study in three Atari 2600 environments. In particular, we focus on understanding agents in terms of their visual attentional patterns during decision making. To this end, we introduce a method for generating rich saliency maps and use it to explain 1) what strong agents attend to 2) whether agents are making decisions for the right or wrong reasons, and 3) how agents evolve during the learning phase. We also test our method on non-expert human subjects and find that it improves their ability to reason about these agents. Our techniques are general and, though we focus on Atari, our long-term objective is to produce tools that explain any deep RL policy. | The simplest perturbation is to replace part of an input image with a gray square @cite_26 or region @cite_3 . A problem with this approach is that replacing pixels with a constant color introduces unwanted color and edge information. For example, adding a gray square might increase a classifier's confidence that the image contains an elephant. More recent approaches by @cite_9 and @cite_1 use masked interpolations between the original image @math and some other image @math , where @math is chosen to introduce as little new information as possible. | {
"cite_N": [
"@cite_9",
"@cite_26",
"@cite_1",
"@cite_3"
],
"mid": [
"",
"2952186574",
"2962981568",
"2282821441"
],
"abstract": [
"",
"Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we address both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. We also perform an ablation study to discover the performance contribution from different model layers. This enables us to find model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets.",
"As machine learning algorithms are increasingly applied to high impact yet high risk tasks, such as medical diagnosis or autonomous driving, it is critical that researchers can explain how such algorithms arrived at their predictions. In recent years, a number of image saliency methods have been developed to summarize where highly complex neural networks “look” in an image for evidence for their predictions. However, these techniques are limited by their heuristic nature and architectural constraints. In this paper, we make two main contributions: First, we propose a general framework for learning different kinds of explanations for any black box algorithm. Second, we specialise the framework to find the part of an image most responsible for a classifier decision. Unlike previous works, our method is model-agnostic and testable because it is grounded in explicit and interpretable image perturbations.",
"Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one. In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction. We also propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. We demonstrate the flexibility of these methods by explaining different models for text (e.g. random forests) and image classification (e.g. neural networks). We show the utility of explanations via novel experiments, both simulated and with human subjects, on various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and identifying why a classifier should not be trusted."
]
} |
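The constant-patch perturbation described in the row above (sliding a gray square over the image and recording the drop in the classifier's score) can be sketched directly. The "classifier" below is a deliberately transparent stand-in that only looks at the top-left quadrant, so the expected saliency pattern is known; it is an illustrative assumption, not any cited model.

```python
import numpy as np

def occlusion_map(image, score_fn, patch=4, fill=0.5):
    """Slide a constant-valued patch over the image and record, at each
    location, how much the classifier's score drops when that region is
    occluded (the gray-square perturbation)."""
    H, W = image.shape
    base = score_fn(image)
    sal = np.zeros((H, W))
    for r in range(0, H, patch):
        for c in range(0, W, patch):
            occluded = image.copy()
            occluded[r:r + patch, c:c + patch] = fill
            sal[r:r + patch, c:c + patch] = base - score_fn(occluded)
    return sal

# Toy "classifier" that only depends on the top-left 8x8 quadrant.
score_fn = lambda img: img[:8, :8].sum()
img = np.random.default_rng(1).random((16, 16))
sal = occlusion_map(img, score_fn, fill=0.0)
```

Occluding pixels outside the quadrant leaves the score untouched, so the map is zero there and positive inside, which mirrors the robin example: removing information the model uses changes its output, removing anything else does not. The choice of `fill` is exactly the weakness the section notes, since a constant patch itself injects color and edge information.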
1711.00199 | 2767032778 | Estimating the 6D pose of known objects is important for robots to interact with the real world. The problem is challenging due to the variety of objects as well as the complexity of a scene caused by clutter and occlusions between objects. In this work, we introduce PoseCNN, a new Convolutional Neural Network for 6D object pose estimation. PoseCNN estimates the 3D translation of an object by localizing its center in the image and predicting its distance from the camera. The 3D rotation of the object is estimated by regressing to a quaternion representation. We also introduce a novel loss function that enables PoseCNN to handle symmetric objects. In addition, we contribute a large scale video dataset for 6D object pose estimation named the YCB-Video dataset. Our dataset provides accurate 6D poses of 21 objects from the YCB dataset observed in 92 videos with 133,827 frames. We conduct extensive experiments on our YCB-Video dataset and the OccludedLINEMOD dataset to show that PoseCNN is highly robust to occlusions, can handle symmetric objects, and provide accurate pose estimation using only color images as input. When using depth data to further refine the poses, our approach achieves state-of-the-art results on the challenging OccludedLINEMOD dataset. Our code and dataset are available at this https URL. | In feature-based methods, local features are extracted from either points of interest or every pixel in the image and matched to features on the 3D models to establish the 2D-3D correspondences, from which 6D poses can be recovered @cite_31 @cite_5 @cite_16 @cite_4 . Feature-based methods are able to handle occlusions between objects. However, they require sufficient textures on the objects in order to compute the local features. To deal with texture-less objects, several methods are proposed to learn feature descriptors using machine learning techniques @cite_27 @cite_30 . 
A few approaches have been proposed to directly regress to 3D object coordinate location for each pixel to establish the 2D-3D correspondences @cite_34 @cite_1 @cite_7 . But 3D coordinate regression encounters ambiguities in dealing with symmetric objects. | {
"cite_N": [
"@cite_30",
"@cite_4",
"@cite_7",
"@cite_1",
"@cite_34",
"@cite_27",
"@cite_5",
"@cite_31",
"@cite_16"
],
"mid": [
"2516059803",
"",
"2472269674",
"2953350888",
"132147841",
"1909903157",
"",
"",
"2951900634"
],
"abstract": [
"In this paper we tackle the problem of estimating the 3D pose of object instances, using convolutional neural networks. State of the art methods usually solve the challenging problem of regression in angle space indirectly, focusing on learning discriminative features that are later fed into a separate architecture for 3D pose estimation. In contrast, we propose an end-to-end learning framework for directly regressing object poses by exploiting Siamese Networks. For a given image pair, we enforce a similarity measure between the representation of the sample images in the feature and pose space respectively, that is shown to boost regression performance. Furthermore, we argue that our pose-guided feature learning using our Siamese Regression Network generates more discriminative features that outperform the state of the art. Last, our feature learning formulation provides the ability of learning features that can perform under severe occlusions, demonstrating high performance on our novel hand-object dataset.",
"",
"In recent years, the task of estimating the 6D pose of object instances and complete scenes, i.e. camera localization, from a single input image has received considerable attention. Consumer RGB-D cameras have made this feasible, even for difficult, texture-less objects and scenes. In this work, we show that a single RGB image is sufficient to achieve visually convincing results. Our key concept is to model and exploit the uncertainty of the system at all stages of the processing pipeline. The uncertainty comes in the form of continuous distributions over 3D object coordinates and discrete distributions over object labels. We give three technical contributions. Firstly, we develop a regularized, auto-context regression framework which iteratively reduces uncertainty in object coordinate and object label predictions. Secondly, we introduce an efficient way to marginalize object coordinate distributions over depth. This is necessary to deal with missing depth information. Thirdly, we utilize the distributions over object labels to detect multiple objects simultaneously with a fixed budget of RANSAC hypotheses. We tested our system for object pose estimation and camera localization on commonly used data sets. We see a major improvement over competing systems.",
"Analysis-by-synthesis has been a successful approach for many tasks in computer vision, such as 6D pose estimation of an object in an RGB-D image which is the topic of this work. The idea is to compare the observation with the output of a forward process, such as a rendered image of the object of interest in a particular pose. Due to occlusion or complicated sensor noise, it can be difficult to perform this comparison in a meaningful way. We propose an approach that \"learns to compare\", while taking these difficulties into account. This is done by describing the posterior density of a particular object pose with a convolutional neural network (CNN) that compares an observed and rendered image. The network is trained with the maximum likelihood paradigm. We observe empirically that the CNN does not specialize to the geometry or appearance of specific objects, and it can be used with objects of vastly different shapes and appearances, and in different backgrounds. Compared to state-of-the-art, we demonstrate a significant improvement on two different datasets which include a total of eleven objects, cluttered background, and heavy occlusion.",
"This work addresses the problem of estimating the 6D Pose of specific objects from a single RGB-D image. We present a flexible approach that can deal with generic objects, both textured and texture-less. The key new concept is a learned, intermediate representation in form of a dense 3D object coordinate labelling paired with a dense class labelling. We are able to show that for a common dataset with texture-less objects, where template-based techniques are suitable and state of the art, our approach is slightly superior in terms of accuracy. We also demonstrate the benefits of our approach, compared to template-based techniques, in terms of robustness with respect to varying lighting conditions. Towards this end, we contribute a new ground truth dataset with 10k images of 20 objects captured each under three different lighting conditions. We demonstrate that our approach scales well with the number of objects and has capabilities to run fast.",
"Detecting poorly textured objects and estimating their 3D pose reliably is still a very challenging problem. We introduce a simple but powerful approach to computing descriptors for object views that efficiently capture both the object identity and 3D pose. By contrast with previous manifold-based approaches, we can rely on the Euclidean distance to evaluate the similarity between descriptors, and therefore use scalable Nearest Neighbor search methods to efficiently handle a large number of objects under a large range of poses. To achieve this, we train a Convolutional Neural Network to compute these descriptors by enforcing simple similarity and dissimilarity constraints between the descriptors. We show that our constraints nicely untangle the images from different objects and different views into clusters that are not only well-separated but also structured as the corresponding sets of poses: The Euclidean distance between descriptors is large when the descriptors are from different objects, and directly related to the distance between the poses when the descriptors are from the same object. These important properties allow us to outperform state-of-the-art object views representations on challenging RGB and RGB-D data.",
"",
"",
"We characterize the problem of pose estimation for rigid objects in terms of determining viewpoint to explain coarse pose and keypoint prediction to capture the finer details. We address both these tasks in two different settings - the constrained setting with known bounding boxes and the more challenging detection setting where the aim is to simultaneously detect and correctly estimate pose of objects. We present Convolutional Neural Network based architectures for these and demonstrate that leveraging viewpoint estimates can substantially improve local appearance based keypoint predictions. In addition to achieving significant improvements over state-of-the-art in the above tasks, we analyze the error modes and effect of object characteristics on performance to guide future efforts towards this goal."
]
} |
1711.00199 | 2767032778 | Estimating the 6D pose of known objects is important for robots to interact with the real world. The problem is challenging due to the variety of objects as well as the complexity of a scene caused by clutter and occlusions between objects. In this work, we introduce PoseCNN, a new Convolutional Neural Network for 6D object pose estimation. PoseCNN estimates the 3D translation of an object by localizing its center in the image and predicting its distance from the camera. The 3D rotation of the object is estimated by regressing to a quaternion representation. We also introduce a novel loss function that enables PoseCNN to handle symmetric objects. In addition, we contribute a large scale video dataset for 6D object pose estimation named the YCB-Video dataset. Our dataset provides accurate 6D poses of 21 objects from the YCB dataset observed in 92 videos with 133,827 frames. We conduct extensive experiments on our YCB-Video dataset and the OccludedLINEMOD dataset to show that PoseCNN is highly robust to occlusions, can handle symmetric objects, and provide accurate pose estimation using only color images as input. When using depth data to further refine the poses, our approach achieves state-of-the-art results on the challenging OccludedLINEMOD dataset. Our code and dataset are available at this https URL. | In this work, we combine the advantages of both template-based methods and feature-based methods in a deep learning framework, where the network combines bottom-up pixel-wise labeling with top-down object pose regression. Recently, the 6D object pose estimation problem has received more attention thanks to the competition in the Amazon Picking Challenge (APC). Several datasets and approaches have been introduced for the specific setting in the APC @cite_21 @cite_25 . Our network has the potential to be applied to the APC setting as long as the appropriate training data is provided. | {
"cite_N": [
"@cite_21",
"@cite_25"
],
"mid": [
"2221752211",
"2963678509"
],
"abstract": [
"An important logistics application of robotics involves manipulators that pick-and-place objects placed in warehouse shelves. A critical aspect of this task corresponds to detecting the pose of a known object in the shelf using visual data. Solving this problem can be assisted by the use of an RGBD sensor, which also provides depth information beyond visual data. Nevertheless, it remains a challenging problem since multiple issues need to be addressed, such as low illumination inside shelves, clutter, texture-less and reflective objects as well as the limitations of depth sensors. This letter provides a new rich dataset for advancing the state-of-the-art in RGBD-based 3D object pose estimation, which is focused on the challenges that arise when solving warehouse pick-and-place tasks. The publicly available dataset includes thousands of images and corresponding ground truth data for the objects used during the first Amazon Picking Challenge at different poses and clutter conditions. Each image is accompanied with ground truth information to assist in the evaluation of algorithms for object detection. To show the utility of the dataset, a recent algorithm for RGBD-based pose estimation is evaluated in this letter. Given the measured performance of the algorithm on the dataset, this letter shows how it is possible to devise modifications and improvements to increase the accuracy of pose estimation algorithms. This process can be easily applied to a variety of different methodologies for object pose detection and improve performance in the domain of warehouse pick-and-place.",
"Robot warehouse automation has attracted significant interest in recent years, perhaps most visibly in the Amazon Picking Challenge (APC) [1]. A fully autonomous warehouse pick-and-place system requires robust vision that reliably recognizes and locates objects amid cluttered environments, self-occlusions, sensor noise, and a large variety of objects. In this paper we present an approach that leverages multiview RGB-D data and self-supervised, data-driven learning to overcome those difficulties. The approach was part of the MIT-Princeton Team system that took 3rd- and 4th-place in the stowing and picking tasks, respectively at APC 2016. In the proposed approach, we segment and label multiple views of a scene with a fully convolutional neural network, and then fit pre-scanned 3D object models to the resulting segmentation to get the 6D object pose. Training a deep neural network for segmentation typically requires a large amount of training data. We propose a self-supervised method to generate a large labeled dataset without tedious manual segmentation. We demonstrate that our system can reliably estimate the 6D pose of objects under a variety of scenarios. All code, data, and benchmarks are available at http: apc.cs.princeton.edu"
]
} |
1711.00294 | 2766580344 | Crosstalk, also known by its Chinese name xiangsheng, is a traditional Chinese comedic performing art featuring jokes and funny dialogues, and one of China's most popular cultural elements. It is typically in the form of a dialogue between two performers for the purpose of bringing laughter to the audience, with one person acting as the leading comedian and the other as the supporting role. Though general dialogue generation has been widely explored in previous studies, it is unknown whether such entertaining dialogues can be automatically generated or not. In this paper, we for the first time investigate the possibility of automatic generation of entertaining dialogues in Chinese crosstalks. Given the utterance of the leading comedian in each dialogue, our task aims to generate the replying utterance of the supporting role. We propose a humor-enhanced translation model to address this task and human evaluation results demonstrate the efficacy of our proposed model. The feasibility of automatic entertaining dialogue generation is also verified. | The most closely related work is dialogue generation. Previous work in this field relies on rule-based methods, from learning generation rules from a set of authored labels or rules @cite_1 @cite_22 to building statistical models based on templates or heuristic rules @cite_34 @cite_12 . After the explosive growth of social networks, the large amount of conversation data has enabled data-driven approaches to dialogue generation. Research on statistical dialogue systems falls into two categories: 1) information retrieval (IR) based methods @cite_6 , and 2) statistical machine translation (SMT) based methods @cite_8 . IR based methods select suitable responses by ranking candidate responses. However, these methods have an obvious drawback: responses are drawn from a fixed response set, so it is not possible to produce new responses for special inputs.
SMT based methods treat response generation as an SMT problem on post-response parallel data. These methods are purely data-driven and can generate new responses. | {
"cite_N": [
"@cite_22",
"@cite_8",
"@cite_1",
"@cite_6",
"@cite_34",
"@cite_12"
],
"mid": [
"2160458012",
"10957333",
"2004637830",
"295828404",
"",
"1604513301"
],
"abstract": [
"This system demonstration paper presents IRIS (Informal Response Interactive System), a chat-oriented dialogue system based on the vector space model framework. The system belongs to the class of example-based dialogue systems and builds its chat capabilities on a dual search strategy over a large collection of dialogue samples. Additional strategies allowing for system adaptation and learning implemented over the same vector model space framework are also described and discussed.",
"We present a data-driven approach to generating responses to Twitter status posts, based on phrase-based Statistical Machine Translation. We find that mapping conversational stimuli onto responses is more difficult than translating between languages, due to the wider range of possible responses, the larger fraction of unaligned words/phrases, and the presence of large phrase pairs whose alignment cannot be further decomposed. After addressing these challenges, we compare approaches based on SMT and Information Retrieval in a human evaluation. We show that SMT outperforms IR on this task, and its output is preferred over actual human responses in 15% of cases. As far as we are aware, this is the first work to investigate the use of phrase-based SMT to directly translate a linguistic stimulus into an appropriate response.",
"The two current approaches to language generation, template-based and rule-based (linguistic) NLG, have limitations when applied to spoken dialogue systems, in part because they were developed for text generation. In this paper, we propose a new corpus-based approach to natural language generation, specifically designed for spoken dialogue systems.",
"Human computer conversation is regarded as one of the most difficult problems in artificial intelligence. In this paper, we address one of its key sub-problems, referred to as short text conversation, in which given a message from human, the computer returns a reasonable response to the message. We leverage the vast amount of short conversation data available on social media to study the issue. We propose formalizing short text conversation as a search problem at the first step, and employing state-of-the-art information retrieval (IR) techniques to carry out the task. We investigate the significance as well as the limitation of the IR approach. Our experiments demonstrate that the retrieval-based model can make the system behave rather \"intelligently\", when combined with a huge repository of conversation data from social media.",
"",
"In this paper we discuss the recent evolution of spoken dialog systems in commercial deployments. Yet based on a simple finite state machine design paradigm, dialog systems reached today a higher level of complexity. The availability of massive amounts of data during deployment led to the development of continuous optimization strategy pushing the design and development of spoken dialog applications from an art to science. At the same time new methods for evaluating the subjective caller experience are available. Finally we describe the inevitable evolution for spoken dialog applications from speech only to multimodal interaction."
]
} |
1711.00294 | 2766580344 | Crosstalk, also known by its Chinese name xiangsheng, is a traditional Chinese comedic performing art featuring jokes and funny dialogues, and one of China's most popular cultural elements. It is typically in the form of a dialogue between two performers for the purpose of bringing laughter to the audience, with one person acting as the leading comedian and the other as the supporting role. Though general dialogue generation has been widely explored in previous studies, it is unknown whether such entertaining dialogues can be automatically generated or not. In this paper, we for the first time investigate the possibility of automatic generation of entertaining dialogues in Chinese crosstalks. Given the utterance of the leading comedian in each dialogue, our task aims to generate the replying utterance of the supporting role. We propose a humor-enhanced translation model to address this task and human evaluation results demonstrate the efficacy of our proposed model. The feasibility of automatic entertaining dialogue generation is also verified. | More recently, neural network based methods have been applied in this field @cite_13 @cite_2 @cite_11 . In particular, deep reinforcement learning has been used to improve the quality of generated responses @cite_19 . Adversarial learning has also been applied in this field in recent years @cite_35 . @cite_32 introduced stochastic latent variables into an RNN model for the response generation problem. Neural network based methods are promising for dialogue generation. However, as mentioned in section 2, training a neural network model requires a large corpus. It is sometimes hard to obtain a large corpus in a specific domain, which limits their performance. Another line of related work is computational humor. Humor recognition or computation in natural language is still a challenging task.
Although understanding universal humor characteristics is almost impossible, there have been many attempts to capture the latent structure behind humor. Taylor used ontological semantics to detect humor. Yang identified several semantic structures behind humor and employed a computational approach to recognizing humor. Other studies have also investigated humor with spoken or multimodal signals @cite_24 . However, none of these works provides a systematic explanation of humor, not to mention recognizing humor in Chinese crosstalks. | {
"cite_N": [
"@cite_35",
"@cite_32",
"@cite_24",
"@cite_19",
"@cite_2",
"@cite_13",
"@cite_11"
],
"mid": [
"2581637843",
"2399880602",
"2038712753",
"2410983263",
"1847211030",
"889023230",
""
],
"abstract": [
"",
"Sequential data often possesses a hierarchical structure with complex dependencies between subsequences, such as found between the utterances in a dialogue. In an effort to model this kind of generative process, we propose a neural network-based generative architecture, with latent stochastic variables that span a variable number of time steps. We apply the proposed model to the task of dialogue response generation and compare it with recent neural network architectures. We evaluate the model performance through automatic evaluation metrics and by carrying out a human evaluation. The experiments demonstrate that our model improves upon recently proposed models and that the latent variables facilitate the generation of long outputs and maintain the context.",
"We analyze humorous spoken conversations from a classic comedy television show, FRIENDS, by examining acoustic-prosodic and linguistic features and their utility in automatic humor recognition. Using a simple annotation scheme, we automatically label speaker turns in our corpus that are followed by laughs as humorous and the rest as non-humorous. Our humor-prosody analysis reveals significant differences in prosodic characteristics (such as pitch, tempo, energy etc.) of humorous and non-humorous speech, even when accounted for the gender and speaker differences. Humor recognition was carried out using standard supervised learning classifiers, and shows promising results significantly above the baseline.",
"Recent neural models of dialogue generation offer great promise for generating responses for conversational agents, but tend to be shortsighted, predicting utterances one at a time while ignoring their influence on future outcomes. Modeling the future direction of a dialogue is crucial to generating coherent, interesting dialogues, a need which led traditional NLP models of dialogue to draw on reinforcement learning. In this paper, we show how to integrate these goals, applying deep reinforcement learning to model future reward in chatbot dialogue. The model simulates dialogues between two virtual agents, using policy gradient methods to reward sequences that display three useful conversational properties: informativity (non-repetitive turns), coherence, and ease of answering (related to forward-looking function). We evaluate our model on diversity, length as well as with human judges, showing that the proposed algorithm generates more interactive responses and manages to foster a more sustained conversation in dialogue simulation. This work marks a first step towards learning a neural conversational model based on the long-term success of dialogues.",
"In a conversation or a dialogue process, attention and intention play intrinsic roles. This paper proposes a neural network based approach that models the attention and intention processes. It essentially consists of three recurrent networks. The encoder network is a word-level model representing source side sentences. The intention network is a recurrent network that models the dynamics of the intention process. The decoder network is a recurrent network produces responses to the input from the source side. It is a language model that is dependent on the intention and has an attention mechanism to attend to particular source side words, when predicting a symbol in the response. The model is trained end-to-end without labeling data. Experiments show that this model generates natural responses to user inputs.",
"We investigate the task of building open domain, conversational dialogue systems based on large dialogue corpora using generative models. Generative models produce system responses that are autonomously generated word-by-word, opening up the possibility for realistic, flexible interactions. In support of this goal, we extend the recently proposed hierarchical recurrent encoder-decoder neural network to the dialogue domain, and demonstrate that this model is competitive with state-of-the-art neural language models and back-off n-gram models. We investigate the limitations of this and similar approaches, and show how its performance can be improved by bootstrapping the learning from a larger question-answer pair corpus and from pretrained word embeddings.",
""
]
} |
1710.11211 | 2765948580 | In this paper, we present a model set for designing human-robot collaboration (HRC) experiments. It targets a common scenario in HRC, which is the collaborative assembly of furniture, and it consists of a combination of standard components and custom designs. With this work, we aim at reducing the amount of work required to set up and reproduce HRC experiments, and we provide a unified framework to facilitate the comparison and integration of contributions to the field. The model set is designed to be modular, extendable, and easy to distribute. Importantly, it covers the majority of relevant research in HRC, and it allows tuning of a number of experimental variables that are particularly valuable to the field. Additionally, we provide a set of software libraries for perception, control and interaction, with the goal of encouraging other researchers to proactively contribute to the proposed work. | Historically, while benchmarking proved crucial to achieve scientific, replicable research, its applicability to robotics has been limited due to the complexity of the field. For this reason, previous work focused on establishing benchmarks in specific sub-fields of robotics research, e.g. manipulation @cite_8 , motion planning @cite_23 @cite_35 , navigation @cite_29 @cite_18 , service robotics @cite_9 @cite_36 or human-robot teamwork @cite_22 . Further, a number of robotic platforms have been designed with the specific purpose of mitigating the issue by fostering robotics research on shared hardware (e.g. PR2 @cite_11 , iCub @cite_7 , and Poppy @cite_2 ). Closest to this work are robot competitions @cite_9 @cite_39 @cite_15 @cite_46 , which propose a standard set of challenges to homogenize the evaluation of robot performance. | {
"cite_N": [
"@cite_35",
"@cite_18",
"@cite_22",
"@cite_7",
"@cite_8",
"@cite_36",
"@cite_46",
"@cite_29",
"@cite_9",
"@cite_39",
"@cite_23",
"@cite_2",
"@cite_15",
"@cite_11"
],
"mid": [
"14178847",
"",
"",
"2161222115",
"",
"",
"",
"1659737799",
"2485838970",
"2397333330",
"1974733785",
"2006625514",
"",
""
],
"abstract": [
"Motion planning is a key problem in robotics that is concerned with finding a path that satisfies a goal specification subject to constraints. In its simplest form, the solution to this problem consists of finding a path connecting two states, and the only constraint is to avoid collisions. Even for this version of the motion planning problem, there is no efficient solution for the general case [1]. The addition of differential constraints on robot motion or more general goal specifications makes motion planning even harder. Given its complexity, most planning algorithms forego completeness and optimality for slightly weaker notions such as resolution completeness, probabilistic completeness [2], and asymptotic optimality.",
"",
"",
"We describe a humanoid robot platform - the iCub - which was designed to support collaborative research in cognitive development through autonomous exploration and social interaction. The motivation for this effort is the conviction that significantly greater impact can be leveraged by adopting an open systems policy for software and hardware development. This creates the need for a robust humanoid robot that offers rich perceptuo-motor capabilities with many degrees of freedom, a cognitive capacity for learning and development, a software architecture that encourages reuse & easy integration, and a support infrastructure that fosters collaboration and sharing of resources. The iCub satisfies all of these needs in the guise of an open-system platform which is freely available and which has attracted a growing community of users and developers. To date, twenty iCubs each comprising approximately 5000 mechanical and electrical parts have been delivered to several research labs in Europe and to one in the USA.",
"",
"",
"",
"From mundane and repetitive tasks to assisting first responders in saving lives of victims in disaster scenarios, robots are expected to play an important role in our lives in the coming years. Despite recent advances in mobile robotic systems, lack of widely accepted performance metrics and standards hinder the progress in many application areas such as manufacturing, healthcare, and search and rescue. In this paper, we outline the importance of the development of standardized methods and objective performance evaluation benchmarking of existing and emerging robotic technologies. We provide a survey of significant past efforts by researchers and developers around the globe and discuss how we can leverage such efforts in advancing the state-of-the-art. Using an example of designing a ‘standard’ evaluation toolkit for robotic mapping, we illustrate some of the problems faced in developing objective performance metrics whilst accommodating the requirements and restrictions imposed by the intended domain of operation and other practical considerations.",
"",
"At the Amazon Picking Challenge, 26 teams competed on their ability to pick items out of warehouse shelves. While the first year was largely focused on basic competencies, there are clear ways AI techniques can help make these systems more capable and robust.",
"Randomized planners, search-based planners, potential-field approaches and trajectory optimization based motion planners are just some of the types of approaches that have been developed for motion planning. Given a motion planning problem, choosing the appropriate algorithm to use is a daunting task even for experts since there has been relatively little effort in comparing the plans generated by the different approaches, for different problems. In this paper, we present a set of benchmarks and the associated infrastructure for comparing different types of motion planning approaches and algorithms. The benchmarks are specifically designed for robotics and include typical indoor human environments. We present example motion planning problems for single arm tasks. Our infrastructure is designed to be easily extensible to allow for the addition of new planning approaches, new robots, new environments and new metrics. We present results comparing the performance of several motion planning algorithms to validate the use of these benchmarks.",
"We introduce a novel humanoid robotic platform designed to jointly address three central goals of humanoid robotics: 1) study the role of morphology in biped locomotion; 2) study full-body compliant physical human-robot interaction; 3) be robust while easy and fast to duplicate to facilitate experimentation. The taken approach relies on functional modeling of certain aspects of human morphology, optimizing materials and geometry, as well as on the use of 3D printing techniques. In this article, we focus on the presentation of the design of specific morphological parts related to biped locomotion: the hip, the thigh, the limb mesh and the knee. We present initial experiments showing properties of the robot when walking with the physical guidance of a human.",
"",
""
]
} |
1710.11455 | 2767036397 | Recently, there has been significant interest in the integration and co-existence of Third Generation Partnership Project (3GPP) Long Term Evolution (LTE) with other Radio Access Technologies, like IEEE 802.11 Wireless Local Area Networks (WLANs). Although, the inter-working of IEEE 802.11 WLANs with 3GPP LTE has indicated enhanced network performance in the context of capacity and load balancing, the WLAN discovery scheme implemented in most of the commercially available smartphones is very inefficient and results in high battery drainage. In this paper, we have proposed an energy efficient WLAN discovery scheme for 3GPP LTE and IEEE 802.11 WLAN inter-working scenario. User Equipment (UE), in the proposed scheme, uses 3GPP network assistance along with the results of past channel scans, to optimally select the next channels to scan. Further, we have also developed an algorithm to accurately estimate the UE's mobility state, using 3GPP network signal strength patterns. We have implemented various discovery schemes in Android framework, to evaluate the performance of our proposed scheme against other solutions in the literature. Since, Android does not support selective scanning mode, we have implemented modules in Android to enable selective scanning. Further, we have also used simulation studies and justified the results using power consumption modeling. The results from the field experiments and simulations have shown high power savings using the proposed scanning scheme without any discovery performance deterioration. | UE position estimation solutions @cite_3 @cite_18 based on 3GPP network signal strength require accurate knowledge of path loss characteristics and an extensive training data set for each location. The authors of @cite_20 have proposed a relatively simple algorithm for mobility estimation. The algorithm counts the number of base stations for which the signal strength crosses a specific threshold.
The authors address the issue of signal strength fluctuations by computing the mean and variance of the signal strength for each location, which are then used for mobility estimation. The solution assumes the availability of sufficient data for variance computation at each location, which is not possible for a high-speed user. Hence, an enhanced solution is required for UE mobility estimation, one that also takes into account the signal strength fluctuations observed by a mobile UE. | {
"cite_N": [
"@cite_18",
"@cite_20",
"@cite_3"
],
"mid": [
"",
"2148253973",
"2078676328"
],
"abstract": [
"",
"Recently commercial mobile phones have been shipped with integrated Wi-Fi NIC (Network Interface Card), while a fundamental barrier for easily using such Wi-Fi is its high energy cost for the phone. We profile two integrated mobile phone Wi-Fi NICs and observe that the energy cost on PSM (Power Saving Mode) is greatly reduced, yet the scan state still costs most of the energy when it's not connected on discovering potential AP (Access Point). In this paper, we propose footprint, leveraging the cellular information such as the overheard cellular tower IDs and signal strength, to guide the Wi-Fi scan and thus greatly reduce the number of unnecessary scans through the changes and history logs of the mobile user's locations. Our experiments and implementation demonstrate that our scheme effectively saves energy for mobile phones integrated with Wi-Fi.",
"Location based services (LBS) have generated a lot of interest in recent years, in the era of significant telecommunications competition. Mobile network operators continuously seek new and innovative ways to offer new services and increase the profit. In the days to come, location based service will be benefiting both the consumers and network operators. Mobile operators should consider various aspects when offering LBS, including network technology evolution, standardization, user acceptance, and the availability of attractive services. Many location-based services require the involvement of many different parties in order to provide the value added service. This paper presents current trends and requirements for LBS implementation in GSM, GPRS and UMTS networks."
]
} |
1710.11332 | 2765122094 | Recently, encoder-decoder models are widely used in social media text summarization. However, these models sometimes select noise words in irrelevant sentences as part of a summary by error, thus declining the performance. In order to inhibit irrelevant sentences and focus on key information, we propose an effective approach by learning sentence weight distribution. In our model, we build a multi-layer perceptron to predict sentence weights. During training, we use the ROUGE score as an alternative to the estimated sentence weight, and try to minimize the gap between estimated weights and predicted weights. In this way, we encourage our model to focus on the key sentences, which have high relevance with the summary. Experimental results show that our approach outperforms baselines on a large-scale social media corpus. | Summarization approaches can be divided into two typical categories: extractive summarization @cite_16 @cite_11 @cite_4 @cite_5 @cite_6 and abstractive summarization @cite_15 @cite_14 @cite_3 @cite_12 @cite_1 . For extractive summarization, most works select several sentences from a document as a summary or a headline. For abstractive summarization, most works encode a document into an abstract representation and then generate the words of a summary one by one. Most social media summarization systems belong to abstractive text summarization. Generally speaking, extractive summarization achieves better performance than abstractive summarization for long and normal documents. However, extractive summarization is not suitable for social media texts, which are full of noise and very short. | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_1",
"@cite_6",
"@cite_3",
"@cite_5",
"@cite_15",
"@cite_16",
"@cite_12",
"@cite_11"
],
"mid": [
"2963241389",
"2112077341",
"2964165364",
"2307381258",
"1843891098",
"",
"2133182690",
"1602831581",
"609399965",
""
],
"abstract": [
"We propose an abstraction-based multidocument summarization framework that can construct new sentences by exploring more fine-grained syntactic units than sentences, namely, noun/verb phrases. Different from existing abstraction-based approaches, our method first constructs a pool of concepts and facts represented by phrases from the input documents. Then new sentences are generated by selecting and merging informative phrases to maximize the salience of phrases and meanwhile satisfy the sentence construction constraints. We employ integer linear optimization for conducting phrase selection and merging simultaneously in order to achieve the global optimal solution for a summary. Experimental results on the benchmark data set TAC 2011 show that our framework outperforms the state-of-the-art models under the automated pyramid evaluation metric, and achieves reasonably good results on manual linguistic quality evaluation.",
"In this paper we present a joint content selection and compression model for single-document summarization. The model operates over a phrase-based representation of the source document which we obtain by merging information from PCFG parse trees and dependency graphs. Using an integer linear programming formulation, the model learns to select and combine phrases subject to length, coverage and grammar constraints. We evaluate the approach on the task of generating \"story highlights\"---a small number of brief, self-contained sentences that allow readers to quickly gather information on news stories. Experimental results show that the model's output is comparable to human-written highlights in terms of both grammaticality and content.",
"",
"Traditional approaches to extractive summarization rely heavily on human-engineered features. In this work we propose a data-driven approach based on neural networks and continuous sentence features. We develop a general framework for single-document summarization composed of a hierarchical document encoder and an attention-based extractor. This architecture allows us to develop different classes of summarization models which can extract sentences or words. We train our models on large scale corpora containing hundreds of thousands of document-summary pairs. Experimental results on two summarization datasets demonstrate that our models obtain results comparable to the state of the art without any access to linguistic annotation.",
"Summarization based on text extraction is inherently limited, but generation-style abstractive methods have proven challenging to build. In this work, we propose a fully data-driven approach to abstractive sentence summarization. Our method utilizes a local attention-based model that generates each word of the summary conditioned on the input sentence. While the model is structurally simple, it can easily be trained end-to-end and scales to a large amount of training data. The model shows significant performance gains on the DUC-2004 shared task compared with several strong baselines.",
"",
"When humans produce summaries of documents, they do not simply extract sentences and concatenate them. Rather, they create new sentences that are grammatical, that cohere with one another, and that capture the most salient pieces of information in the original document. Given that large collections of text abstract pairs are available online, it is now possible to envision algorithms that are trained to mimic this process. In this paper, we focus on sentence compression, a simpler version of this larger challenge. We aim to achieve two goals simultaneously: our compressions should be grammatical, and they should retain the most important pieces of information. These two goals can conflict. We devise both a noisy-channel and a decision-tree approach to the problem, and we evaluate results against manual compressions and a simple baseline.",
"Abstract This paper describes the functionality of MEAD, a comprehensive, public domain, open source, multidocument multilingual summarization environment that has been thus far downloaded by more than 500 organizations. MEAD has been used in a variety of summarization applications ranging from summarization for mobile devices to Web page summarization within a search engine and to novelty detection.",
"Automatic text summarization is widely regarded as the highly difficult problem, partially because of the lack of large text summarization data set. Due to the great challenge of constructing the large scale summaries for full text, in this paper, we introduce a large corpus of Chinese short text summarization dataset constructed from the Chinese microblogging website Sina Weibo, which is released to the public this http URL . This corpus consists of over 2 million real Chinese short texts with short summaries given by the author of each text. We also manually tagged the relevance of 10,666 short summaries with their corresponding short texts. Based on the corpus, we introduce recurrent neural network for the summary generation and achieve promising results, which not only shows the usefulness of the proposed corpus for short text summarization research, but also provides a baseline for further research on this topic.",
""
]
} |
1710.11332 | 2765122094 | Recently, encoder-decoder models are widely used in social media text summarization. However, these models sometimes select noise words in irrelevant sentences as part of a summary by error, thus declining the performance. In order to inhibit irrelevant sentences and focus on key information, we propose an effective approach by learning sentence weight distribution. In our model, we build a multi-layer perceptron to predict sentence weights. During training, we use the ROUGE score as an alternative to the estimated sentence weight, and try to minimize the gap between estimated weights and predicted weights. In this way, we encourage our model to focus on the key sentences, which have high relevance with the summary. Experimental results show that our approach outperforms baselines on a large-scale social media corpus. | Neural abstractive text summarization is a newly proposed method and has become a hot research topic in recent years. Unlike the traditional summarization systems which consist of many small sub-components that are tuned separately @cite_15 @cite_10 @cite_17 , neural abstractive text summarization attempts to build and train a single, large neural network that reads a document and outputs a correct summary. first introduced the encoder-decoder framework with the attention mechanism to abstractive text summarization. proposed an abstraction-based multi-document summarization framework which can construct new sentences by exploring more fine-grained syntactic units than sentences. proposed a copy mechanism to address the problem of unknown words. proposed several novel models to address critical problems in summarization. | {
"cite_N": [
"@cite_15",
"@cite_10",
"@cite_17"
],
"mid": [
"2133182690",
"2110693578",
"2053818817"
],
"abstract": [
"When humans produce summaries of documents, they do not simply extract sentences and concatenate them. Rather, they create new sentences that are grammatical, that cohere with one another, and that capture the most salient pieces of information in the original document. Given that large collections of text abstract pairs are available online, it is now possible to envision algorithms that are trained to mimic this process. In this paper, we focus on sentence compression, a simpler version of this larger challenge. We aim to achieve two goals simultaneously: our compressions should be grammatical, and they should retain the most important pieces of information. These two goals can conflict. We devise both a noisy-channel and a decision-tree approach to the problem, and we evaluate results against manual compressions and a simple baseline.",
"We introduce a stochastic graph-based method for computing relative importance of textual units for Natural Language Processing. We test the technique on the problem of Text Summarization (TS). Extractive TS relies on the concept of sentence salience to identify the most important sentences in a document or set of documents. Salience is typically defined in terms of the presence of particular important words or in terms of similarity to a centroid pseudo-sentence. We consider a new approach, LexRank, for computing sentence importance based on the concept of eigenvector centrality in a graph representation of sentences. In this model, a connectivity matrix based on intra-sentence cosine similarity is used as the adjacency matrix of the graph representation of sentences. Our system, based on LexRank ranked in first place in more than one task in the recent DUC 2004 evaluation. In this paper we present a detailed analysis of our approach and apply it to a larger data set including data from earlier DUC evaluations. We discuss several methods to compute centrality using the similarity graph. The results show that degree-based methods (including LexRank) outperform both centroid-based methods and other systems participating in DUC in most of the cases. Furthermore, the LexRank with threshold method outperforms the other degree-based techniques including continuous LexRank. We also show that our approach is quite insensitive to the noise in the data that may result from an imperfect topical clustering of documents.",
"One of the important Natural Language Processing applications is Text Summarization, which helps users to manage the vast amount of information available, by condensing documents' content and extracting the most relevant facts or topics included. Text Summarization can be classified according to the type of summary: extractive, and abstractive. Extractive summary is the procedure of identifying important sections of the text and producing them verbatim while abstractive summary aims to produce important material in a new generalized form. In this paper, a novel approach is presented to create an abstractive summary for a single document using a rich semantic graph reducing technique. The approach summaries the input document by creating a rich semantic graph for the original document, reducing the generated graph, and then generating the abstractive summary from the reduced graph. Besides, a simulated case study is presented to show how the original text was minimized to fifty percent."
]
} |
1710.11527 | 2766619108 | User distribution in ultra-dense networks (UDNs) plays a crucial role in affecting the performance of UDNs due to the essential coupling between the traffic and the service provided by the networks. Existing studies are mostly based on the assumption that users are uniformly distributed in space. The non-uniform user distribution has not been widely considered despite that it is much closer to the real scenario. In this paper, Radiation and Absorbing model (R&A model) is first adopted to analyze the impact of the non-uniformly distributed users on the performance of 5G UDNs. Based on the R&A model and queueing network theory, the stationary user density in each hot area is investigated. Furthermore, the coverage probability, network throughput and energy efficiency are derived based on the proposed theoretical model. Compared with the uniformly distributed assumption, it is shown that non-uniform user distribution has a significant impact on the performance of UDNs. | The fifth generation (5G) mobile communication systems are envisaged to provide a 1000 times enhancement of the network capacity while achieving a much higher energy efficiency compared with the fourth generation (4G) mobile communication systems. The ambitious aims of 5G mobile communication systems bring both opportunities and challenges to researchers all over the world @cite_3 . The UDNs are regarded as one of the key technologies for 5G mobile communication systems @cite_30 . The main difference between UDNs and heterogeneous networks (HetNets) lies in the dramatic increase of small cell base station (SBS) density. The distances between users and SBSs are greatly reduced with the increase of the SBS density, hence more wireless links are available for users in wireless networks to enhance the quality of service (QoS) @cite_25 . On the other hand, UDNs also suffer from the increasing energy consumption with the massive deployment of SBSs. 
Therefore, one of the core problems for deploying UDNs is the optimization of SBS density to meet the traffic demand in an energy efficient way in hot spot areas. | {
"cite_N": [
"@cite_30",
"@cite_25",
"@cite_3"
],
"mid": [
"2054692642",
"2261198379",
""
],
"abstract": [
"What will 5G be? What it will not be is an incremental advance on 4G. The previous four generations of cellular technology have each been a major paradigm shift that has broken backward compatibility. Indeed, 5G will need to be a paradigm shift that includes very high carrier frequencies with massive bandwidths, extreme base station and device densities, and unprecedented numbers of antennas. However, unlike the previous four generations, it will also be highly integrative: tying any new 5G air interface and spectrum together with LTE and WiFi to provide universal high-rate coverage and a seamless user experience. To support this, the core network will also have to reach unprecedented levels of flexibility and intelligence, spectrum regulation will need to be rethought and improved, and energy and cost efficiencies will become even more critical considerations. This paper discusses all of these topics, identifying key challenges for future research and preliminary 5G standardization activities, while providing a comprehensive overview of the current literature, and in particular of the papers appearing in this special issue.",
"With small cell networks becoming core parts of the fifth generation (5G) cellular networks, it is an important problem to evaluate the impact of user mobility on 5G small cell networks. However, the tendency and clustering habits in human activities have not been considered in traditional user mobility models. In this paper, human tendency and clustering behaviors are first considered to evaluate the user mobility performance for 5G small cell networks based on individual mobility model (IMM). As key contributions, user pause probability, user arrival, and departure probabilities are derived in this paper for evaluating the user mobility performance in a hotspot-type 5G small cell network. Furthermore, coverage probabilities of small cell and macro cell BSs are derived for all users in 5G small cell networks, respectively. Compared with the traditional random waypoint (RWP) model, IMM provides a different viewpoint to investigate the impact of human tendency and clustering behaviors on the performance of 5G small cell networks.",
""
]
} |
1710.11381 | 2766998272 | In implicit models, one often interpolates between sampled points in latent space. As we show in this paper, care needs to be taken to match up the distributional assumptions on code vectors with the geometry of the interpolating paths. Otherwise, typical assumptions about the quality and semantics of in-between points may not be justified. Based on our analysis we propose to modify the prior code distribution to put significantly more probability mass closer to the origin. As a result, linear interpolation paths are not only shortest paths, but they are also guaranteed to pass through high-density regions, irrespective of the dimensionality of the latent space. Experiments on standard benchmark image datasets demonstrate clear visual improvements in the quality of the generated samples and exhibit more meaningful interpolation paths. | Learned latent representations often allow for vector space arithmetic to translate to semantic operations in the data space @cite_5 @cite_20 . Early observations showing that the latent space of a GAN finds semantic directions in the data space (e.g. corresponding to eyeglasses and smiles) were made in @cite_5 . Recent work has also focused on learning better similarity metrics @cite_14 or providing a finer semantic decomposition of the latent space @cite_21 . As a consequence, the evaluation of current GAN models is often done by sampling pairs of points and linearly interpolating between them in the latent space, or by performing other types of noise vector arithmetic @cite_15 . This results in sampling the latent space at locations with very low probability mass. This observation was also made in @cite_13, who suggested replacing linear interpolation with spherical linear interpolation, which prevents diverging from the model's prior distribution. | {
"cite_N": [
"@cite_14",
"@cite_21",
"@cite_5",
"@cite_15",
"@cite_13",
"@cite_20"
],
"mid": [
"2202109488",
"2618104702",
"2173520492",
"2737057113",
"2567627528",
""
],
"abstract": [
"We present an autoencoder that leverages learned representations to better measure similarities in data space. By combining a variational autoencoder with a generative adversarial network we can use learned feature representations in the GAN discriminator as basis for the VAE reconstruction objective. Thereby, we replace element-wise errors with feature-wise errors to better capture the data distribution while offering invariance towards e.g. translation. We apply our method to images of faces and show that it outperforms VAEs with element-wise similarity measures in terms of visual fidelity. Moreover, we show that the method learns an embedding in which high-level abstract visual features (e.g. wearing glasses) can be modified using simple arithmetic.",
"We propose a new algorithm for training generative adversarial networks that jointly learns latent codes for both identities (e.g. individual humans) and observations (e.g. specific photographs). By fixing the identity portion of the latent codes, we can generate diverse images of the same subject, and by fixing the observation portion, we can traverse the manifold of subjects while maintaining contingent aspects such as lighting and pose. Our algorithm features a pairwise training scheme in which each sample from the generator consists of two images with a common identity code. Corresponding samples from the real dataset consist of two distinct photographs of the same subject. In order to fool the discriminator, the generator must produce pairs that are photorealistic, distinct, and appear to depict the same individual. We augment both the DCGAN and BEGAN approaches with Siamese discriminators to facilitate pairwise training. Experiments with human judges and an off-the-shelf face verification system demonstrate our algorithm's ability to generate convincing, identity-matched photographs.",
"In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.",
"Generative Adversarial Networks (GANs) have been shown to be able to sample impressively realistic images. GAN training consists of a saddle point optimization problem that can be thought of as an adversarial game between a generator which produces the images, and a discriminator, which judges if the images are real. Both the generator and the discriminator are commonly parametrized as deep convolutional neural networks. The goal of this paper is to disentangle the contribution of the optimization procedure and the network parametrization to the success of GANs. To this end we introduce and study Generative Latent Optimization (GLO), a framework to train a generator without the need to learn a discriminator, thus avoiding challenging adversarial optimization problems. We show experimentally that GLO enjoys many of the desirable properties of GANs: learning from large data, synthesizing visually-appealing samples, interpolating meaningfully between samples, and performing linear arithmetic with noise vectors.",
"We introduce several techniques for sampling and visualizing the latent spaces of generative models. Replacing linear interpolation with spherical linear interpolation prevents diverging from a model's prior distribution and produces sharper samples. J-Diagrams and MINE grids are introduced as visualizations of manifolds created by analogies and nearest neighbors. We demonstrate two new techniques for deriving attribute vectors: bias-corrected vectors with data replication and synthetic vectors with data augmentation. Binary classification using attribute vectors is presented as a technique supporting quantitative analysis of the latent space. Most techniques are intended to be independent of model type and examples are shown on both Variational Autoencoders and Generative Adversarial Networks.",
""
]
} |
1710.10899 | 2963202018 | We present the submatrix method, a highly parallelizable method for the approximate calculation of inverse p-th roots of large sparse symmetric matrices which are required in different scientific applications. Following the idea of Approximate Computing, we allow imprecision in the final result in order to utilize the sparsity of the input matrix and to allow massively parallel execution. For an n x n matrix, the proposed algorithm allows to distribute the calculations over n nodes with only little communication overhead. The result matrix exhibits the same sparsity pattern as the input matrix, allowing for efficient reuse of allocated data structures. We evaluate the algorithm with respect to the error that it introduces into calculated results, as well as its performance and scalability. We demonstrate that the error is relatively limited for well-conditioned matrices and that results are still valuable for error-resilient applications like preconditioning even for ill-conditioned matrices. We discuss the execution time and scaling of the algorithm on a theoretical level and present a distributed implementation of the algorithm using MPI and OpenMP. We demonstrate the scalability of this implementation by running it on a high-performance compute cluster comprised of 1024 CPU cores, showing a speedup of 665x compared to single-threaded execution. | In the literature, several approaches can be found to parallelize matrix inversion or the calculation of LU and SV decompositions. For example, Van der @cite_7 present an algorithm for the parallel calculation of the LU decomposition on a mesh network of transputers, where each processor holds a part of the matrix. Shen @cite_13 evaluates techniques for LU decomposition distributed over nodes that are connected via slow message passing.
@cite_17 demonstrate an optimized implementation of matrix inversion on a single multicore node, focusing on the minimization of synchronization between the different processing cores. There are also algorithms specialized for specific applications, such as the one described by @cite_2, which can be used in 2D electronic structure calculations to calculate only selected parts of the inverse of a sparse matrix. For the parallel calculation of the SVD, @cite_8 provide an extensive overview of parallelizable methods. | {
"cite_N": [
"@cite_7",
"@cite_8",
"@cite_2",
"@cite_13",
"@cite_17"
],
"mid": [
"1978421859",
"",
"2121807804",
"2138385963",
"2061781302"
],
"abstract": [
"A parallel algorithm is presented for the LU decomposition of a general sparse matrix on a distributed-memory MIMD multiprocessor with a square mesh communication network. In the algorithm, matrix elements are assigned to processors according to the grid distribution. Each processor represents the nonzero elements of its part of the matrix by a local, ordered, two-dimensional linked-list data structure. The complexity of important operations on this data structure and on several others is analysed. At each step of the algorithm, a parallel search for a set of m compatible pivot elements is performed. The Markowitz counts of the pivot elements are close to minimum, to preserve the sparsity of the matrix. The pivot elements also satisfy a threshold criterion, to ensure numerical stability. The compatibility of the m pivots enables the simultaneous elimination of m pivot rows and m pivot columns in a rank-m update of the reduced matrix. Experimental results on a network of 400 transputers are presented for a...",
"",
"An efficient parallel algorithm is presented for computing selected components of @math where @math is a structured symmetric sparse matrix. Calculations of this type are useful for several applications, including electronic structure analysis of materials in which the diagonal elements of the Green's functions are needed. The algorithm proposed here is a direct method based on a block @math factorization. The selected elements of @math we compute lie in the nonzero positions of @math . We use the elimination tree associated with the block @math factorization to organize the parallel algorithm, and reduce the synchronization overhead by passing the data level by level along this tree using the technique of local buffers and relative indices. We demonstrate the efficiency of our parallel implementation by applying it to a discretized two dimensional Hamiltonian matrix. We analyze the performance of the parallel algorithm by examining its load balance and communication overhead, and show that our parallel implementation exhibits an excellent weak scaling on a large-scale high performance distributed-memory parallel machine.",
"Several message passing-based parallel solvers have been developed for general (non-symmetric) sparse LU factorization with partial pivoting. Existing solvers were mostly deployed and evaluated on parallel computing platforms with high message passing performance (e.g., 1-10 µs in message latency and 100-1000Mbytes s in message throughput) while little attention has been paid on slower platforms. This paper investigates techniques that are specifically beneficial for LU factorizafion on platforms with slow message passing. In the context of the S+ distributed memory solver, we find that significant reduction in the application message passing overhead can be attained at the cost of extra computation and slightly weakened numerical stability. In particular, we propose batch pivoting to make pivot selections in groups through speculative factorization, and thus substantially decrease the inter-processor synchronization granularity. We experimented on three different message passing platforms with different communication speeds. While the proposed techniques provide no performance benefit and even slightly weaken numerical stability on an IBM Regatta multiprocessor with fast message passing, they improve the performance of our test matrices by 15-460 on an Ethernet-connected 16-node PC cluster. Given the different tradeoffs of communication-reduction techniques on different message passing platforms, we also propose a sampling-based runtime application adaptation approach that automatically determines whether these techniques should be employed for a given platform and input matrix.",
"The goal of this paper is to present an efficient implementation of an explicit matrix inversion of general square matrices on multicore computer architecture. The inversion procedure is split into four steps: 1) computing the LU factorization, 2) inverting the upper triangular U factor, 3) solving a linear system, whose solution yields the inverse of the original matrix and 4) applying backward column pivoting on the inverted matrix. Using a tile data layout, which represents the matrix in the system memory with an optimized cache-aware format, the computation of the four steps is decomposed into computational tasks. A directed acyclic graph is generated on the fly which represents the program data flow. Its nodes represent tasks and edges the data dependencies between them. Previous implementations of matrix inversion, available in the state-of-the-art numerical libraries, suffer from unnecessary synchronization points, which are non-existent in our implementation in order to fully exploit the parallelism of the underlying hardware. Our algorithmic approach allows to remove these bottlenecks and to execute the tasks with loose synchronization. A runtime environment system called QUARK is necessary to dynamically schedule our numerical kernels on the available processing units. The reported results from our LU-based matrix inversion implementation significantly outperform the state-of-the-art numerical libraries such as LAPACK (5x), MKL (5x) and ScaLAPACK (2.5x) on a contemporary AMD platform with four sockets and the total of 48 cores for a matrix of size 24000. A power consumption analysis shows that our high performance implementation is also energy efficient and consumes substantially less power than its competitors."
]
} |
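Both abstracts in the row above center on pivoted LU factorization. As a reading aid, here is a minimal serial sketch of LU with partial pivoting in plain NumPy; it is an illustrative baseline, not the S+ solver or the tile-based implementation, and the batch-pivoting idea amounts to amortizing the per-column pivot search in this loop over groups of columns to cut synchronization.

```python
import numpy as np

def lu_partial_pivot(A):
    # Serial reference LU with partial pivoting (PA = LU).
    # Illustrative only -- the distributed variants above restructure
    # exactly this per-column pivot-selection step.
    A = np.array(A, dtype=float)
    n = A.shape[0]
    perm = np.arange(n)
    for k in range(n - 1):
        # Pick the largest-magnitude entry in column k on/below the diagonal.
        p = k + int(np.argmax(np.abs(A[k:, k])))
        if p != k:
            A[[k, p]] = A[[p, k]]
            perm[[k, p]] = perm[[p, k]]
        # Store multipliers in the strictly lower triangle, then update.
        A[k + 1:, k] /= A[k, k]
        A[k + 1:, k + 1:] -= np.outer(A[k + 1:, k], A[k, k + 1:])
    L = np.tril(A, -1) + np.eye(n)
    U = np.triu(A)
    return perm, L, U
```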
1710.10899 | 2963202018 | We present the submatrix method, a highly parallelizable method for the approximate calculation of inverse p-th roots of large sparse symmetric matrices which are required in different scientific applications. Following the idea of Approximate Computing, we allow imprecision in the final result in order to utilize the sparsity of the input matrix and to allow massively parallel execution. For an n x n matrix, the proposed algorithm allows to distribute the calculations over n nodes with only little communication overhead. The result matrix exhibits the same sparsity pattern as the input matrix, allowing for efficient reuse of allocated data structures. We evaluate the algorithm with respect to the error that it introduces into calculated results, as well as its performance and scalability. We demonstrate that the error is relatively limited for well-conditioned matrices and that results are still valuable for error-resilient applications like preconditioning even for ill-conditioned matrices. We discuss the execution time and scaling of the algorithm on a theoretical level and present a distributed implementation of the algorithm using MPI and OpenMP. We demonstrate the scalability of this implementation by running it on a high-performance compute cluster comprised of 1024 CPU cores, showing a speedup of 665x compared to single-threaded execution. | Implementations for the calculation of the LU and SV decompositions and matrix inversion are part of LAPACK @cite_28 , a popular software library for numerical linear algebra. For solving large sparse systems and calculation of singular values, ARPACK @cite_9 is a well known library which is based on the Arnoldi iteration . There exist different implementations of these libraries, as well as bindings for many different programming languages. 
With ScaLAPACK @cite_30 and P @cite_25 , there exist extensions of these libraries targeting parallel execution on distributed memory systems using MPI for message passing. | {
"cite_N": [
"@cite_28",
"@cite_9",
"@cite_25",
"@cite_30"
],
"mid": [
"1964477602",
"1506690472",
"",
"2117293168"
],
"abstract": [
"The goal of the LAPACK project is to design and implement a portable linear algebra library for efficient use on a variety of high-performance computers. The library is based on the widely used LINPACK and EISPACK packages for solving linear equations, eigenvalue problems, and linear least-squares problems, but extends their functionality in a number of ways. The major methodology for making the algorithms run faster is to restructure them to perform block matrix operations (e.g., matrix-matrix multiplication) in their inner loops. These block operations may be optimized to exploit the memory hierarchy of a specific architecture. The LAPACK project is also working on new algorithms that yield higher relative accuracy for a variety of linear algebra problems.",
"List of figures List of tables Preface 1. Introduction to ARPACK. Important features Getting started Reverse communication interface Availability Installation Documentation Dependence on LAPACK and BLAS Expected performance P_ARPACK Contributed additions Trouble shooting and problems 2. Getting started with ARPACK. Directory structure and contents Getting started An example for a symmetric Eigenvalue problem 3. General use of ARPACK. Naming conventions, Precisions, and types Shift and invert spectral transformation mode Reverse communication structure for shift-Invert Using the computational modes Computational modes for real symmetric problems Postprocessing for Eigenvectors using dseupd Computational modes for real nonsymmetric problems Postprocessing for Eigenvectors Using dneupd Computational modes for complex problems Postprocessing for Eigenvectors Using zneupd 4. The implicitly restarted Arnoldi method: structure of the Eigenvalue problem Krylov subspaces and projection methods The Arnoldi factorization Restarting the Arnoldi method The generalized Eigenvalue problem Stopping Criterion 5. Computational routines. ARPACK subroutines LAPACK routines used by ARPACK BLAS routines used by ARPACK Appendix A. Templates and driver routines Symmetric drivers Real Nonsymmetric drivers Complex drivers Band drivers The singular value decomposition Appendix B. Tracking the progress of ARPACK. Obtaining trace output Check-pointing ARPACK Appendix C. The XYaupd ARPACK Routines. DSAUPD DNAUPD ZNAUPD Bibliography Index.",
"",
"This paper outlines the content and performance of ScaLAPACK, a collection of mathematical software for linear algebra computations on distributed memory computers. The importance of developing standards for computational and message passing interfaces is discussed. We present the different components and building blocks of ScaLAPACK, and indicate the difficulties inherent in producing correct codes for networks of heterogeneous processors. Finally, this paper briefly describes future directions for the ScaLAPACK library and concludes by suggesting alternative approaches to mathematical libraries, explaining how ScaLAPACK could be integrated into efficient and user-friendly distributed systems."
]
} |
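For context on the row above: the submatrix method approximates inverse p-th roots of sparse symmetric matrices. A dense reference computation via eigendecomposition is sketched below as a correctness baseline; it is not the paper's sparsity-preserving algorithm.

```python
import numpy as np

def inv_pth_root_dense(A, p):
    # Dense reference for A^(-1/p), A symmetric positive definite:
    #   A = V diag(w) V^T  =>  A^(-1/p) = V diag(w^(-1/p)) V^T
    # The submatrix method approximates this while keeping A's sparsity
    # pattern; this eigendecomposition route is only a baseline.
    w, V = np.linalg.eigh(A)
    return (V * w ** (-1.0 / p)) @ V.T
```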
1710.10836 | 2767004781 | This article presents fraudulent activity that are done by the unscrupulous desire of people to make the personal benefits by manipulating the tax in taxing system. Taxpayers manipulate the money paid to the tax authorities through avoidance and evasion activities. In this paper, we deal with a specific technique used by the tax-evaders known as a circular trading. We define an algorithm for detection and analysis of circular trade. To detect these circular trade, we have modeled whole system as a directed graph with actors being vertices and the transactions among them as directed edges. We have proposed an algorithm for detecting these circular trade. The commercial tax dataset is given by Telangana, India. This dataset contains the transaction details of participants involved in a known circular trade. | Most of the work on @math is concentrated on stock market trading. In @cite_0 , @cite_5 , @cite_6 , @cite_2 and @cite_14 , the authors have investigated @math and other related collusion techniques used in stock market trading. A brief overview of some of these techniques is given below. | {
"cite_N": [
"@cite_14",
"@cite_6",
"@cite_0",
"@cite_2",
"@cite_5"
],
"mid": [
"2093131651",
"",
"2016647770",
"2011153821",
"1542569948"
],
"abstract": [
"Market manipulation remains the biggest concern of investors in today's securities market, despite fast and strict responses from regulators and exchanges to market participants that pursue such practices. The existing methods in the industry for detecting fraudulent activities in securities market rely heavily on a set of rules based on expert knowledge. The securities market has deviated from its traditional form due to new technologies and changing investment strategies in the past few years. The current securities market demands scalable machine learning algorithms supporting identification of market manipulation activities. In this paper we use supervised learning algorithms to identify suspicious transactions in relation to market manipulation in stock market. We use a case study of manipulated stocks during 2003. We adopt CART, conditional inference trees, C5.0, Random Forest, Naive Bayes, Neural Networks, SVM and kNN for classification of manipulated samples. Empirical results show that Naive Bayes outperform other learning methods achieving F 2 measure of 53 (sensitivity and specificity are 89 and 83 respectively).",
"",
"Many mal-practices in stock market trading--e.g., circular trading and price manipulation--use the modus operandi of collusion. Informally, a set of traders is a candidate collusion set when they have \"heavy trading\" among themselves, as compared to their trading with others. We formalize the problem of detection of collusion sets, if any, in the given trading database. We show that naive approaches are inefficient for real-life situations. We adapt and apply two well-known graph clustering algorithms for this problem. We also propose a new graph clustering algorithm, specifically tailored for detecting collusion sets. A novel feature of our approach is the use of Dempster---Schafer theory of evidence to combine the candidate collusion sets detected by individual algorithms. Treating individual experiments as evidence, this approach allows us to quantify the confidence (or belief) in the candidate collusion sets. We present detailed simulation experiments to demonstrate effectiveness of the proposed algorithms.",
"In financial markets, abnormal trading behaviors pose a serious challenge to market surveillance and risk management. What is worse, there is an increasing emergence of abnormal trading events that some experienced traders constitute a collusive clique and collaborate to manipulate some instruments, thus mislead other investors by applying similar trading behaviors for maximizing their personal benefits. In this paper, a method is proposed to detect the potential collusive cliques involved in an instrument of future markets by first calculating the correlation coefficient between any two eligible unified aggregated time series of signed order volume, and then combining the connected components from multiple sparsified weighted graphs constructed by using the correlation matrices where each correlation coefficient is over a user-specified threshold. Experiments conducted on real order data from the Shanghai Futures Exchange show that the proposed method can effectively detect suspect collusive cliques, which have been verified by financial experts. A tool based on the proposed method has been deployed in the exchange as a pilot application for futures market surveillance and risk management.",
"In this paper, we analyze the trading behavior of users in an experimental stock market with a special emphasis on irregularities within the set of regular trading operations. To this end the market is represented as a graph of traders that are connected by their transactions. Our analysis is executed from two perspectives: On a micro scale view fraudulent transactions between traders are introduced and described in terms of the patterns they typically produce in the market’s graph representation. On a macro scale, we use a spectral clustering method based on the eigensystem of complex Hermitian adjacency matrices to characterize the trading behavior of the traders and thus characterize the market. Thereby, we can show the gap between the formal definition of the market and the actual behavior within the market where deviations from the allowed trading behavior can be made visible. These questions are for instance relevant with respect to the forecast efficiency of experimental stock markets since manipulations tend to decrease the precision of the market’s results. To demonstrate this we show some results of the analysis of a political stock market that was set up for the 2006 state parliament elections in Baden-Wuerttemberg, Germany."
]
} |
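The abstract above models traders as vertices and transactions as directed edges, so a circular trade surfaces as a directed cycle. A toy DFS-based detector illustrating that graph formulation (a hypothetical simplification, not the paper's algorithm):

```python
from collections import defaultdict

def find_cycle(edges):
    # Traders are vertices, transactions directed edges; a circular trade
    # appears as a directed cycle. Returns one cycle's vertices, or None.
    graph = defaultdict(list)
    for u, v in edges:
        graph[u].append(v)
    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / on current path / finished
    color = defaultdict(int)

    def dfs(u, path):
        color[u] = GRAY
        path.append(u)
        for v in graph[u]:
            if color[v] == GRAY:          # back edge -> cycle found
                return path[path.index(v):]
            if color[v] == WHITE:
                found = dfs(v, path)
                if found:
                    return found
        path.pop()
        color[u] = BLACK
        return None

    for start in list(graph):
        if color[start] == WHITE:
            cycle = dfs(start, [])
            if cycle:
                return cycle
    return None
```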
1710.10836 | 2767004781 | This article presents fraudulent activity that are done by the unscrupulous desire of people to make the personal benefits by manipulating the tax in taxing system. Taxpayers manipulate the money paid to the tax authorities through avoidance and evasion activities. In this paper, we deal with a specific technique used by the tax-evaders known as a circular trading. We define an algorithm for detection and analysis of circular trade. To detect these circular trade, we have modeled whole system as a directed graph with actors being vertices and the transactions among them as directed edges. We have proposed an algorithm for detecting these circular trade. The commercial tax dataset is given by Telangana, India. This dataset contains the transaction details of participants involved in a known circular trade. | In @cite_0 , a graph clustering algorithm is devised for detecting collusion sets in stock markets. A novel feature of this approach is the use of Dempster–Schafer theory of evidence to combine the candidate collusion sets. In @cite_2 , a method is proposed to detect the potential collusive cliques involved in an instrument of future markets. In @cite_14 , the authors introduced complicity functions, which are capable of identifying the intermediaries in a group of actors, avoiding core elements that have nothing to do with the group. | {
"cite_N": [
"@cite_0",
"@cite_14",
"@cite_2"
],
"mid": [
"2016647770",
"2093131651",
"2011153821"
],
"abstract": [
"Many mal-practices in stock market trading--e.g., circular trading and price manipulation--use the modus operandi of collusion. Informally, a set of traders is a candidate collusion set when they have \"heavy trading\" among themselves, as compared to their trading with others. We formalize the problem of detection of collusion sets, if any, in the given trading database. We show that naive approaches are inefficient for real-life situations. We adapt and apply two well-known graph clustering algorithms for this problem. We also propose a new graph clustering algorithm, specifically tailored for detecting collusion sets. A novel feature of our approach is the use of Dempster---Schafer theory of evidence to combine the candidate collusion sets detected by individual algorithms. Treating individual experiments as evidence, this approach allows us to quantify the confidence (or belief) in the candidate collusion sets. We present detailed simulation experiments to demonstrate effectiveness of the proposed algorithms.",
"Market manipulation remains the biggest concern of investors in today's securities market, despite fast and strict responses from regulators and exchanges to market participants that pursue such practices. The existing methods in the industry for detecting fraudulent activities in securities market rely heavily on a set of rules based on expert knowledge. The securities market has deviated from its traditional form due to new technologies and changing investment strategies in the past few years. The current securities market demands scalable machine learning algorithms supporting identification of market manipulation activities. In this paper we use supervised learning algorithms to identify suspicious transactions in relation to market manipulation in stock market. We use a case study of manipulated stocks during 2003. We adopt CART, conditional inference trees, C5.0, Random Forest, Naive Bayes, Neural Networks, SVM and kNN for classification of manipulated samples. Empirical results show that Naive Bayes outperform other learning methods achieving F 2 measure of 53 (sensitivity and specificity are 89 and 83 respectively).",
"In financial markets, abnormal trading behaviors pose a serious challenge to market surveillance and risk management. What is worse, there is an increasing emergence of abnormal trading events that some experienced traders constitute a collusive clique and collaborate to manipulate some instruments, thus mislead other investors by applying similar trading behaviors for maximizing their personal benefits. In this paper, a method is proposed to detect the potential collusive cliques involved in an instrument of future markets by first calculating the correlation coefficient between any two eligible unified aggregated time series of signed order volume, and then combining the connected components from multiple sparsified weighted graphs constructed by using the correlation matrices where each correlation coefficient is over a user-specified threshold. Experiments conducted on real order data from the Shanghai Futures Exchange show that the proposed method can effectively detect suspect collusive cliques, which have been verified by financial experts. A tool based on the proposed method has been deployed in the exchange as a pilot application for futures market surveillance and risk management."
]
} |
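The related-work paragraph above mentions fusing candidate collusion sets with Dempster–Schafer evidence theory. A small sketch of Dempster's rule of combination over frozenset focal elements; this is illustrative only, and the frames of discernment used by the paper are not reproduced here.

```python
from itertools import product

def dempster_combine(m1, m2):
    # Dempster's rule: m12(A) = (1 / (1 - K)) * sum_{B & C == A} m1(B) m2(C),
    # where K is the mass assigned to conflicting (disjoint) pairs.
    # Assumes the two mass functions are not totally conflicting (K < 1).
    combined = {}
    conflict = 0.0
    for (B, wB), (C, wC) in product(m1.items(), m2.items()):
        inter = B & C
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wB * wC
        else:
            conflict += wB * wC
    norm = 1.0 - conflict
    return {A: w / norm for A, w in combined.items()}
```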
1710.11194 | 2765533199 | The field of Human-Robot Collaboration (HRC) has seen a considerable amount of progress in the recent years. Although genuinely collaborative platforms are far from being deployed in real-world scenarios, advances in control and perception algorithms have progressively popularized robots in manufacturing settings, where they work side by side with human peers to achieve shared tasks. Unfortunately, little progress has been made toward the development of systems that are proactive in their collaboration, and autonomously take care of some of the chores that compose most of the collaboration tasks. In this work, we present a collaborative system capable of assisting the human partner with a variety of supportive behaviors in spite of its limited perceptual and manipulation capabilities and incomplete model of the task. Our framework leverages information from a high-level, hierarchical model of the task. The model, that is shared between the human and robot, enables transparent synchronization between the peers and understanding of each other's plan. More precisely, we derive a partially observable Markov model from the high-level task representation. We then use an online solver to compute a robot policy, that is robust to unexpected observations such as inaccuracies of perception, failures in object manipulations, as well as discovers hidden user preferences. We demonstrate that the system is capable of robustly providing support to the human in a furniture construction task. | To some extent, this approach builds on top of results in the field of task and motion planning (TAMP, see e.g. @cite_22 @cite_10 @cite_3 ). Indeed, similarly to we find approximate solutions to large POMDP problems through planning in belief space combined with just-in-time re-planning. Our work differs from traditional TAMP approaches in a number of ways: | {
"cite_N": [
"@cite_10",
"@cite_22",
"@cite_3"
],
"mid": [
"",
"2168359464",
"1883438135"
],
"abstract": [
"",
"In this paper, we bring techniques from operations research to bear on the problem of choosing optimal actions in partially observable stochastic domains. We begin by introducing the theory of Markov decision processes (mdps) and partially observable MDPs (pomdps). We then outline a novel algorithm for solving pomdps off line and show how, in some cases, a finite-memory controller can be extracted from the solution to a POMDP. We conclude with a discussion of how our approach relates to previous work, the complexity of finding exact solutions to pomdps, and of some possibilities for finding approximate solutions.",
"We consider the problem of using real-time feedback from contact sensors to create closed-loop pushing actions. To do so, we formulate the problem as a partially observable Markov decision process POMDP with a transition model based on a physics simulator and a reward function that drives the robot towards a successful grasp. We demonstrate that it is intractable to solve the full POMDP with traditional techniques and introduce a novel decomposition of the policy into pre- and post-contact stages to reduce the computational complexity. Our method uses an offline point-based solver on a variable-resolution discretization of the state space to solve for a post-contact policy as a pre-computation step. Then, at runtime, we use an A* search to compute a pre-contact trajectory. We prove that the value of the resulting policy is within a bound of the value of the optimal policy and give intuition about when it performs well. Additionally, we show the policy produced by our algorithm achieves a successful grasp more quickly and with higher probability than a baseline QMDP policy on two different objects in simulation. Finally, we validate our simulation results on a real robot using commercially available tactile sensors."
]
} |
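The row above plans in belief space over a POMDP. The core primitive behind such planners is the exact Bayes belief update; a minimal tabular sketch follows, where the transition model `T` and observation model `O` are hypothetical toy tables, not taken from the paper.

```python
import numpy as np

def belief_update(b, T, O, a, z):
    # One exact Bayes filter step for a discrete POMDP:
    #   b'(s') ∝ O[a][s', z] * sum_s T[a][s, s'] * b(s)
    # b: belief over states; T[a]: state-transition matrix for action a;
    # O[a][s', z]: probability of observation z in successor state s'.
    predicted = b @ T[a]               # prediction through transition model
    updated = predicted * O[a][:, z]   # weight by observation likelihood
    return updated / updated.sum()     # renormalize to a distribution
```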
1710.11194 | 2765533199 | The field of Human-Robot Collaboration (HRC) has seen a considerable amount of progress in the recent years. Although genuinely collaborative platforms are far from being deployed in real-world scenarios, advances in control and perception algorithms have progressively popularized robots in manufacturing settings, where they work side by side with human peers to achieve shared tasks. Unfortunately, little progress has been made toward the development of systems that are proactive in their collaboration, and autonomously take care of some of the chores that compose most of the collaboration tasks. In this work, we present a collaborative system capable of assisting the human partner with a variety of supportive behaviors in spite of its limited perceptual and manipulation capabilities and incomplete model of the task. Our framework leverages information from a high-level, hierarchical model of the task. The model, that is shared between the human and robot, enables transparent synchronization between the peers and understanding of each other's plan. More precisely, we derive a partially observable Markov model from the high-level task representation. We then use an online solver to compute a robot policy, that is robust to unexpected observations such as inaccuracies of perception, failures in object manipulations, as well as discovers hidden user preferences. We demonstrate that the system is capable of robustly providing support to the human in a furniture construction task. | Planning techniques can enable human robot collaboration when a precise model of the task is known, and might adapt to hidden user preferences as demonstrated by @cite_1 . Similarly, partially observable models can provide robustness to unpredicted events and account for unobservable states. Of particular note is the work by which, similarly to the approach presented in this paper, uses a POMDP to model a collaborative task. Indeed, POMDPs and similar models (e.g. 
MOMDPs) have been shown to improve robot assistance @cite_16 and team efficiency @cite_27 in related works. Such models of the task are however generally expensive to build and require advanced technical knowledge. Hence, a significant body of work in the fields of human-robot collaboration and physical human-robot interaction focuses on how to best take over from the human partner by learning parts of the task that are burdensome in terms of physical safety or cognitive load. Under this perspective, the majority of the research in the field has focused on frameworks for learning new skills from human demonstration (LfD, @cite_9 ), efficiently learning or modeling task representations @cite_21 @cite_12 @cite_30 @cite_26 , or interpreting the human partner's actions and social signals @cite_29 . | {
"cite_N": [
"@cite_30",
"@cite_26",
"@cite_9",
"@cite_21",
"@cite_1",
"@cite_29",
"@cite_27",
"@cite_16",
"@cite_12"
],
"mid": [
"2295985460",
"1981446214",
"1684361744",
"2409715576",
"2114088572",
"2093313552",
"2295029210",
"2113096211",
"2419526527"
],
"abstract": [
"National Science Foundation (U.S.). Graduate Research Fellowship Program (grant number 2388357)",
"A significant challenge in developing planning systems for practical applications is the difficulty of acquiring the domain knowledge needed by such systems. One method for acquiring this knowledge is to learn it from plan traces, but this method typically requires a huge number of plan traces to converge. In this paper, we show that the problem with slow convergence can be circumvented by having the learner generate solution plans even before the planning domain is completely learned. Our empirical results show that these improvements reduce the size of the training set that is needed to find correct answers to a large percentage of planning problems in the test set.",
"Also referred to as learning by imitation, tutelage, or apprenticeship learning, Programming by Demonstration (PbD) develops methods by which new skills can be transmitted to a robot. This book examines methods by which robots learn new skills through human guidance. Taking a practical perspective, it covers a broad range of applications, including service robots. The text addresses the challenges involved in investigating methods by which PbD is used to provide robots with a generic and adaptive model of control. Drawing on findings from robot control, human-robot interaction, applied machine learning, artificial intelligence, and developmental and cognitive psychology, the book contains a large set of didactic and illustrative examples. Practical and comprehensive machine learning source codes are available on the books companion website: http: www.programming-by-demonstration.org",
"Collaboration between humans and robots requires solutions to an array of challenging problems, including multi-agent planning, state estimation, and goal inference. There already exist feasible solutions for many of these challenges, but they depend upon having rich task models. In this work we detail a novel type of Hierarchical Task Network we call a Clique Chain HTN (CC-HTN), alongside an algorithm for autonomously constructing them from topological properties derived from graphical task representations. As the presented method relies on the structure of the task itself, our work imposes no particular type of symbolic insight into motor primitives or environmental representation, making it applicable to a wide variety of use cases critical to human-robot interaction. We present evaluations within a multi-resolution goal inference task and a transfer learning application showing the utility of our approach.",
"The use of autonomous, mobile professional service robots in diverse workplaces is expected to grow substantially over the next decade. These robots often will work side by side with people, collaborating with employees on tasks. Some roboticists have argued that, in these cases, people will collaborate more naturally and easily with humanoid robots as compared with machine-like robots. It is also speculated that people will rely on and share responsibility more readily with robots that are in a position of authority. This study sought to clarify the effects of robot appearance and relative status on human-robot collaboration by investigating the extent to which people relied on and ceded responsibility to a robot coworker. In this study, a 3 × 3 experiment was conducted with human likeness (human, human-like robot, and machine-like robot) and status (subordinate, peer, and supervisor) as dimensions. As far as we know, this study is one of the first experiments examining how people respond to robotic coworkers. As such, this study attempts to design a robust and transferable sorting and assembly task that capitalizes on the types of tasks robots are expected to do and is embedded in a realistic scenario in which the participant and confederate are interdependent. The results show that participants retained more responsibility for the successful completion of the task when working with a machine-like as compared with a humanoid robot, especially when the machine-like robot was subordinate. These findings suggest that humanoid robots may be appropriate for settings in which people have to delegate responsibility to these robots or when the task is too demanding for people to do, and when complacency is not a major concern. Machine-like robots, however, may be more appropriate when robots are expected to be unreliable, are less well-equipped for the task than people are, or in other situations in which personal responsibility should be emphasized.",
"This paper presents an algorithm to bootstrap shared understanding in a human-robot interaction scenario where the user teaches a robot a new task using teaching instructions yet unknown to it. In such cases, the robot needs to estimate simultaneously what the task is and the associated meaning of instructions received from the user. For this work, we consider a scenario where a human teacher uses initially unknown spoken words, whose associated unknown meaning is either a feedback (good bad) or a guidance (go left, right, ...). We present computational results, within an inverse reinforcement learning framework, showing that a) it is possible to learn the meaning of unknown and noisy teaching instructions, as well as a new task at the same time, b) it is possible to reuse the acquired knowledge about instructions for learning new tasks, and c) even if the robot initially knows some of the instructions' meanings, the use of extra unknown teaching instructions improves learning efficiency.",
"We design and evaluate a method of human-robot cross-training, a validated and widely used strategy for the effective training of human teams. Cross-training is an interactive planning method in which team members iteratively switch roles with one another to learn a shared plan for the performance of a collaborative task. We first present a computational formulation of the robot mental model, which encodes the sequence of robot actions necessary for task completion and the expectations of the robot for preferred human actions, and show that the robot model is quantitatively comparable to the mental model that captures the inter-role knowledge held by the human. Additionally, we propose a quantitative measure of robot mental model convergence and an objective metric of model similarity. Based on this encoding, we formulate a human-robot cross-training method and evaluate its efficacy through experiments involving human subjects n = 60 . We compare human-robot cross-training to standard reinforcement learning techniques, and show that cross-training yields statistically significant improvements in quantitative team performance measures, as well as significant differences in perceived robot performance and human trust. Finally, we discuss the objective measure of robot mental model convergence as a method to dynamically assess human errors. This study supports the hypothesis that the effective and fluent teaming of a human and a robot may best be achieved by modeling known, effective human teamwork practices.",
"This paper presents a real-time vision-based system to assist a person with dementia wash their hands. The system uses only video inputs, and assistance is given as either verbal or visual prompts, or through the enlistment of a human caregiver's help. The system combines a Bayesian sequential estimation framework for tracking hands and towel, with a decision-theoretic framework for computing policies of action. The decision making system is a partially observable Markov decision process, or POMDP. Decision policies dictating system actions are computed in the POMDP using a point-based approximate solution technique. The tracking and decision making systems are coupled using a heuristic method for temporally segmenting the input video stream based on the continuity of the belief state. A key element of the system is the ability to estimate and adapt to user psychological states, such as awareness and responsiveness. We evaluate the system in three ways. First, we evaluate the hand-tracking system by comparing its outputs to manual annotations and to a simple hand-detection method. Second, we test the POMDP solution methods in simulation, and show that our policies have higher expected return than five other heuristic methods. Third, we report results from a ten-week trial with seven persons moderate-to-severe dementia in a long-term care facility in Toronto, Canada. The subjects washed their hands once a day, with assistance given by our automated system, or by a human caregiver, in alternating two-week periods. We give two detailed case study analyses of the system working during trials, and then show agreement between the system and independent human raters of the same trials.",
"In human-robot collaboration, multi-agent domains, or single-robot manipulation with multiple end-effectors, the activities of the involved parties are naturally concurrent. Such domains are also naturally relational as they involve objects, multiple agents, and models should generalize over objects and agents. We propose a novel formalization of relational concurrent activity processes that allows us to transfer methods from standard relational MDPs, such as Monte-Carlo planning and learning from demonstration, to concurrent cooperation domains. We formally compare the formulation to previous propositional models of concurrent decision making and demonstrate planning and learning from demonstration methods on a real-world human-robot assembly task."
]
} |
1710.11194 | 2765533199 | The field of Human-Robot Collaboration (HRC) has seen a considerable amount of progress in the recent years. Although genuinely collaborative platforms are far from being deployed in real-world scenarios, advances in control and perception algorithms have progressively popularized robots in manufacturing settings, where they work side by side with human peers to achieve shared tasks. Unfortunately, little progress has been made toward the development of systems that are proactive in their collaboration, and autonomously take care of some of the chores that compose most of the collaboration tasks. In this work, we present a collaborative system capable of assisting the human partner with a variety of supportive behaviors in spite of its limited perceptual and manipulation capabilities and incomplete model of the task. Our framework leverages information from a high-level, hierarchical model of the task. The model, that is shared between the human and robot, enables transparent synchronization between the peers and understanding of each other's plan. More precisely, we derive a partially observable Markov model from the high-level task representation. We then use an online solver to compute a robot policy, that is robust to unexpected observations such as inaccuracies of perception, failures in object manipulations, as well as discovers hidden user preferences. We demonstrate that the system is capable of robustly providing support to the human in a furniture construction task. | No matter how efficient such models are at exhibiting the intended behavior, they are often limited to simple tasks and are not transparent to the human peer. Indeed, evidence from studies of human-human interaction has demonstrated the importance of sharing mental task models to improve the efficiency of collaboration @cite_0 .
Similarly, studies on human-robot interaction show that an autonomous robot with a task model shared with its human peer can decrease the human's idle time during the collaboration @cite_28 . Without enabling the robot to learn the task, other approaches have demonstrated the essential capability of collaborative robots to dynamically adapt their plans with respect to the task in order to accommodate the human's actions or unforeseen events @cite_18 . Likewise, rich task models can also enable the optimization of decisions with respect to extrinsic metrics such as risk to the human @cite_19 or completion time @cite_5 . | {
"cite_N": [
"@cite_18",
"@cite_28",
"@cite_0",
"@cite_19",
"@cite_5"
],
"mid": [
"",
"2004669996",
"2002014347",
"2065804258",
"2738967685"
],
"abstract": [
"",
"We describe the design and evaluation of Chaski, a robot plan execution system that uses insights from human-human teaming to make human-robot teaming more natural and fluid. Chaski is a task-level executive that enables a robot to collaboratively execute a shared plan with a person. The system chooses and schedules the robot's actions, adapts to the human partner, and acts to minimize the human's idle time. We evaluate Chaski in human subject experiments in which a person works with a mobile and dexterous robot to collaboratively assemble structures using building blocks. We measure team performance outcomes for robots controlled by Chaski compared to robots that are verbally commanded, step-by-step by the human teammate. We show that Chaski reduces the human's idle time by 85 , a statistically significant difference. This result supports the hypothesis that human-robot team performance is improved when a robot emulates the effective coordination behaviors observed in human teams.",
"Objective: We conducted an empirical analysis of human teamwork to investigate the ways teammates incorporate coordination behaviors, including verbal and nonverbal cues, into their action planning. Background: In space, military, aviation, and medical industries, teams of people effectively coordinate to perform complex tasks under stress induced by uncertainty, ambiguity, and time pressure. As robots increasingly are introduced into these domains, we seek to understand effective human-team coordination to inform natural and effective human-robot coordination. Method: We conducted teamwork experiments in which teams of two people performed a complex task, involving ordering, timing, and resource constraints. Half the teams performed under time pressure, and half performed without time pressure. We cataloged the coordination behaviors used by each team and analyzed the speed of response and specificity of each coordination behavior. Results: Analysis shows that teammates respond to explicit cues, includin...",
"A crucial skill for fluent action meshing in human team activity is a learned and calculated selection of anticipatory actions. We believe that the same holds for robotic teammates, if they are to perform in a similarly fluent manner with their human counterparts. In this work, we describe a model for human-robot joint action, and propose an adaptive action selection mechanism for a robotic teammate, which makes anticipatory decisions based on the confidence of their validity and their relative risk. We conduct an analysis of our method, predicting an improvement in task efficiency compared to a purely reactive process. We then present results from a study involving untrained human subjects working with a simulated version of a robot using our system. We show a significant improvement in best-case task efficiency when compared to a group of users working with a reactive agent, as well as a significant difference in the perceived commitment of the robot to the team and its contribution to the team's fluency and success. By way of explanation, we raise a number of fluency metric hypotheses, and evaluate their significance between the two study conditions.",
"Collaborative robots represent a clear added value to manufacturing, as they promise to increase productivity and improve working conditions of such environments. Although modern robotic systems have become safe and reliable enough to operate close to human workers on a day-to-day basis, the workload is still skewed in favor of a limited contribution from the robot's side, and a significant cognitive load is allotted to the human. We believe the transition from robots as recipients of human instruction to robots as capable collaborators hinges around the implementation of transparent systems, where mental models about the task are shared between peers, and the human partner is freed from the responsibility of taking care of both actors. In this work, we implement a transparent task planner able to be deployed in realistic, near-future applications. The proposed framework is capable of basic reasoning capabilities for what concerns role assignment and task allocation, and it interfaces with the human partner at the level of abstraction he is most comfortable with. The system is readily available to non-expert users, and programmable with high-level commands in an intuitive interface. Our results demonstrate an overall improvement in terms of completion time, as well as a reduced cognitive load for the human partner."
]
} |
1710.11194 | 2765533199 | The field of Human-Robot Collaboration (HRC) has seen a considerable amount of progress in the recent years. Although genuinely collaborative platforms are far from being deployed in real-world scenarios, advances in control and perception algorithms have progressively popularized robots in manufacturing settings, where they work side by side with human peers to achieve shared tasks. Unfortunately, little progress has been made toward the development of systems that are proactive in their collaboration, and autonomously take care of some of the chores that compose most of the collaboration tasks. In this work, we present a collaborative system capable of assisting the human partner with a variety of supportive behaviors in spite of its limited perceptual and manipulation capabilities and incomplete model of the task. Our framework leverages information from a high-level, hierarchical model of the task. The model, that is shared between the human and robot, enables transparent synchronization between the peers and understanding of each other's plan. More precisely, we derive a partially observable Markov model from the high-level task representation. We then use an online solver to compute a robot policy, that is robust to unexpected observations such as inaccuracies of perception, failures in object manipulations, as well as discovers hidden user preferences. We demonstrate that the system is capable of robustly providing support to the human in a furniture construction task. | Our paper is positioned within this growing body of work related to task representations in HRC. Unfortunately, little attention has been given to explicitly tackling the problem of effectively supporting the human partner. To our knowledge, only one work goes in this direction.
It presents an algorithm to generate supportive behaviors during collaborative activity, although its results in simulation fall short of providing practical demonstrations of the technique. On the other side of the spectrum, a number of the works cited above achieve supportive behaviors to some extent without explicitly targeting them @cite_19 @cite_28 @cite_20 @cite_12 . A limitation of these approaches is that, as mentioned previously, they rely on exact task knowledge that is not always available for complex tasks in practical applications. | {
"cite_N": [
"@cite_28",
"@cite_19",
"@cite_12",
"@cite_20"
],
"mid": [
"2004669996",
"2065804258",
"2419526527",
"2212535562"
],
"abstract": [
"We describe the design and evaluation of Chaski, a robot plan execution system that uses insights from human-human teaming to make human-robot teaming more natural and fluid. Chaski is a task-level executive that enables a robot to collaboratively execute a shared plan with a person. The system chooses and schedules the robot's actions, adapts to the human partner, and acts to minimize the human's idle time. We evaluate Chaski in human subject experiments in which a person works with a mobile and dexterous robot to collaboratively assemble structures using building blocks. We measure team performance outcomes for robots controlled by Chaski compared to robots that are verbally commanded, step-by-step by the human teammate. We show that Chaski reduces the human's idle time by 85 , a statistically significant difference. This result supports the hypothesis that human-robot team performance is improved when a robot emulates the effective coordination behaviors observed in human teams.",
"A crucial skill for fluent action meshing in human team activity is a learned and calculated selection of anticipatory actions. We believe that the same holds for robotic teammates, if they are to perform in a similarly fluent manner with their human counterparts. In this work, we describe a model for human-robot joint action, and propose an adaptive action selection mechanism for a robotic teammate, which makes anticipatory decisions based on the confidence of their validity and their relative risk. We conduct an analysis of our method, predicting an improvement in task efficiency compared to a purely reactive process. We then present results from a study involving untrained human subjects working with a simulated version of a robot using our system. We show a significant improvement in best-case task efficiency when compared to a group of users working with a reactive agent, as well as a significant difference in the perceived commitment of the robot to the team and its contribution to the team's fluency and success. By way of explanation, we raise a number of fluency metric hypotheses, and evaluate their significance between the two study conditions.",
"In human-robot collaboration, multi-agent domains, or single-robot manipulation with multiple end-effectors, the activities of the involved parties are naturally concurrent. Such domains are also naturally relational as they involve objects, multiple agents, and models should generalize over objects and agents. We propose a novel formalization of relational concurrent activity processes that allows us to transfer methods from standard relational MDPs, such as Monte-Carlo planning and learning from demonstration, to concurrent cooperation domains. We formally compare the formulation to previous propositional models of concurrent decision making and demonstrate planning and learning from demonstration methods on a real-world human-robot assembly task.",
"In this work, we present an algorithm for improving collaborator performance on sequential manipulation tasks. Our agent-decoupled, optimization-based, task and motion planning approach merges considerations derived from both symbolic and geometric planning domains. This results in the generation of supportive behaviors enabling a teammate to reduce cognitive and kinematic burdens during task completion. We describe our algorithm alongside representative use cases, with an evaluation based on solving complex circuit building problems. We conclude with a discussion of applications and extensions to human-robot teaming scenarios."
]
} |
1710.10723 | 2765390718 | We consider the problem of adapting neural paragraph-level question answering models to the case where entire documents are given as input. Our proposed solution trains models to produce well calibrated confidence scores for their results on individual paragraphs. We sample multiple paragraphs from the documents during training, and use a shared-normalization training objective that encourages the model to produce globally correct output. We combine this method with a state-of-the-art pipeline for training models on document QA data. Experiments demonstrate strong performance on several document QA datasets. Overall, we are able to achieve a score of 71.3 F1 on the web portion of TriviaQA, a large improvement from the 56.7 F1 of the previous best system. | The state of the art in reading comprehension has been rapidly advanced by neural models, in no small part due to the introduction of many large datasets. The first large scale datasets for training neural reading comprehension models used a Cloze-style task, where systems must predict a held out word from a piece of text @cite_3 @cite_24 . Additional datasets including SQuAD @cite_23 , WikiReading @cite_18 , MS Marco @cite_11 and TriviaQA @cite_25 provided more realistic questions. Another dataset of trivia questions, Quasar-T @cite_21 , was introduced recently that uses ClueWeb09 @cite_12 as its source for documents. In this work we choose to focus on SQuAD and TriviaQA. | {
"cite_N": [
"@cite_18",
"@cite_21",
"@cite_3",
"@cite_24",
"@cite_23",
"@cite_25",
"@cite_12",
"@cite_11"
],
"mid": [
"2950663335",
"2734823783",
"2949615363",
"",
"2427527485",
"2612431505",
"",
"2951534261"
],
"abstract": [
"We present WikiReading, a large-scale natural language understanding task and publicly-available dataset with 18 million instances. The task is to predict textual values from the structured knowledge base Wikidata by reading the text of the corresponding Wikipedia articles. The task contains a rich variety of challenging classification and extraction sub-tasks, making it well-suited for end-to-end models such as deep neural networks (DNNs). We compare various state-of-the-art DNN-based architectures for document classification, information extraction, and question answering. We find that models supporting a rich answer space, such as word or character sequences, perform best. Our best-performing model, a word-level sequence to sequence model with a mechanism to copy out-of-vocabulary words, obtains an accuracy of 71.8 .",
"We present two new large-scale datasets aimed at evaluating systems designed to comprehend a natural language query and extract its answer from a large corpus of text. The Quasar-S dataset consists of 37000 cloze-style (fill-in-the-gap) queries constructed from definitions of software entity tags on the popular website Stack Overflow. The posts and comments on the website serve as the background corpus for answering the cloze questions. The Quasar-T dataset consists of 43000 open-domain trivia questions and their answers obtained from various internet sources. ClueWeb09 serves as the background corpus for extracting these answers. We pose these datasets as a challenge for two related subtasks of factoid Question Answering: (1) searching for relevant pieces of text that include the correct answer to a query, and (2) reading the retrieved text to answer the query. We also describe a retrieval system for extracting relevant sentences and documents from the corpus given a query, and include these in the release for researchers wishing to only focus on (2). We evaluate several baselines on both datasets, ranging from simple heuristics to powerful neural models, and show that these lag behind human performance by 16.4 and 32.1 for Quasar-S and -T respectively. The datasets are available at this https URL .",
"Teaching machines to read natural language documents remains an elusive challenge. Machine reading systems can be tested on their ability to answer questions posed on the contents of documents that they have seen, but until now large scale training and test datasets have been missing for this type of evaluation. In this work we define a new methodology that resolves this bottleneck and provides large scale supervised reading comprehension data. This allows us to develop a class of attention based deep neural networks that learn to read real documents and answer complex questions with minimal prior knowledge of language structure.",
"",
"We present the Stanford Question Answering Dataset (SQuAD), a new reading comprehension dataset consisting of 100,000+ questions posed by crowdworkers on a set of Wikipedia articles, where the answer to each question is a segment of text from the corresponding reading passage. We analyze the dataset to understand the types of reasoning required to answer the questions, leaning heavily on dependency and constituency trees. We build a strong logistic regression model, which achieves an F1 score of 51.0 , a significant improvement over a simple baseline (20 ). However, human performance (86.8 ) is much higher, indicating that the dataset presents a good challenge problem for future research. The dataset is freely available at this https URL",
"We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network, that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23 and 40 vs. 80 ), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- this http URL",
"",
"We introduce a large scale MAchine Reading COmprehension dataset, which we name MS MARCO. The dataset comprises of 1,010,916 anonymized questions---sampled from Bing's search query logs---each with a human generated answer and 182,669 completely human rewritten generated answers. In addition, the dataset contains 8,841,823 passages---extracted from 3,563,535 web documents retrieved by Bing---that provide the information necessary for curating the natural language answers. A question in the MS MARCO dataset may have multiple answers or no answers at all. Using this dataset, we propose three different tasks with varying levels of difficulty: (i) predict if a question is answerable given a set of context passages, and extract and synthesize the answer as a human would (ii) generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context, and finally (iii) rank a set of retrieved passages given a question. The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering. We believe that the scale and the real-world nature of this dataset makes it attractive for benchmarking machine reading comprehension and question-answering models."
]
} |
1710.10723 | 2765390718 | We consider the problem of adapting neural paragraph-level question answering models to the case where entire documents are given as input. Our proposed solution trains models to produce well calibrated confidence scores for their results on individual paragraphs. We sample multiple paragraphs from the documents during training, and use a shared-normalization training objective that encourages the model to produce globally correct output. We combine this method with a state-of-the-art pipeline for training models on document QA data. Experiments demonstrate strong performance on several document QA datasets. Overall, we are able to achieve a score of 71.3 F1 on the web portion of TriviaQA, a large improvement from the 56.7 F1 of the previous best system. | Open question answering has been the subject of much research, especially spurred by the TREC question answering track @cite_15 . Knowledge bases can be used, such as in @cite_0 , although the resulting systems are limited by the quality of the knowledge base. Systems that try to answer questions using natural language resources such as YodaQA @cite_17 typically use pipelined methods to retrieve related text, build answer candidates, and pick a final output. | {
"cite_N": [
"@cite_0",
"@cite_15",
"@cite_17"
],
"mid": [
"2252136820",
"2086511124",
"2607739056"
],
"abstract": [
"In this paper, we train a semantic parser that scales up to Freebase. Instead of relying on annotated logical forms, which is especially expensive to obtain at large scale, we learn from question-answer pairs. The main challenge in this setting is narrowing down the huge number of possible logical predicates for a given question. We tackle this problem in two ways: First, we build a coarse mapping from phrases to predicates using a knowledge base and a large text corpus. Second, we use a bridging operation to generate additional predicates based on neighboring predicates. On the dataset of Cai and Yates (2013), despite not having annotated logical forms, our system outperforms their state-of-the-art parser. Additionally, we collected a more realistic and challenging dataset of question-answer pairs and improves over a natural baseline.",
"",
"This is a preprint, submitted on 2015-03-22. Question Answering as a sub-field of information retrieval and information extraction is recently enjoying renewed pop- ularity, triggered by the publicized success of IBM Watson in the Jeopardy! competition. But Question Answering re- search is now proceeding in several semi-independent tiers depending on the precise task formulation and constraints on the knowledge base, and new researchers entering the field can focus only on various restricted sub-tasks as no modern full-scale software system for QA has been openly available until recently. By our YodaQA system that we introduce here, we seek to re- unite and boost research efforts in Question Answering, pro- viding a modular, open source pipeline for this task — allow- ing integration of various knowledge base paradigms, an- swer production and analysis strategies and using a machine learned models to rank the answers. Within this pipeline, we also supply a baseline QA system inspired by DeepQA with solid performance and propose a reference experimen- tal setup for easy future performance comparisons. In this paper, we review the available open QA platforms, present the architecture of our pipeline, the components of the baseline QA system, and also analyze the system perfor- mance on the reference dataset."
]
} |
1710.10577 | 2765787895 | Given a pre-trained CNN without any testing samples, this paper proposes a simple yet effective method to diagnose feature representations of the CNN. We aim to discover representation flaws caused by potential dataset bias. More specifically, when the CNN is trained to estimate image attributes, we mine latent relationships between representations of different attributes inside the CNN. Then, we compare the mined attribute relationships with ground-truth attribute relationships to discover the CNN's blind spots and failure modes due to dataset bias. In fact, representation flaws caused by dataset bias cannot be examined by conventional evaluation strategies based on testing images, because testing images may also have a similar bias. Experiments have demonstrated the effectiveness of our method. | Given a feature map produced by a CNN, Dosovitskiy @cite_25 trained a new up-convolutional network to invert the feature map to the original image. Similarly, this approach was not designed for the visualization of a single attribute output. | {
"cite_N": [
"@cite_25"
],
"mid": [
"2273348943"
],
"abstract": [
"Feature representations, both hand-designed and learned ones, are often hard to analyze and interpret, even when they are extracted from visual data. We propose a new approach to study image representations by inverting them with an up-convolutional neural network. We apply the method to shallow representations (HOG, SIFT, LBP), as well as to deep networks. For shallow representations our approach provides significantly better reconstructions than existing methods, revealing that there is surprisingly rich information contained in these features. Inverting a deep network trained on ImageNet provides several insights into the properties of the feature representation learned by the network. Most strikingly, the colors and the rough contours of an image can be reconstructed from activations in higher network layers and even from the predicted class probabilities."
]
} |
1710.10710 | 2766993077 | Deep Learning methods usually require huge amounts of training data to perform at their full potential, and often require expensive manual labeling. Using synthetic images is therefore very attractive to train object detectors, as the labeling comes for free, and several approaches have been proposed to combine synthetic and real images for training. In this paper, we show that a simple trick is sufficient to train very effectively modern object detectors with synthetic images only: We freeze the layers responsible for feature extraction to generic layers pre-trained on real images, and train only the remaining layers with plain OpenGL rendering. Our experiments with very recent deep architectures for object recognition (Faster-RCNN, R-FCN, Mask-RCNN) and image feature extractors (InceptionResnet and Resnet) show this simple approach performs surprisingly well. | Mixing real and synthetic data to improve detection performance is a well established process. Many approaches such as @cite_16 @cite_29 @cite_7 , to mention only very recent ones, have shown the usefulness of adding synthetic data when real data is limited. In contrast to @cite_16 @cite_29 which use real masked image patches, @cite_7 uses 3D CAD models and a structure-preserving deformation pipeline to generate new synthetic models to prevent overfitting. However, while these approaches obtain better results compared to detectors trained on real data only, they still require real data. | {
"cite_N": [
"@cite_29",
"@cite_16",
"@cite_7"
],
"mid": [
"2963231598",
"2744438518",
"1591870335"
],
"abstract": [
"",
"A major impediment in rapidly deploying object detection models for instance detection is the lack of large annotated datasets. For example, finding a large labeled dataset containing instances in a particular kitchen is unlikely. Each new environment with new instances requires expensive data collection and annotation. In this paper, we propose a simple approach to generate large annotated instance datasets with minimal effort. Our key insight is that ensuring only patch-level realism provides enough training signal for current object detector models. We automatically cut' object instances and paste' them on random backgrounds. A naive way to do this results in pixel artifacts which result in poor performance for trained models. We show how to make detectors ignore these artifacts during training and generate data that gives competitive performance on real data. Our method outperforms existing synthesis approaches and when combined with real images improves relative performance by more than 21 on benchmark datasets. In a cross-domain setting, our synthetic data combined with just 10 real data outperforms models trained on all real data.",
"Object viewpoint estimation from 2D images is an essential task in computer vision. However, two issues hinder its progress: scarcity of training data with viewpoint annotations, and a lack of powerful features. Inspired by the growing availability of 3D models, we propose a framework to address both issues by combining render-based image synthesis and CNNs (Convolutional Neural Networks). We believe that 3D models have the potential in generating a large number of images of high variation, which can be well exploited by deep CNN with a high learning capacity. Towards this goal, we propose a scalable and overfit-resistant image synthesis pipeline, together with a novel CNN specifically tailored for the viewpoint estimation task. Experimentally, we show that the viewpoint estimation from our pipeline can significantly outperform state-of-the-art methods on PASCAL 3D+ benchmark."
]
} |
1710.10710 | 2766993077 | Deep Learning methods usually require huge amounts of training data to perform at their full potential, and often require expensive manual labeling. Using synthetic images is therefore very attractive to train object detectors, as the labeling comes for free, and several approaches have been proposed to combine synthetic and real images for training. In this paper, we show that a simple trick is sufficient to train very effectively modern object detectors with synthetic images only: We freeze the layers responsible for feature extraction to generic layers pre-trained on real images, and train only the remaining layers with plain OpenGL rendering. Our experiments with very recent deep architectures for object recognition (Faster-RCNN, R-FCN, Mask-RCNN) and image feature extractors (InceptionResnet and Resnet) show this simple approach performs surprisingly well. | To address this, a new line of work @cite_16 @cite_29 @cite_26 moves away from graphics-based renderings to composing real images. The underlying theme is to paste masked patches of objects into real images, thereby reducing the dependence on graphics renderings. This approach has the advantage that the images of the objects are already in the right domain---the domain of real images---and thus the domain gap between image compositions and real images is smaller than that between graphics-based renderings and real images. While this has shown considerable success, the amount of data is still restricted to the number of images taken of the object in the data-gathering step and therefore does not allow generating new views of the object. Furthermore, it is not possible to generate new illumination settings or proper occlusions since shape and depth are usually not available. In addition, this approach depends on segmenting the object out from the background, which is prone to segmentation errors when generating the object masks. | {
"cite_N": [
"@cite_29",
"@cite_16",
"@cite_26"
],
"mid": [
"2963231598",
"2744438518",
"2604236302"
],
"abstract": [
"",
"A major impediment in rapidly deploying object detection models for instance detection is the lack of large annotated datasets. For example, finding a large labeled dataset containing instances in a particular kitchen is unlikely. Each new environment with new instances requires expensive data collection and annotation. In this paper, we propose a simple approach to generate large annotated instance datasets with minimal effort. Our key insight is that ensuring only patch-level realism provides enough training signal for current object detector models. We automatically 'cut' object instances and 'paste' them on random backgrounds. A naive way to do this results in pixel artifacts which result in poor performance for trained models. We show how to make detectors ignore these artifacts during training and generate data that gives competitive performance on real data. Our method outperforms existing synthesis approaches and when combined with real images improves relative performance by more than 21% on benchmark datasets. In a cross-domain setting, our synthetic data combined with just 10% real data outperforms models trained on all real data.",
"We introduce a novel method for 3D object detection and pose estimation from color images only. We first use segmentation to detect the objects of interest in 2D even in presence of partial occlusions and cluttered background. By contrast with recent patch-based methods, we rely on a “holistic” approach: We apply to the detected objects a Convolutional Neural Network (CNN) trained to predict their 3D poses in the form of 2D projections of the corners of their 3D bounding boxes. This, however, is not sufficient for handling objects from the recent T-LESS dataset: These objects exhibit an axis of rotational symmetry, and the similarity of two images of such an object under two different poses makes training the CNN challenging. We solve this problem by restricting the range of poses used for training, and by introducing a classifier to identify the range of a pose at run-time before estimating it. We also use an optional additional step that refines the predicted poses. We improve the state-of-the-art on the LINEMOD dataset from 73.7% [2] to 89.3% of correctly registered RGB frames. We are also the first to report results on the Occlusion dataset [1] using color images only. We obtain 54% of frames passing the Pose 6D criterion on average on several sequences of the T-LESS dataset, compared to the 67% of the state-of-the-art [10] on the same sequences which uses both color and depth. The full approach is also scalable, as a single network can be trained for multiple objects simultaneously."
]
} |
1710.10710 | 2766993077 | Deep Learning methods usually require huge amounts of training data to perform at their full potential, and often require expensive manual labeling. Using synthetic images is therefore very attractive to train object detectors, as the labeling comes for free, and several approaches have been proposed to combine synthetic and real images for training. In this paper, we show that a simple trick is sufficient to train very effectively modern object detectors with synthetic images only: We freeze the layers responsible for feature extraction to generic layers pre-trained on real images, and train only the remaining layers with plain OpenGL rendering. Our experiments with very recent deep architectures for object recognition (Faster-RCNN, R-FCN, Mask-RCNN) and image feature extractors (InceptionResnet and Resnet) show this simple approach performs surprisingly well. | Recently, several approaches @cite_21 @cite_15 tried to overcome the domain gap between real and synthetic data by using generative adversarial networks (GANs). This way they produced better results than training with real data. However, GANs are hard to train and up to now, they have mainly shown their usefulness on regression tasks and not on detection applications. | {
"cite_N": [
"@cite_15",
"@cite_21"
],
"mid": [
"2949212125",
"2567101557"
],
"abstract": [
"Collecting well-annotated image datasets to train modern machine learning algorithms is prohibitively expensive for many tasks. One appealing alternative is rendering synthetic data where ground-truth annotations are generated automatically. Unfortunately, models trained purely on rendered images often fail to generalize to real images. To address this shortcoming, prior work introduced unsupervised domain adaptation algorithms that attempt to map representations between the two domains or learn to extract features that are domain-invariant. In this work, we present a new approach that learns, in an unsupervised manner, a transformation in the pixel space from one domain to the other. Our generative adversarial network (GAN)-based method adapts source-domain images to appear as if drawn from the target domain. Our approach not only produces plausible samples, but also outperforms the state-of-the-art on a number of unsupervised domain adaptation scenarios by large margins. Finally, we demonstrate that the adaptation process generalizes to object classes unseen during training.",
"With recent progress in graphics, it has become more tractable to train models on synthetic images, potentially avoiding the need for expensive annotations. However, learning from synthetic images may not achieve the desired performance due to a gap between synthetic and real image distributions. To reduce this gap, we propose Simulated+Unsupervised (S+U) learning, where the task is to learn a model to improve the realism of a simulator's output using unlabeled real data, while preserving the annotation information from the simulator. We develop a method for S+U learning that uses an adversarial network similar to Generative Adversarial Networks (GANs), but with synthetic images as inputs instead of random vectors. We make several key modifications to the standard GAN algorithm to preserve annotations, avoid artifacts, and stabilize training: (i) a 'self-regularization' term, (ii) a local adversarial loss, and (iii) updating the discriminator using a history of refined images. We show that this enables generation of highly realistic images, which we demonstrate both qualitatively and with a user study. We quantitatively evaluate the generated images by training models for gaze estimation and hand pose estimation. We show a significant improvement over using synthetic images, and achieve state-of-the-art results on the MPIIGaze dataset without any labeled real data."
]
} |
1710.10710 | 2766993077 | Deep Learning methods usually require huge amounts of training data to perform at their full potential, and often require expensive manual labeling. Using synthetic images is therefore very attractive to train object detectors, as the labeling comes for free, and several approaches have been proposed to combine synthetic and real images for training. In this paper, we show that a simple trick is sufficient to train very effectively modern object detectors with synthetic images only: We freeze the layers responsible for feature extraction to generic layers pre-trained on real images, and train only the remaining layers with plain OpenGL rendering. Our experiments with very recent deep architectures for object recognition (Faster-RCNN, R-FCN, Mask-RCNN) and image feature extractors (InceptionResnet and Resnet) show this simple approach performs surprisingly well. | Yet another approach is to rely on transfer learning @cite_4 @cite_8 @cite_12 , to exploit a large amount of available data in a source domain, here the domain of synthetic images, to correctly classify data from the target domain, here the domain of real images, for which the amount of training data is limited. This is typically done by tying two predictors together, one trained on the source domain and the other on the target domain, or by training a single predictor on the two domains; most deep-learning-based approaches, with the exception of Rozantsev et al., use shared weights, so that effectively a single predictor is trained. This is a general approach, as the source and target domains can be very different, compared to synthetic and real images, which are more closely related to each other. In this paper, we exploit this relation by applying the same feature extractor to the two domains. However, in contrast to @cite_4 @cite_8 @cite_12 , we do not need any real images of the objects of interest in our approach: we do need real background images, but not real images containing the objects of interest. | {
"cite_N": [
"@cite_4",
"@cite_12",
"@cite_8"
],
"mid": [
"2312004824",
"",
"2953127297"
],
"abstract": [
"The performance of a classifier trained on data coming from a specific domain typically degrades when applied to a related but different one. While annotating many samples from the new domain would address this issue, it is often too expensive or impractical. Domain Adaptation has therefore emerged as a solution to this problem; It leverages annotated data from a source domain, in which it is abundant, to train a classifier to operate in a target domain, in which it is either sparse or even lacking altogether. In this context, the recent trend consists of learning deep architectures whose weights are shared for both domains, which essentially amounts to learning domain invariant features. Here, we show that it is more effective to explicitly model the shift from one domain to the other. To this end, we introduce a two-stream architecture, where one operates in the source domain and the other in the target domain. In contrast to other approaches, the weights in corresponding layers are related but not shared . We demonstrate that this both yields higher accuracy than state-of-the-art methods on several object recognition and detection tasks and consistently outperforms networks with shared weights in both supervised and unsupervised settings.",
"",
"The cost of large scale data collection and annotation often makes the application of machine learning algorithms to new tasks or datasets prohibitively expensive. One approach circumventing this cost is training models on synthetic data where annotations are provided automatically. Despite their appeal, such models often fail to generalize from synthetic to real images, necessitating domain adaptation algorithms to manipulate these models before they can be successfully applied. Existing approaches focus either on mapping representations from one domain to the other, or on learning to extract features that are invariant to the domain from which they were extracted. However, by focusing only on creating a mapping or shared representation between the two domains, they ignore the individual characteristics of each domain. We suggest that explicitly modeling what is unique to each domain can improve a model's ability to extract domain-invariant features. Inspired by work on private-shared component analysis, we explicitly learn to extract image representations that are partitioned into two subspaces: one component which is private to each domain and one which is shared across domains. Our model is trained not only to perform the task we care about in the source domain, but also to use the partitioned representation to reconstruct the images from both domains. Our novel architecture results in a model that outperforms the state-of-the-art on a range of unsupervised domain adaptation scenarios and additionally produces visualizations of the private and shared representations enabling interpretation of the domain adaptation process."
]
} |
1710.10545 | 2765191736 | We study monotonicity testing of Boolean functions over the hypergrid @math and design a non-adaptive tester with @math -sided error whose query complexity is @math . Previous to our work, the best known testers had query complexity linear in @math but independent of @math . We improve upon these testers as long as @math . To obtain our results, we work with what we call the augmented hypergrid, which adds extra edges to the hypergrid. Our main technical contribution is a Margulis-style isoperimetric result for the augmented hypergrid, and our tester, like previous testers for the hypercube domain, performs directed random walks on this structure. | In property testing, the notion of distance between functions is usually the Hamming distance between them, that is, the fraction of points at which they differ. More generally one can think of a general measure over the domain and the distance is the measure of the points at which the two functions differ. Monotonicity testing has been studied @cite_23 @cite_11 @cite_9 over general product measures. It is now known @cite_9 that for functions over @math , there exist testers making @math -queries over any product distribution; in fact there exist better testers if the distribution is known. A simple argument (Claim 3.6 in @cite_9 ) shows that testing monotonicity of Boolean functions over @math over any product distribution reduces to testing over @math over the uniform distribution. Thus our result gives @math -query monotonicity testers for @math , even over @math -biased distributions; this holds even when @math 's are not constants and depend on @math . Once again, it is not clear how to generalize the tester of Khot, Minzer, and Safra @cite_27 to obtain such a result. | {
"cite_N": [
"@cite_27",
"@cite_9",
"@cite_23",
"@cite_11"
],
"mid": [
"2904542722",
"1775912700",
"1986542077",
""
],
"abstract": [
"We show a directed and robust analogue of a boolean isoperimetric-type theorem of Talagrand [Geom. Funct. Anal., 3 (1993), pp. 295--314]. As an application, we give a monotonicity testing algorithm that makes @math nonadaptive queries to a function @math , always accepts a monotone function, and rejects a function that is @math -far from being monotone with constant probability.",
"The primary problem in property testing is to decide whether a given function satisfies a certain property or is far from any function satisfying it. This crucially requires a notion of distance between functions. The most prevalent notion is the Hamming distance over the uniform distribution on the domain. This restriction to uniformity is rather limiting, and it is important to investigate distances induced by more general distributions. In this article, we provide simple and optimal testers for bounded derivative properties over arbitrary product distributions. Bounded derivative properties include fundamental properties, such as monotonicity and Lipschitz continuity. Our results subsume almost all known results (upper and lower bounds) on monotonicity and Lipschitz testing over arbitrary ranges. We prove an intimate connection between bounded derivative property testing and binary search trees (BSTs). We exhibit a tester whose query complexity is the sum of expected depths of optimal BSTs for each marginal. Furthermore, we show that this sum-of-depths is also a lower bound. A technical contribution of our work is an optimal dimension reduction theorem for all bounded derivative properties that relates the distance of a function from the property to the distance of restrictions of the function to random lines. Such a theorem has been elusive even for monotonicity, and our theorem is an exponential improvement to the previous best-known result.",
"In property testing, we are given oracle access to a function f, and we wish to test if the function satisfies a given property P, or it is e-far from having that property. In a more general setting, the domain on which the function is defined is equipped with a probability distribution, which assigns different weight to different elements in the domain. This paper relates the complexity of testing the monotonicity of a function over the d-dimensional cube to the Shannon entropy of the underlying distribution. We provide an improved upper bound on the query complexity of the property tester.",
""
]
} |
1710.10776 | 2766611722 | Deep learning models require extensive architecture design exploration and hyperparameter optimization to perform well on a given task. The exploration of the model design space is often made by a human expert, and optimized using a combination of grid search and search heuristics over a large space of possible choices. Neural Architecture Search (NAS) is a Reinforcement Learning approach that has been proposed to automate architecture design. NAS has been successfully applied to generate Neural Networks that rival the best human-designed architectures. However, NAS requires sampling, constructing, and training hundreds to thousands of models to achieve well-performing architectures. This procedure needs to be executed from scratch for each new task. The application of NAS to a wide set of tasks currently lacks a way to transfer generalizable knowledge across tasks. In this paper, we present the Multitask Neural Model Search (MNMS) controller. Our goal is to learn a generalizable framework that can condition model construction on successful model searches for previously seen tasks, thus significantly speeding up the search for new tasks. We demonstrate that MNMS can conduct an automated architecture search for multiple tasks simultaneously while still learning well-performing, specialized models for each task. We then show that pre-trained MNMS controllers can transfer learning to new tasks. By leveraging knowledge from previous searches, we find that pre-trained MNMS models start from a better location in the search space and reduce search time on unseen tasks, while still discovering models that outperform published human-designed models. | Our work also draws on prior research in transfer learning and simultaneous multitask training. Transfer learning has been shown to achieve excellent results as an initialization method for deep networks, including for models trained using RL . 
Simultaneous multitask training can also facilitate learning between tasks with a common structure, though effectively retaining knowledge across tasks is still an active area of research @cite_1 @cite_0 . | {
"cite_N": [
"@cite_0",
"@cite_1"
],
"mid": [
"2735995851",
"2113207845"
],
"abstract": [
"Most deep reinforcement learning algorithms are data inefficient in complex and rich environments, limiting their applicability to many scenarios. One direction for improving data efficiency is multitask learning with shared neural network parameters, where efficiency may be improved through transfer across related tasks. In practice, however, this is not usually observed, because gradients from different tasks can interfere negatively, making learning unstable and sometimes even less data efficient. Another issue is the different reward schemes between tasks, which can easily lead to one task dominating the learning of a shared model. We propose a new approach for joint training of multiple tasks, which we refer to as Distral (Distill & transfer learning). Instead of sharing parameters between the different workers, we propose to share a \"distilled\" policy that captures common behaviour across tasks. Each worker is trained to solve its own task while constrained to stay close to the shared policy, while the shared policy is trained by distillation to be the centroid of all task policies. Both aspects of the learning process are derived by optimizing a joint objective function. We show that our approach supports efficient transfer on complex 3D environments, outperforming several related methods. Moreover, the proposed learning process is more robust and more stable---attributes that are critical in deep reinforcement learning.",
"Many computer vision algorithms depend on configuration settings that are typically hand-tuned in the course of evaluating the algorithm for a particular data set. While such parameter tuning is often presented as being incidental to the algorithm, correctly setting these parameter choices is frequently critical to realizing a method's full potential. Compounding matters, these parameters often must be re-tuned when the algorithm is applied to a new problem domain, and the tuning process itself often depends on personal experience and intuition in ways that are hard to quantify or describe. Since the performance of a given technique depends on both the fundamental quality of the algorithm and the details of its tuning, it is sometimes difficult to know whether a given technique is genuinely better, or simply better tuned. In this work, we propose a meta-modeling approach to support automated hyperparameter optimization, with the goal of providing practical tools that replace hand-tuning with a reproducible and unbiased optimization process. Our approach is to expose the underlying expression graph of how a performance metric (e.g. classification accuracy on validation examples) is computed from hyperparameters that govern not only how individual processing steps are applied, but even which processing steps are included. A hyperparameter optimization algorithm transforms this graph into a program for optimizing that performance metric. Our approach yields state of the art results on three disparate computer vision problems: a face-matching verification task (LFW), a face identification task (PubFig83) and an object recognition task (CIFAR-10), using a single broad class of feed-forward vision architectures."
]
} |
1710.10777 | 2766243150 | Recurrent neural networks (RNNs) have been successfully applied to various natural language processing (NLP) tasks and achieved better results than conventional methods. However, the lack of understanding of the mechanisms behind their effectiveness limits further improvements on their architectures. In this paper, we present a visual analytics method for understanding and comparing RNN models for NLP tasks. We propose a technique to explain the function of individual hidden state units based on their expected response to input texts. We then co-cluster hidden state units and words based on the expected response and visualize co-clustering results as memory chips and word clouds to provide more structured knowledge on RNNs' hidden states. We also propose a glyph-based sequence visualization based on aggregate information to analyze the behavior of an RNN's hidden state at the sentence-level. The usability and effectiveness of our method are demonstrated through case studies and reviews from domain experts. | In the field of computer vision, significant efforts were exerted to visualize and understand how the components of a CNN work together to perform classifications. These studies (Zeiler & Fergus @cite_42 , Dosovitskiy & Brox @cite_21 ) provided researchers with insights into neurons' learned features and inspired designs of better network architectures (e.g., the state-of-the-art performance on the ImageNet benchmark in 2013 proposed by Zeiler & Fergus @cite_42 ). | {
"cite_N": [
"@cite_42",
"@cite_21"
],
"mid": [
"2952186574",
"2273348943"
],
"abstract": [
"Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we address both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. We also perform an ablation study to discover the performance contribution from different model layers. This enables us to find model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets.",
"Feature representations, both hand-designed and learned ones, are often hard to analyze and interpret, even when they are extracted from visual data. We propose a new approach to study image representations by inverting them with an up-convolutional neural network. We apply the method to shallow representations (HOG, SIFT, LBP), as well as to deep networks. For shallow representations our approach provides significantly better reconstructions than existing methods, revealing that there is surprisingly rich information contained in these features. Inverting a deep network trained on ImageNet provides several insights into the properties of the feature representation learned by the network. Most strikingly, the colors and the rough contours of an image can be reconstructed from activations in higher network layers and even from the predicted class probabilities."
]
} |
1710.10777 | 2766243150 | Recurrent neural networks (RNNs) have been successfully applied to various natural language processing (NLP) tasks and achieved better results than conventional methods. However, the lack of understanding of the mechanisms behind their effectiveness limits further improvements on their architectures. In this paper, we present a visual analytics method for understanding and comparing RNN models for NLP tasks. We propose a technique to explain the function of individual hidden state units based on their expected response to input texts. We then co-cluster hidden state units and words based on the expected response and visualize co-clustering results as memory chips and word clouds to provide more structured knowledge on RNNs' hidden states. We also propose a glyph-based sequence visualization based on aggregate information to analyze the behavior of an RNN's hidden state at the sentence-level. The usability and effectiveness of our method are demonstrated through case studies and reviews from domain experts. | Performance-based methods analyze model architectures by altering critical network components and examining the relative performance changes. @cite_7 conducted a comprehensive study of LSTM components. @cite_38 evaluated the performance difference between GRUs and LSTMs. @cite_29 conducted an automatic search among thousands of RNN architectures. These approaches, however, only show overall performance differences regarding certain architectural components, and provide little understanding of the contribution of inner mechanisms. | {
"cite_N": [
"@cite_38",
"@cite_29",
"@cite_7"
],
"mid": [
"1924770834",
"581956982",
"1689711448"
],
"abstract": [
"In this paper we compare different types of recurrent units in recurrent neural networks (RNNs). Especially, we focus on more sophisticated units that implement a gating mechanism, such as a long short-term memory (LSTM) unit and a recently proposed gated recurrent unit (GRU). We evaluate these recurrent units on the tasks of polyphonic music modeling and speech signal modeling. Our experiments revealed that these advanced recurrent units are indeed better than more traditional recurrent units such as tanh units. Also, we found GRU to be comparable to LSTM.",
"The Recurrent Neural Network (RNN) is an extremely powerful sequence model that is often difficult to train. The Long Short-Term Memory (LSTM) is a specific RNN architecture whose design makes it much easier to train. While wildly successful in practice, the LSTM's architecture appears to be ad-hoc so it is not clear if it is optimal, and the significance of its individual components is unclear. In this work, we aim to determine whether the LSTM architecture is optimal or whether much better architectures exist. We conducted a thorough architecture search where we evaluated over ten thousand different RNN architectures, and identified an architecture that outperforms both the LSTM and the recently-introduced Gated Recurrent Unit (GRU) on some but not all tasks. We found that adding a bias of 1 to the LSTM's forget gate closes the gap between the LSTM and the GRU.",
"Several variants of the long short-term memory (LSTM) architecture for recurrent neural networks have been proposed since its inception in 1995. In recent years, these networks have become the state-of-the-art models for a variety of machine learning problems. This has led to a renewed interest in understanding the role and utility of various computational components of typical LSTM variants. In this paper, we present the first large-scale analysis of eight LSTM variants on three representative tasks: speech recognition, handwriting recognition, and polyphonic music modeling. The hyperparameters of all LSTM variants for each task were optimized separately using random search, and their importance was assessed using the powerful functional ANalysis Of VAriance framework. In total, we summarize the results of 5400 experimental runs ( @math years of CPU time), which makes our study the largest of its kind on LSTM networks. Our results show that none of the variants can improve upon the standard LSTM architecture significantly, and demonstrate the forget gate and the output activation function to be its most critical components. We further observe that the studied hyperparameters are virtually independent and derive guidelines for their efficient adjustment."
]
} |
1710.10777 | 2766243150 | Recurrent neural networks (RNNs) have been successfully applied to various natural language processing (NLP) tasks and achieved better results than conventional methods. However, the lack of understanding of the mechanisms behind their effectiveness limits further improvements on their architectures. In this paper, we present a visual analytics method for understanding and comparing RNN models for NLP tasks. We propose a technique to explain the function of individual hidden state units based on their expected response to input texts. We then co-cluster hidden state units and words based on the expected response and visualize co-clustering results as memory chips and word clouds to provide more structured knowledge on RNNs' hidden states. We also propose a glyph-based sequence visualization based on aggregate information to analyze the behavior of an RNN's hidden state at the sentence-level. The usability and effectiveness of our method are demonstrated through case studies and reviews from domain experts. | Another type of neural model worth mentioning extends RNNs with an attention mechanism to improve performance on specific tasks. @cite_9 applied attention in machine translation and showed the relationship between source and target sentences. @cite_30 designed two attention-based models for image captioning, which revealed the reasons behind the effectiveness of their models. Although the attention mechanism can aid interpretation without extra effort, it requires jointly training different models or modifying the original model, which limits its application to general RNN models. | {
"cite_N": [
"@cite_30",
"@cite_9"
],
"mid": [
"2950178297",
"2133564696"
],
"abstract": [
"Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.",
"Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition."
]
} |
1710.10777 | 2766243150 | Recurrent neural networks (RNNs) have been successfully applied to various natural language processing (NLP) tasks and achieved better results than conventional methods. However, the lack of understanding of the mechanisms behind their effectiveness limits further improvements on their architectures. In this paper, we present a visual analytics method for understanding and comparing RNN models for NLP tasks. We propose a technique to explain the function of individual hidden state units based on their expected response to input texts. We then co-cluster hidden state units and words based on the expected response and visualize co-clustering results as memory chips and word clouds to provide more structured knowledge on RNNs' hidden states. We also propose a glyph-based sequence visualization based on aggregate information to analyze the behavior of an RNN's hidden state at the sentence-level. The usability and effectiveness of our method are demonstrated through case studies and reviews from domain experts. | On the one hand, visualization has been increasingly adopted by the machine learning community to analyze @cite_31 , debug @cite_14 , and present @cite_39 machine learning models. On the other hand, a number of human-in-the-loop methods have been proposed as competitive replacements for fully automatic machine learning methods. These methods include visual classification @cite_19 @cite_23 @cite_37 , visual optimization @cite_10 @cite_27 , and visual feature engineering @cite_12 @cite_3 @cite_32 . | {
"cite_N": [
"@cite_37",
"@cite_14",
"@cite_32",
"@cite_3",
"@cite_39",
"@cite_19",
"@cite_27",
"@cite_23",
"@cite_31",
"@cite_10",
"@cite_12"
],
"mid": [
"",
"",
"",
"2186022498",
"",
"4709571",
"2127058057",
"2101474491",
"2512274390",
"",
"2728444372"
],
"abstract": [
"",
"",
"",
"Machine learning requires an effective combination of data, features, and algorithms. While many tools exist for working with machine learning data and algorithms, support for thinking of new features, or feature ideation, remains poor. In this paper, we investigate two general approaches to support feature ideation: visual summaries and sets of errors. We present FeatureInsight, an interactive visual analytics tool for building new dictionary features (semantically related groups of words) for text classification problems. FeatureInsight supports an error-driven feature ideation process and provides interactive visual summaries of sets of misclassified documents. We conducted a controlled experiment evaluating both visual summaries and sets of errors in FeatureInsight. Our results show that visual summaries significantly improve feature ideation, especially in combination with sets of errors. Users preferred visual summaries over viewing raw data, and only preferred examining sets when visual summaries were provided. We discuss extensions of both approaches to data types other than text, and point to areas for future research.",
"",
"",
"Machine learning is an increasingly used computational tool within human-computer interaction research. While most researchers currently utilize an iterative approach to refining classifier models and performance, we propose that ensemble classification techniques may be a viable and even preferable alternative. In ensemble learning, algorithms combine multiple classifiers to build one that is superior to its components. In this paper, we present EnsembleMatrix, an interactive visualization system that presents a graphical view of confusion matrices to help users understand relative merits of various classifiers. EnsembleMatrix allows users to directly interact with the visualizations in order to explore and build combination models. We evaluate the efficacy of the system and the approach in a user study. Results show that users are able to quickly combine multiple classifiers operating on multiple feature sets to produce an ensemble classifier with accuracy that approaches best-reported performance classifying images in the CalTech-101 dataset.",
"An alternative form to multidimensional projections for the visual analysis of data represented in multidimensional spaces is the deployment of similarity trees, such as Neighbor Joining trees. They organize data objects on the visual plane emphasizing their levels of similarity with high capability of detecting and separating groups and subgroups of objects. Besides this similarity-based hierarchical data organization, some of their advantages include the ability to decrease point clutter; high precision; and a consistent view of the data set during focusing, offering a very intuitive way to view the general structure of the data set as well as to drill down to groups and subgroups of interest. Disadvantages of similarity trees based on neighbor joining strategies include their computational cost and the presence of virtual nodes that utilize too much of the visual space. This paper presents a highly improved version of the similarity tree technique. The improvements in the technique are given by two procedures. The first is a strategy that replaces virtual nodes by promoting real leaf nodes to their place, saving large portions of space in the display and maintaining the expressiveness and precision of the technique. The second improvement is an implementation that significantly accelerates the algorithm, impacting its use for larger data sets. We also illustrate the applicability of the technique in visual data mining, showing its advantages to support visual classification of data sets, with special attention to the case of image classification. We demonstrate the capabilities of the tree for analysis and iterative manipulation and employ those capabilities to support evolving to a satisfactory data organization and classification.",
"Performance analysis is critical in applied machine learning because it influences the models practitioners produce. Current performance analysis tools suffer from issues including obscuring important characteristics of model behavior and dissociating performance from data. In this work, we present Squares, a performance visualization for multiclass classification problems. Squares supports estimating common performance metrics while displaying instance-level distribution information necessary for helping practitioners prioritize efforts and access data. Our controlled study shows that practitioners can assess performance significantly faster and more accurately with Squares than a confusion matrix, a common performance analysis tool in machine learning.",
"",
"One main task for domain experts in analysing their nD data is to detect and interpret class cluster separations and outliers. In fact, an important question is, which features dimensions separate classes best or allow a cluster-based data classification. Common approaches rely on projections from nD to 2D, which comes with some challenges, such as: The space of projection contains an infinite number of items. How to find the right one? The projection approaches suffers from distortions and misleading effects. How to rely to the projected class cluster separation? The projections involve the complete set of dimensions features. How to identify irrelevant dimensions? Thus, to address these challenges, we introduce a visual analytics concept for the feature selection based on linear discriminative star coordinates DSC, which generate optimal cluster separating views in a linear sense for both labeled and unlabeled data. This way the user is able to explore how each dimension contributes to clustering. To support to explore relations between clusters and data dimensions, we provide a set of cluster-aware interactions allowing to smartly iterate through subspaces of both records and features in a guided manner. We demonstrate our features selection approach for optimal cluster class separation analysis with a couple of experiments on real-life benchmark high-dimensional data sets."
]
} |
1710.10777 | 2766243150 | Recurrent neural networks (RNNs) have been successfully applied to various natural language processing (NLP) tasks and achieved better results than conventional methods. However, the lack of understanding of the mechanisms behind their effectiveness limits further improvements on their architectures. In this paper, we present a visual analytics method for understanding and comparing RNN models for NLP tasks. We propose a technique to explain the function of individual hidden state units based on their expected response to input texts. We then co-cluster hidden state units and words based on the expected response and visualize co-clustering results as memory chips and word clouds to provide more structured knowledge on RNNs' hidden states. We also propose a glyph-based sequence visualization based on aggregate information to analyze the behavior of an RNN's hidden state at the sentence-level. The usability and effectiveness of our method are demonstrated through case studies and reviews from domain experts. | In the field of deep learning, some recent studies have utilized visualization to help understand RNNs. @cite_33 studied the behavior of LSTM and GRU in speech recognition by projecting sequence history. @cite_25 showed that certain cell states can track long-range dependencies by overlaying heat maps on texts. @cite_43 also used heat maps to examine the sensitivity of different RNNs to words in a sentence. However, their visualizations only provided an overall analysis of RNNs. These studies did not explore RNNs' hidden states in detail. | {
"cite_N": [
"@cite_43",
"@cite_25",
"@cite_33"
],
"mid": [
"1601924930",
"1951216520",
"2953188482"
],
"abstract": [
"While neural networks have been successfully applied to many NLP tasks, the resulting vector-based models are very difficult to interpret. For example, it's not clear how they achieve compositionality, building sentence meaning from the meanings of words and phrases. In this paper we describe four strategies for visualizing compositionality in neural models for NLP, inspired by similar work in computer vision. We first plot unit values to visualize compositionality of negation, intensification, and concessive clauses, allowing us to see well-known markedness asymmetries in negation. We then introduce three simple and straightforward methods for visualizing a unit's salience, the amount it contributes to the final composed meaning: (1) gradient back-propagation, (2) the variance of a token from the average word node, (3) LSTM-style gates that measure information flow. We test our methods on sentiment using simple recurrent nets and LSTMs. Our general-purpose methods may have wide applications for understanding compositionality and other semantic properties of deep networks, and also shed light on why LSTMs outperform simple recurrent nets.",
"Recurrent Neural Networks (RNNs), and specifically a variant with Long Short-Term Memory (LSTM), are enjoying renewed interest as a result of successful applications in a wide range of machine learning problems that involve sequential data. However, while LSTMs provide exceptional results in practice, the source of their performance and their limitations remain rather poorly understood. Using character-level language models as an interpretable testbed, we aim to bridge this gap by providing an analysis of their representations, predictions and error types. In particular, our experiments reveal the existence of interpretable cells that keep track of long-range dependencies such as line lengths, quotes and brackets. Moreover, our comparative analysis with finite horizon n-gram models traces the source of the LSTM improvements to long-range structural dependencies. Finally, we provide analysis of the remaining errors and suggests areas for further study.",
"Recurrent neural networks (RNNs) have shown clear superiority in sequence modeling, particularly the ones with gated units, such as long short-term memory (LSTM) and gated recurrent unit (GRU). However, the dynamic properties behind the remarkable performance remain unclear in many applications, e.g., automatic speech recognition (ASR). This paper employs visualization techniques to study the behavior of LSTM and GRU when performing speech recognition tasks. Our experiments show some interesting patterns in the gated memory, and some of them have inspired simple yet effective modifications on the network structure. We report two of such modifications: (1) lazy cell update in LSTM, and (2) shortcut connections for residual learning. Both modifications lead to more comprehensible and powerful networks."
]
} |
1710.10777 | 2766243150 | Recurrent neural networks (RNNs) have been successfully applied to various natural language processing (NLP) tasks and achieved better results than conventional methods. However, the lack of understanding of the mechanisms behind their effectiveness limits further improvements on their architectures. In this paper, we present a visual analytics method for understanding and comparing RNN models for NLP tasks. We propose a technique to explain the function of individual hidden state units based on their expected response to input texts. We then co-cluster hidden state units and words based on the expected response and visualize co-clustering results as memory chips and word clouds to provide more structured knowledge on RNNs' hidden states. We also propose a glyph-based sequence visualization based on aggregate information to analyze the behavior of an RNN's hidden state at the sentence-level. The usability and effectiveness of our method are demonstrated through case studies and reviews from domain experts. | In the field of visualization, recent work has exhibited the effectiveness of visual analytics in understanding, diagnosing and presenting neural networks. @cite_40 treated deep CNN as a directed acyclic graph and built an interactive visual analytics system to analyze CNN models. @cite_6 applied dimensionality reduction to visualize learned representations, as well as the relationships among artificial neurons, and provided insightful visual feedback of artificial neural networks. While visualization has achieved considerable success on CNNs, little work has focused on RNNs. Most related to our work, @cite_41 has proposed an interactive visualization system to explore hidden state patterns similar to a given phrase on a dataset. This system also allows users to flexibly explore given dimensions of hidden states. 
However, the parallel coordinates design is not scalable for efficiently analyzing hundreds or thousands of hidden state dimensions. | {
"cite_N": [
"@cite_41",
"@cite_40",
"@cite_6"
],
"mid": [
"2962711575",
"2343061342",
""
],
"abstract": [
"Recurrent neural networks, and in particular long short-term memory networks (LSTMs), are a remarkably effective tool for sequence modeling that learn a dense black-box hidden representation of their sequential input. Researchers interested in better understanding these models have studied the changes in hidden state representations over time and noticed some interpretable patterns but also significant noise. In this work, we present LSTMVis a visual analysis tool for recurrent neural networks with a focus on understanding these hidden state dynamics. The tool allows a user to select a hypothesis input range to focus on local state changes, to match these states changes to similar patterns in a large data set, and to align these results with domain specific structural annotations. We further show several use cases of the tool for analyzing specific hidden state properties on datasets containing nesting, phrase structure, and chord progressions, and demonstrate how the tool can be used to isolate patterns for further statistical analysis.",
"Deep convolutional neural networks (CNNs) have achieved breakthrough performance in many pattern recognition tasks such as image classification. However, the development of high-quality deep models typically relies on a substantial amount of trial-and-error, as there is still no clear understanding of when and why a deep model works. In this paper, we present a visual analytics approach for better understanding, diagnosing, and refining deep CNNs. We formulate a deep CNN as a directed acyclic graph. Based on this formulation, a hybrid visualization is developed to disclose the multiple facets of each neuron and the interactions between them. In particular, we introduce a hierarchical rectangle packing algorithm and a matrix reordering algorithm to show the derived features of a neuron cluster. We also propose a biclustering-based edge bundling method to reduce visual clutter caused by a large number of connections between neurons. We evaluated our method on a set of CNNs and the results are generally favorable.",
""
]
} |
1710.10777 | 2766243150 | Recurrent neural networks (RNNs) have been successfully applied to various natural language processing (NLP) tasks and achieved better results than conventional methods. However, the lack of understanding of the mechanisms behind their effectiveness limits further improvements on their architectures. In this paper, we present a visual analytics method for understanding and comparing RNN models for NLP tasks. We propose a technique to explain the function of individual hidden state units based on their expected response to input texts. We then co-cluster hidden state units and words based on the expected response and visualize co-clustering results as memory chips and word clouds to provide more structured knowledge on RNNs' hidden states. We also propose a glyph-based sequence visualization based on aggregate information to analyze the behavior of an RNN's hidden state at the sentence-level. The usability and effectiveness of our method are demonstrated through case studies and reviews from domain experts. | We formulate the relation between hidden state units and discrete inputs of RNNs as bipartite graphs to investigate the structure of information stored in hidden states. Co-clustering is a widely used method for analyzing bipartite graphs, which simultaneously clusters two kinds of entities in a graph @cite_1 . Some recent work combined co-clustering with visualization to assist intelligence analysis, where different types of entities are considered @cite_49 @cite_47 . A recent work by @cite_2 presented an interactive co-clustering visualization where cluster nodes are visualized as adjacency matrices or treemaps. Although both adjacency matrices and treemaps used in this visualization are well established, neither can be adapted to visualize abstract entities like hidden states. | {
"cite_N": [
"@cite_47",
"@cite_1",
"@cite_49",
"@cite_2"
],
"mid": [
"",
"2144544802",
"2001319512",
"2345428105"
],
"abstract": [
"",
"A large number of clustering approaches have been proposed for the analysis of gene expression data obtained from microarray experiments. However, the results from the application of standard clustering methods to genes are limited. This limitation is imposed by the existence of a number of experimental conditions where the activity of genes is uncorrelated. A similar limitation exists when clustering of conditions is performed. For this reason, a number of algorithms that perform simultaneous clustering on the row and column dimensions of the data matrix has been proposed. The goal is to find submatrices, that is, subgroups of genes and subgroups of conditions, where the genes exhibit highly correlated activities for every condition. In this paper, we refer to this class of algorithms as biclustering. Biclustering is also referred in the literature as coclustering and direct clustering, among others names, and has also been used in fields such as information retrieval and data mining. In this comprehensive survey, we analyze a large number of existing approaches to biclustering, and classify them in accordance with the type of biclusters they can find, the patterns of biclusters that are discovered, the methods used to perform the search, the approaches used to evaluate the solution, and the target applications.",
"A prototype visual analytics tool uses data mining algorithms to find patterns in textual datasets and then supports exploration of these patterns in the form of biclusters on a high-resolution display.",
"A bipartite graph models the relation between two different types of entities. It is applicable, for example, to describe persons' affiliations to different social groups or their association with subjects such as topics of interest. In these applications, it is important to understand the connectivity patterns among the entities in the bipartite graph. For the example of a bipartite relation between persons and their topics of interest, people may form groups based on their common interests, and the topics also can be grouped or categorized based on the interested audiences. Co-clustering methods can identify such connectivity patterns and find clusters within the two types of entities simultaneously. In this paper, we propose an interactive visualization design that incorporates co-clustering methods to facilitate the identification of node clusters formed by their common connections in a bipartite graph. Besides highlighting the automatically detected node clusters and the connections among them, the visual interface also provides visual cues for evaluating the homogeneity of the bipartite connections in a cluster, identifying potential outliers, and analyzing the correlation of node attributes with the cluster structure. The interactive visual interface allows users to flexibly adjust the node grouping to incorporate their prior knowledge of the domain, either by direct manipulation (i.e., splitting and merging the clusters), or by providing explicit feedback on the cluster quality, based on which the system will learn a parametrization of the co-clustering algorithm to better align with the users' notion of node similarity. To demonstrate the utility of the system, we present two example usage scenarios on real world datasets."
]
} |
1710.10777 | 2766243150 | Recurrent neural networks (RNNs) have been successfully applied to various natural language processing (NLP) tasks and achieved better results than conventional methods. However, the lack of understanding of the mechanisms behind their effectiveness limits further improvements on their architectures. In this paper, we present a visual analytics method for understanding and comparing RNN models for NLP tasks. We propose a technique to explain the function of individual hidden state units based on their expected response to input texts. We then co-cluster hidden state units and words based on the expected response and visualize co-clustering results as memory chips and word clouds to provide more structured knowledge on RNNs' hidden states. We also propose a glyph-based sequence visualization based on aggregate information to analyze the behavior of an RNN's hidden state at the sentence-level. The usability and effectiveness of our method are demonstrated through case studies and reviews from domain experts. | Comparative visualization was adopted to fulfill the design requirements of RNNVis. @cite_44 suggested three typical strategies for comparative visualization, namely, juxtaposition (or separation), superposition (or overlay), and explicit encoding. We mainly employ juxtaposition and superposition for comparing RNNs at three different levels, namely, detail, sentence, and overview levels. The details of the design choices are discussed in sec:interaction . | {
"cite_N": [
"@cite_44"
],
"mid": [
"2030246490"
],
"abstract": [
"Data analysis often involves the comparison of complex objects. With the ever increasing amounts and complexity of data, the demand for systems to help with these comparisons is also growing. Increasingly, information visualization tools support such comparisons explicitly, beyond simply allowing a viewer to examine each object individually. In this paper, we argue that the design of information visualizations of complex objects can, and should, be studied in general, that is independently of what those objects are. As a first step in developing this general understanding of comparison, we propose a general taxonomy of visual designs for comparison that groups designs into three basic categories, which can be combined. To clarify the taxonomy and validate its completeness, we provide a survey of work in information visualization related to comparison. Although we find a great diversity of systems and approaches, we see that all designs are assembled from the building blocks of juxtaposition, superposition and explicit encodings. This initial exploration shows the power of our model, and suggests future challenges in developing a general understanding of comparative visualization and facilitating the development of more comparative visualization tools."
]
} |
1710.10403 | 2767153224 | Functional transfer matrices consist of real functions with trainable parameters. In this work, functional transfer matrices are used to model functional connections in neural networks. Different from linear connections in conventional weight matrices, the functional connections can represent nonlinear relations between two neighbouring layers. Neural networks with the functional connections, which are called functional transfer neural networks, can be trained via back-propagation. On the two spirals problem, the functional transfer neural networks are able to show considerably better performance than conventional multi-layer perceptrons. On the MNIST handwritten digit recognition task, the performance of the functional transfer neural networks is comparable to that of the conventional model. This study has demonstrated that the functional transfer matrices are able to perform better than the conventional weight matrices in specific cases, so that they can be alternatives of the conventional ones. | Functional-link neural networks use functional-links to enhance input patterns @cite_14 . Usually, they consist of a functional expansion module, an input layer and an output layer, and the functional expansion module consists of many functional-links: For instance, a functional-link can be a trainable linear function (which is called a random variable functional-link), a multiplication function (which is called a generic basis), a trigonometric function or a Chebyshev polynomial basis function @cite_32 . A typical application of functional-link neural networks is to model nonlinear decision boundaries in channel equalisers @cite_26 . Dehuri and Cho @cite_9 also use them as classifiers, where functional-links are used to select input features. | {
"cite_N": [
"@cite_9",
"@cite_14",
"@cite_32",
"@cite_26"
],
"mid": [
"1999194769",
"1996640396",
"2017838688",
"1984152361"
],
"abstract": [
"In this paper, an adequate set of input features is selected for functional expansion genetically for the purpose of solving the problem of classification in data mining using functional link neural network. The proposed method named as HFLNN aims to choose an optimal subset of input features by eliminating features with little or no predictive information and designs a more compact classifier. With an adequate set of basis functions, HFLNN overcomes the non-linearity of problems, which is a common phenomenon in single layer neural networks. The properties like simplicity of the architecture (i.e., no hidden layer) and the low computational complexity of the network (i.e., less number of weights to be learned) encourage us to use it in classification task of data mining. We present a mathematical analysis of the stability and convergence of the proposed method. Further the issue of statistical tests for comparison of algorithms on multiple datasets, which is even more essential in data mining studies, has been all but ignored. In this paper, we recommend a set of simple, yet safe, robust and non-parametric tests for statistical comparisons of the HFLNN with functional link neural network (FLNN) and radial basis function network (RBFN) classifiers over multiple datasets by an extensive set of simulation studies.",
"Abstract In this paper we explore and discuss the learning and generalization characteristics of the random vector version of the Functional-link net and compare these with those attainable with the GDR algorithm. This is done for a well-behaved deterministic function and for real-world data. It seems that ‘ overtraining ’ occurs for stochastic mappings. Otherwise there is saturation of training.",
"Functional link neural network (FLNN) is a class of higher order neural networks (HONs) and have gained extensive popularity in recent years. FLNN have been successfully used in many applications such as system identification, channel equalization, short-term electric-load forecasting, and some of the tasks of data mining. The goals of this paper are to: (1) provide readers who are novice to this area with a basis of understanding FLNN and a comprehensive survey, while offering specialists an updated picture of the depth and breadth of the theory and applications; (2) present a new hybrid learning scheme for Chebyshev functional link neural network (CFLNN); and (3) suggest possible remedies and guidelines for practical applications in data mining. We then validate the proposed learning scheme for CFLNN in classification by an extensive simulation study. Comprehensive performance comparisons with a number of existing methods are presented.",
"Nonlinear intersymbol interference (ISI) leads to significant error rate in nonlinear communication and digital storage channel. In this paper, therefore, a novel computationally efficient functional link neural network cascaded with Chebyshev orthogonal polynomial is proposed to combat nonlinear ISI. The equalizer has a simple structure in which the nonlinearity is introduced by functional expansion of the input pattern by trigonometric polynomial and Chebyshev orthogonal polynomial. Due to the input pattern and nonlinear approximation enhancement, the proposed structure can approximate arbitrarily nonlinear decision boundaries. It has been utilized for nonlinear channel equalization. The performance of the proposed adaptive nonlinear equalizer is compared with functional link neural network (FLNN) equalizer, multilayer perceptron (MLP) network and radial basis function (RBF) along with conventional normalized least-mean-square algorithms (NLMS) for different linear and nonlinear channel models. The comparison of convergence rate, bit error rate (BER) and steady state error performance, and computational complexity involved for neural network equalizers is provided."
]
} |
1710.10403 | 2767153224 | Functional transfer matrices consist of real functions with trainable parameters. In this work, functional transfer matrices are used to model functional connections in neural networks. Different from linear connections in conventional weight matrices, the functional connections can represent nonlinear relations between two neighbouring layers. Neural networks with the functional connections, which are called functional transfer neural networks, can be trained via back-propagation. On the two spirals problem, the functional transfer neural networks are able to show considerably better performance than conventional multi-layer perceptrons. On the MNIST handwritten digit recognition task, the performance of the functional transfer neural networks is comparable to that of the conventional model. This study has demonstrated that the functional transfer matrices are able to perform better than the conventional weight matrices in specific cases, so that they can be alternatives of the conventional ones. | There are four main differences between our work and the above work about functional-link neural networks: Firstly, we use functional transfer matrices as alternatives to standard weight matrices, whereas functional expansion modules are NOT alternatives to weight matrices in functional-link neural networks. Secondly, our models have up to 10 hidden layers, and functional transfer matrices are applied to all of them, whereas functional-link neural networks have no hidden layer or have only one hidden layer with a linear weight matrix, and functional-links are only used to enhance input patterns @cite_9 @cite_14 . Thirdly, most functional transfer matrices are trainable, whereas most functional-links are fixed (except the random variable functional-link). Finally, functional transfer matrices are applied to hand-written digit recognition and the modelling of memory blocks, and they are different from the applications of functional-links. | {
"cite_N": [
"@cite_9",
"@cite_14"
],
"mid": [
"1999194769",
"1996640396"
],
"abstract": [
"In this paper, an adequate set of input features is selected for functional expansion genetically for the purpose of solving the problem of classification in data mining using functional link neural network. The proposed method named as HFLNN aims to choose an optimal subset of input features by eliminating features with little or no predictive information and designs a more compact classifier. With an adequate set of basis functions, HFLNN overcomes the non-linearity of problems, which is a common phenomenon in single layer neural networks. The properties like simplicity of the architecture (i.e., no hidden layer) and the low computational complexity of the network (i.e., less number of weights to be learned) encourage us to use it in classification task of data mining. We present a mathematical analysis of the stability and convergence of the proposed method. Further the issue of statistical tests for comparison of algorithms on multiple datasets, which is even more essential in data mining studies, has been all but ignored. In this paper, we recommend a set of simple, yet safe, robust and non-parametric tests for statistical comparisons of the HFLNN with functional link neural network (FLNN) and radial basis function network (RBFN) classifiers over multiple datasets by an extensive set of simulation studies.",
"Abstract In this paper we explore and discuss the learning and generalization characteristics of the random vector version of the Functional-link net and compare these with those attainable with the GDR algorithm. This is done for a well-behaved deterministic function and for real-world data. It seems that ‘ overtraining ’ occurs for stochastic mappings. Otherwise there is saturation of training."
]
} |
1710.10662 | 2766380578 | Abstract Methods from computational topology are becoming more and more popular in computer vision and have shown to improve the state-of-the-art in several tasks. In this paper, we investigate the applicability of topological descriptors in the context of 3D surface analysis for the classification of different surface textures. We present a comprehensive study on topological descriptors, investigate their robustness and expressiveness and compare them with state-of-the-art methods including Convolutional Neural Networks (CNNs). Results show that class-specific information is reflected well in topological descriptors. The investigated descriptors can directly compete with non-topological descriptors and capture complementary information. As a consequence they improve the state-of-the-art when combined with non-topological descriptors. | The analysis of 3D surface texture in image-space can be considered a texture analysis task on the depth map. Thus, approaches from image texture analysis become applicable. Popular methods for image texture analysis include histograms of vector-quantized filter responses @cite_25 and later generalizations such as the bag-of-visual-words model for textures @cite_29 and the Fisher vector @cite_34 . Recently, deep learning-based approaches have been introduced for image texture analysis @cite_18 , which outperform many existing methods. | {
"cite_N": [
"@cite_29",
"@cite_34",
"@cite_18",
"@cite_25"
],
"mid": [
"1625255723",
"2147238549",
"819977924",
"1598614695"
],
"abstract": [
"We present a novel method for generic visual categorization: the problem of identifying the object content of natural images while generalizing across variations inherent to the object class. This bag of keypoints method is based on vector quantization of affine invariant descriptors of image patches. We propose and compare two alternative implementations using different classifiers: Naive Bayes and SVM. The main advantages of the method are that it is simple, computationally efficient and intrinsically invariant. We present results for simultaneously classifying seven semantic visual categories. These results clearly demonstrate that the method is robust to background clutter and produces good categorization accuracy even without exploiting geometric information.",
"Within the field of pattern classification, the Fisher kernel is a powerful framework which combines the strengths of generative and discriminative approaches. The idea is to characterize a signal with a gradient vector derived from a generative probability model and to subsequently feed this representation to a discriminative classifier. We propose to apply this framework to image categorization where the input signals are images and where the underlying generative model is a visual vocabulary: a Gaussian mixture model which approximates the distribution of low-level features in images. We show that Fisher kernels can actually be understood as an extension of the popular bag-of-visterms. Our approach demonstrates excellent performance on two challenging databases: an in-house database of 19 object scene categories and the recently released VOC 2006 database. It is also very practical: it has low computational needs both at training and test time and vocabularies trained on one set of categories can be applied to another set without any significant loss in performance.",
"Visual textures have played a key role in image understanding because they convey important semantics of images, and because texture representations that pool local image descriptors in an orderless manner have had a tremendous impact in diverse applications. In this paper we make several contributions to texture understanding. First, instead of focusing on texture instance and material category recognition, we propose a human-interpretable vocabulary of texture attributes to describe common texture patterns, complemented by a new describable texture dataset for benchmarking. Second, we look at the problem of recognizing materials and texture attributes in realistic imaging conditions, including when textures appear in clutter, developing corresponding benchmarks on top of the recently proposed OpenSurfaces dataset. Third, we revisit classic texture represenations, including bag-of-visual-words and the Fisher vectors, in the context of deep learning and show that these have excellent efficiency and generalization properties if the convolutional layers of a deep model are used as filter banks. We obtain in this manner state-of-the-art performance in numerous datasets well beyond textures, an efficient method to apply deep features to image regions, as well as benefit in transferring features from one domain to another.",
"This paper presents an algorithm for detecting, localizing and grouping instances of repeated scene elements. The grouping is represented by a graph where nodes correspond to individual elements and arcs join spatially neighboring elements. Associated with each arc is an affine map that best transforms the image patch at one location to the other. The approach we propose consists of 4 steps: (1) detecting “interesting” elements in the image; (2) matching elements with their neighbors and estimating the affine transform between them; (3) growing the element to form a more distinctive unit; and (4) grouping the elements. The idea is analogous to tracking in dynamic imagery. In our context, we “track” an element to spatially neighboring locations in one image, while in temporal tracking, one would perform the search in neighboring image frames."
]
} |
1710.10662 | 2766380578 | Abstract Methods from computational topology are becoming more and more popular in computer vision and have shown to improve the state-of-the-art in several tasks. In this paper, we investigate the applicability of topological descriptors in the context of 3D surface analysis for the classification of different surface textures. We present a comprehensive study on topological descriptors, investigate their robustness and expressiveness and compare them with state-of-the-art methods including Convolutional Neural Networks (CNNs). Results show that class-specific information is reflected well in topological descriptors. The investigated descriptors can directly compete with non-topological descriptors and capture complementary information. As a consequence they improve the state-of-the-art when combined with non-topological descriptors. | Our work mainly builds upon the persistence image (PI) approach @cite_9 . In our study, we investigate different topological descriptors, including the PI, for the domain of surface texture analysis. The difference from the investigations in @cite_11 and @cite_9 is that we try to enrich the PI representation by pre-filtering, normalization, and feature selection, and that we analyze in depth the information captured by the PI descriptor, its redundancy and discriminativity. Furthermore, we investigate the sensitivity of the representation to its computational parameters. Beyond this, we combine them with non-topological state-of-the-art descriptors to investigate synergy effects. | {
"cite_N": [
"@cite_9",
"@cite_11"
],
"mid": [
"2964237352",
"1960384938"
],
"abstract": [
"Many data sets can be viewed as a noisy sampling of an underlying space, and tools from topological data analysis can characterize this structure for the purpose of knowledge discovery. One such tool is persistent homology, which provides a multiscale description of the homological features within a data set. A useful representation of this homological information is a persistence diagram (PD). Efforts have been made to map PDs into spaces with additional structure valuable to machine learning tasks. We convert a PD to a finite-dimensional vector representation which we call a persistence image (PI), and prove the stability of this transformation with respect to small perturbations in the inputs. The discriminatory power of PIs is compared against existing methods, showing significant performance gains. We explore the use of PIs with vector-based machine learning tools, such as linear sparse support vector machines, which identify features containing discriminating topological information. Finally, high accuracy inference of parameter values from the dynamic output of a discrete dynamical system (the linked twist map) and a partial differential equation (the anisotropic Kuramoto-Sivashinsky equation) provide a novel application of the discriminatory power of PIs.",
"Topological data analysis offers a rich source of valuable information to study vision problems. Yet, so far we lack a theoretically sound connection to popular kernel-based learning techniques, such as kernel SVMs or kernel PCA. In this work, we establish such a connection by designing a multi-scale kernel for persistence diagrams, a stable summary representation of topological features in data. We show that this kernel is positive definite and prove its stability with respect to the 1-Wasserstein distance. Experiments on two benchmark datasets for 3D shape classification retrieval and texture recognition show considerable performance gains of the proposed method compared to an alternative approach that is based on the recently introduced persistence landscapes."
]
} |
1710.10036 | 2765194618 | Deep learning (DL) advances state-of-the-art reinforcement learning (RL), by incorporating deep neural networks in learning representations from the input to RL. However, the conventional deep neural network architecture is limited in learning representations for multi-task RL (MT-RL), as multiple tasks can refer to different kinds of representations. In this paper, we thus propose a novel deep neural network architecture, namely generalization tower network (GTN), which can achieve MT-RL within a single learned model. Specifically, the architecture of GTN is composed of both horizontal and vertical streams. In our GTN architecture, horizontal streams are used to learn representation shared in similar tasks. In contrast, the vertical streams are introduced to be more suitable for handling diverse tasks, which encodes hierarchical shared knowledge of these tasks. The effectiveness of the introduced vertical stream is validated by experimental results. Experimental results further verify that our GTN architecture is able to advance the state-of-the-art MT-RL, via being tested on 51 Atari games. | Taking advantage of recent success in deep learning (DL) @cite_3 and long-existing research in RL @cite_11 , DRL has made great progress, starting from deep @math -learning @cite_6 . Notable advances include double @math -learning @cite_2 , prioritized experience replay @cite_5 , dueling networks @cite_0 , and asynchronous methods @cite_20 . Building on these key achievements, A3C @cite_20 achieves state-of-the-art performance in mastering RL tasks, reaching human-level intelligence in various domains. However, in spite of these achievements, agents in the above RL works are still limited to learning and mastering one task at a time. | {
"cite_N": [
"@cite_6",
"@cite_3",
"@cite_0",
"@cite_2",
"@cite_5",
"@cite_20",
"@cite_11"
],
"mid": [
"2145339207",
"",
"2173564293",
"2952523895",
"2201581102",
"2964043796",
""
],
"abstract": [
"An artificial agent is developed that learns to play a diverse range of classic Atari 2600 computer games directly from sensory experience, achieving a performance comparable to that of an expert human player; this work paves the way to building general-purpose learning algorithms that bridge the divide between perception and action.",
"",
"In recent years there have been many successes of using deep representations in reinforcement learning. Still, many of these applications use conventional architectures, such as convolutional networks, LSTMs, or auto-encoders. In this paper, we present a new neural network architecture for model-free reinforcement learning. Our dueling network represents two separate estimators: one for the state value function and one for the state-dependent action advantage function. The main benefit of this factoring is to generalize learning across actions without imposing any change to the underlying reinforcement learning algorithm. Our results show that this architecture leads to better policy evaluation in the presence of many similar-valued actions. Moreover, the dueling architecture enables our RL agent to outperform the state-of-the-art on the Atari 2600 domain.",
"The popular Q-learning algorithm is known to overestimate action values under certain conditions. It was not previously known whether, in practice, such overestimations are common, whether they harm performance, and whether they can generally be prevented. In this paper, we answer all these questions affirmatively. In particular, we first show that the recent DQN algorithm, which combines Q-learning with a deep neural network, suffers from substantial overestimations in some games in the Atari 2600 domain. We then show that the idea behind the Double Q-learning algorithm, which was introduced in a tabular setting, can be generalized to work with large-scale function approximation. We propose a specific adaptation to the DQN algorithm and show that the resulting algorithm not only reduces the observed overestimations, as hypothesized, but that this also leads to much better performance on several games.",
"Experience replay lets online reinforcement learning agents remember and reuse experiences from the past. In prior work, experience transitions were uniformly sampled from a replay memory. However, this approach simply replays transitions at the same frequency that they were originally experienced, regardless of their significance. In this paper we develop a framework for prioritizing experience, so as to replay important transitions more frequently, and therefore learn more efficiently. We use prioritized experience replay in Deep Q-Networks (DQN), a reinforcement learning algorithm that achieved human-level performance across many Atari games. DQN with prioritized experience replay achieves a new state-of-the-art, outperforming DQN with uniform replay on 41 out of 49 games.",
"We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.",
""
]
} |
1710.10036 | 2765194618 | Deep learning (DL) advances state-of-the-art reinforcement learning (RL), by incorporating deep neural networks in learning representations from the input to RL. However, the conventional deep neural network architecture is limited in learning representations for multi-task RL (MT-RL), as multiple tasks can refer to different kinds of representations. In this paper, we thus propose a novel deep neural network architecture, namely generalization tower network (GTN), which can achieve MT-RL within a single learned model. Specifically, the architecture of GTN is composed of both horizontal and vertical streams. In our GTN architecture, horizontal streams are used to learn representation shared in similar tasks. In contrast, the vertical streams are introduced to be more suitable for handling diverse tasks, which encodes hierarchical shared knowledge of these tasks. The effectiveness of the introduced vertical stream is validated by experimental results. Experimental results further verify that our GTN architecture is able to advance the state-of-the-art MT-RL, via being tested on 51 Atari games. | MT-RL, in which an agent learns more than one task at a time, is crucial in the RL area. In this regard, the ultimate goals for MT-RL can be classified as: (1) inter-task transfer, i.e., the RL model learned for one task can help in learning other tasks; (2) multi-task generalization, i.e., multiple tasks can be handled by learning a single RL model. The above topics have long been regarded as a critical challenge in many AI works @cite_7 @cite_22 @cite_14 . Recent advances in this topic can generally be divided into the following two directions. | {
"cite_N": [
"@cite_14",
"@cite_22",
"@cite_7"
],
"mid": [
"2114580749",
"2120501001",
"1504638679"
],
"abstract": [
"Transfer learning has recently gained popularity due to the development of algorithms that can successfully generalize information across multiple tasks. This article focuses on transfer in the context of reinforcement learning domains, a general learning framework where an agent acts in an environment to maximize a reward signal. The goals of this article are to (1) familiarize readers with the transfer learning problem in reinforcement learning domains, (2) explain why the problem is both interesting and difficult, (3) present a selection of existing techniques that demonstrate different solutions, and (4) provide representative open problems in the hope of encouraging additional research in this exciting area.",
"Lifelong Machine Learning, or LML, considers systems that can learn many tasks from one or more domains over its lifetime. The goal is to sequentially retain learned knowledge and to selectively transfer that knowledge when learning a new task so as to develop more accurate hypotheses or policies. Following a review of prior work on LML, we propose that it is now appropriate for the AI community to move beyond learning algorithms to more seriously consider the nature of systems that are capable of learning over a lifetime. Reasons for our position are presented and potential counter-arguments are discussed. The remainder of the paper contributes by defining LML, presenting a reference framework that considers all forms of machine learning, and listing several key challenges for and benefits from LML research. We conclude with ideas for next steps to advance the field.",
""
]
} |
1710.10036 | 2765194618 | Deep learning (DL) advances state-of-the-art reinforcement learning (RL), by incorporating deep neural networks in learning representations from the input to RL. However, the conventional deep neural network architecture is limited in learning representations for multi-task RL (MT-RL), as multiple tasks can refer to different kinds of representations. In this paper, we thus propose a novel deep neural network architecture, namely generalization tower network (GTN), which can achieve MT-RL within a single learned model. Specifically, the architecture of GTN is composed of both horizontal and vertical streams. In our GTN architecture, horizontal streams are used to learn representation shared in similar tasks. In contrast, the vertical streams are introduced to be more suitable for handling diverse tasks, which encodes hierarchical shared knowledge of these tasks. The effectiveness of the introduced vertical stream is validated by experimental results. Experimental results further verify that our GTN architecture is able to advance the state-of-the-art MT-RL, via being tested on 51 Atari games. | Inter-task transfer works primarily focus on how to transfer knowledge from a learned model to a new one when facing a new task. Upon such transfer, the newly learned models are capable of mastering more tasks, in a way that different tasks are learned one by one. However, the main challenge for inter-task transfer is catastrophic forgetting @cite_12 @cite_15 @cite_4 , which means that the newly learned knowledge may completely break the previously learned knowledge of old tasks. Several novel advances @cite_16 @cite_18 @cite_1 @cite_26 @cite_25 @cite_13 have been proposed to address this issue. | {
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_4",
"@cite_1",
"@cite_15",
"@cite_16",
"@cite_13",
"@cite_25",
"@cite_12"
],
"mid": [
"2109779438",
"",
"2036963181",
"2560647685",
"2047057213",
"2426267443",
"2174786457",
"",
""
],
"abstract": [
"Cascade-Correlation is a new architecture and supervised learning algorithm for artificial neural networks. Instead of just adjusting the weights in a network of fixed topology. Cascade-Correlation begins with a minimal network, then automatically trains and adds new hidden units one by one, creating a multi-layer structure. Once a new hidden unit has been added to the network, its input-side weights are frozen. This unit then becomes a permanent feature-detector in the network, available for producing outputs or for creating other, more complex feature detectors. The Cascade-Correlation architecture has several advantages over existing algorithms: it learns very quickly, the network determines its own size and topology, it retains the structures it has built even if the training set changes, and it requires no back-propagation of error signals through the connections of the network.",
"",
"Multilayer connectionist models of memory based on the encoder model using the backpropagation learning rule are evaluated. The models are applied to standard recognition memory procedures in which items are studied sequentially and then tested for retention. Sequential learning in these models leads to 2 major problems. First, well-learned information is forgotten rapidly as new information is learned. Second, discrimination between studied items and new items either decreases or is nonmonotonic as a function of learning. To address these problems, manipulations of the network within the multilayer model and several variants of the multilayer model were examined, including a model with prelearned memory and a context model, but none solved the problems. The problems discussed provide limitations on connectionist models applied to human memory and in tasks where information to be learned is not all available during learning. The first stage of the connectionist revolution in psychology is reaching maturity and perhaps drawing to an end. This stage has been concerned with the exploration of classes of models, and the criteria that have been used to evaluate the success of an application have been necessarily loose. In the early stages of development of a new approach, lax acceptability criteria are appropriate because of the large range of models to be examined. However, there comes a second stage when the models serve as competitors to existing models developed within other theoretical frameworks, and they have to be competitively evaluated according to more stringent criteria. A few notable connectionist models have reached these standards, whereas others have not. The second stage of development also requires that the connectionist models be evaluated in areas where their potential for success is not immediately obvious. One such area is recognition memory. The work presented in this article evaluates several variants of the multilayer connectionist model as accounts of empirical results in this area. I mainly discuss multilayer models using the error-correcting backpropagation algorithm and do not address other architectures such as adaptive resonance schemes (Carpenter & Grossberg, 1987). Before launching into the modeling of recognition memory, I need to specify the aims and rules under which this project was carried out. This is important in a new area of inquiry because there are many divergent views about what needs to be",
"Abstract The ability to learn tasks in a sequential fashion is crucial to the development of artificial intelligence. Until now neural networks have not been capable of this and it has been widely thought that catastrophic forgetting is an inevitable feature of connectionist models. We show that it is possible to overcome this limitation and train networks that can maintain expertise on tasks that they have not experienced for a long time. Our approach remembers old tasks by selectively slowing down learning on the weights important for those tasks. We demonstrate our approach is scalable and effective by solving a set of classification tasks based on a hand-written digit dataset and by learning several Atari 2600 games sequentially.",
"Damage to the hippocampal system disrupts recent memory but leaves remote memory intact. Our account of this suggests that memories are first stored via synaptic changes in the hippocampal system; that these changes support reinstatement of recent memories in the neocortex; that neocortical synapses change a little on each reinstatement; and that remote memory is based on accumulated neocortical changes. Models that learn via adaptive changes to connections help explain this organization. These models discover the structure in ensembles of items if learning of each item is gradual and interleaved with learning about other items. This suggests that neocortex learns slowly to discover the structure in ensembles of experiences. The hippocampal system permits rapid learning of new items without disrupting this structure, and reinstatement of new memories interleaves them with others to integrate them into structured neocortical memory systems. Psychological Review, in press",
"Methods and systems for performing a sequence of machine learning tasks. One system includes a sequence of deep neural networks (DNNs), including: a first DNN corresponding to a first machine learning task, wherein the first DNN comprises a first plurality of indexed layers, and each layer in the first plurality of indexed layers is configured to receive a respective layer input and process the layer input to generate a respective layer output; and one or more subsequent DNNs corresponding to one or more respective machine learning tasks, wherein each subsequent DNN comprises a respective plurality of indexed layers, and each layer in a respective plurality of indexed layers with index greater than one receives input from a preceding layer of the respective subsequent DNN, and one or more preceding layers of respective preceding DNNs, wherein a preceding layer is a layer whose index is one less than the current index.",
"The ability to act in multiple environments and transfer previous knowledge to new situations can be considered a critical aspect of any intelligent agent. Towards this goal, we define a novel method of multitask and transfer learning that enables an autonomous agent to learn how to behave in multiple tasks simultaneously, and then generalize its knowledge to new domains. This method, termed \"Actor-Mimic\", exploits the use of deep reinforcement learning and model compression techniques to train a single policy network that learns how to act in a set of distinct tasks by using the guidance of several expert teachers. We then show that the representations learnt by the deep policy network are capable of generalizing to new tasks with no prior expert guidance, speeding up learning in novel environments. Although our method can in general be applied to a wide range of problems, we use Atari games as a testing environment to demonstrate these methods.",
"",
""
]
} |
1710.10036 | 2765194618 | Deep learning (DL) advances state-of-the-art reinforcement learning (RL), by incorporating deep neural networks in learning representations from the input to RL. However, the conventional deep neural network architecture is limited in learning representations for multi-task RL (MT-RL), as multiple tasks can refer to different kinds of representations. In this paper, we thus propose a novel deep neural network architecture, namely generalization tower network (GTN), which can achieve MT-RL within a single learned model. Specifically, the architecture of GTN is composed of both horizontal and vertical streams. In our GTN architecture, horizontal streams are used to learn representation shared in similar tasks. In contrast, the vertical streams are introduced to be more suitable for handling diverse tasks, which encodes hierarchical shared knowledge of these tasks. The effectiveness of the introduced vertical stream is validated by experimental results. Experimental results further verify that our GTN architecture is able to advance the state-of-the-art MT-RL, via being tested on 51 Atari games. | Multi-task generalization @cite_21 @cite_8 @cite_24 generally concentrates on how to make an agent master several RL tasks with only a single learned model. Without pre-knowledge of which task it faces, the agent is greatly confused by the diversity of state representations and strategies. In short, multi-task generalization in RL mainly aims at learning a generalized model across diverse tasks. In this direction, previous works have proposed novel models and algorithms that are more capable of generalizing and transferring with shared representations. For example, the latest literature @cite_21 verifies the practicability of learning shared representations of value functions. However, it has been demonstrated that a completely shared model performs poorly @cite_8 . Furthermore, to deal with this problem, the agent in @cite_8 is designed to have different hidden layers and output layers for each task, while keeping the convolutional layers shared across all tasks. However, it can only learn to master 3 Atari games. An additional network @cite_24 has been designed to detect which task the agent is facing, and then to favor or select a corresponding stream in the model. | {
"cite_N": [
"@cite_24",
"@cite_21",
"@cite_8"
],
"mid": [
"2951961145",
"2294805292",
""
],
"abstract": [
"Reward function design and exploration time are arguably the biggest obstacles to the deployment of reinforcement learning (RL) agents in the real world. In many real-world tasks, designing a reward function takes considerable hand engineering and often requires additional sensors to be installed just to measure whether the task has been executed successfully. Furthermore, many interesting tasks consist of multiple implicit intermediate steps that must be executed in sequence. Even when the final outcome can be measured, it does not necessarily provide feedback on these intermediate steps. To address these issues, we propose leveraging the abstraction power of intermediate visual representations learned by deep models to quickly infer perceptual reward functions from small numbers of demonstrations. We present a method that is able to identify key intermediate steps of a task from only a handful of demonstration sequences, and automatically identify the most discriminative features for identifying these steps. This method makes use of the features in a pre-trained deep model, but does not require any explicit specification of sub-goals. The resulting reward functions can then be used by an RL agent to learn to perform the task in real-world settings. To evaluate the learned reward, we present qualitative results on two real-world tasks and a quantitative evaluation against a human-designed reward function. We also show that our method can be used to learn a real-world door opening skill using a real robot, even when the demonstration used for reward learning is provided by a human using their own hand. To our knowledge, these are the first results showing that complex robotic manipulation skills can be learned directly and without supervised labels from a video of a human performing the task. Supplementary material and data are available at this https URL",
"We investigate a paradigm in multi-task reinforcement learning (MT-RL) in which an agent is placed in an environment and needs to learn to perform a series of tasks, within this space. Since the environment does not change, there is potentially a lot of common ground amongst tasks and learning to solve them individually seems extremely wasteful. In this paper, we explicitly model and learn this shared structure as it arises in the state-action value space. We will show how one can jointly learn optimal value-functions by modifying the popular Value-Iteration and Policy-Iteration procedures to accommodate this shared representation assumption and leverage the power of multi-task supervised learning. Finally, we demonstrate that the proposed model and training procedures, are able to infer good value functions, even under low samples regimes. In addition to data efficiency, we will show in our analysis, that learning abstractions of the state space jointly across tasks leads to more robust, transferable representations with the potential for better generalization. this shared representation assumption and leverage the power of multi-task supervised learning. Finally, we demonstrate that the proposed model and training procedures, are able to infer good value functions, even under low samples regimes. In addition to data efficiency, we will show in our analysis, that learning abstractions of the state space jointly across tasks leads to more robust, transferable representations with the potential for better generalization.",
""
]
} |
1710.10089 | 2766102766 | In this paper, we compute a conservative approximation of the path-connected components of the free space of a rigid object in a 2D workspace in order to solve two closely related problems: to determine whether there exists a collision-free path between two given configurations, and to verify whether an object can escape arbitrarily far from its initial configuration -- i.e., whether the object is caged. Furthermore, we consider two quantitative characteristics of the free space: the volume of path-connected components and the width of narrow passages. To address these problems, we decompose the configuration space into a set of two-dimensional slices, approximate them as two-dimensional alpha-complexes, and then study the relations between them. This significantly reduces the computational complexity compared to a direct approximation of the free space. We implement our algorithm and run experiments in a three-dimensional configuration space of a simple object showing runtime of less than 2 seconds. | The problem of proving path non-existence has been addressed by @cite_17 in the context of motion planning, motivated by the fact that most modern sampling-based planning algorithms cannot guarantee that two configurations are disconnected, and instead rely on stopping heuristics in such situations @cite_10 . There, two configurations are proven to be disconnected when the object is 'too big' or 'too long' to pass through a 'gate' between them. In @cite_19 , approximate cell decomposition is used to prove path non-existence: the configuration space is decomposed into a set of cells, and for each cell it is decided whether the cell lies in the collision space. A somewhat similar approach is proposed in @cite_1 , where the configuration space is randomly sampled and its approximation is reconstructed as an alpha complex, which is later used to check the connectivity between pairs of configurations. | {
"cite_N": [
"@cite_1",
"@cite_19",
"@cite_10",
"@cite_17"
],
"mid": [
"2114306020",
"2068375174",
"",
"2101993361"
],
"abstract": [
"In this paper, we address the problem determining the connectivity of a robot's free configuration space. Our method iteratively builds a constructive proof that two configurations lie in disjoint components of the free configuration space. Our algorithm first generates samples that correspond to configurations for which the robot is in collision with an obstacle. These samples are then weighted by their generalized penetration distance, and used to construct alpha shapes. The alpha shape defines a collection of simplices that are fully contained within the configuration space obstacle region. These simplices can be used to quickly solve connectivity queries, which in turn can be used to define termination conditions for sampling-based planners. Such planners, while typically either resolution complete or probabilistically complete, are not able to determine when a path does not exist, and therefore would otherwise rely on heuristics to determine when the search for a free path should be abandoned. An implementation of the algorithm is provided for the case of a 3D Euclidean configuration space, and a proof of correctness is provided.",
"We present a simple algorithm to check for path non-existence for a low-degree-of-freedom (DOF) robot among static obstacles. Our algorithm is based on approximate cell decomposition of configuration space or C-space. We use C-obstacle cell query to check whether a cell lies entirely inside the C-obstacle region. This reduces the path non-existence problem to checking whether a path exists through the set of all cells that do not lie entirely inside the C-obstacle region. We present a simple and efficient algorithm to perform C-obstacle cell query using generalized penetration depth computation. Our algorithm is simple to implement and we demonstrate its performance on three-DOF and four-DOF robots.",
"",
"Probabilistic road-map (PRM) planners have shown great promise in attacking previously infeasible motion planning problems with many degrees of freedom. Yet when such a planner fails to find a path, it is not clear that no path exists, or that the planner simply did not sample adequately or intelligently the free part of the configuration space. We propose to attack the motion planning problem from the other end, focusing on disconnection proofs, or proofs showing that there exists no solution to the posed motion planning problem. Just as PRM planners avoid generating a complete description of the configuration space, our disconnection provers search for certain special classes of proofs that are compact and easy to find when the motion planning problem is 'obviously impossible,\" avoiding complex geometric and combinatorial calculations. We demonstrate such a prover in action for a simple, yet still realistic, motion planning problem. When it fails, the prover suggests key milestones, or configurations of the robot that can then be passed on and used by a PRM planner. Thus by hitting the motion planning problem from both ends, we hope to resolve the existence of a path, except in truly delicate border-line situations."
]
} |
1710.10089 | 2766102766 | In this paper, we compute a conservative approximation of the path-connected components of the free space of a rigid object in a 2D workspace in order to solve two closely related problems: to determine whether there exists a collision-free path between two given configurations, and to verify whether an object can escape arbitrarily far from its initial configuration -- i.e., whether the object is caged. Furthermore, we consider two quantitative characteristics of the free space: the volume of path-connected components and the width of narrow passages. To address these problems, we decompose the configuration space into a set of two-dimensional slices, approximate them as two-dimensional alpha-complexes, and then study the relations between them. This significantly reduces the computational complexity compared to a direct approximation of the free space. We implement our algorithm and run experiments in a three-dimensional configuration space of a simple object showing runtime of less than 2 seconds. | In this paper, we also aim to study the path-connectivity of the free space of the object. Unlike @cite_1 , we do not construct the collision space directly. Instead, we decompose it into a finite set of lower-dimensional 'slices'. This allows us to overcome the dimensionality problem without losing any necessary information about the topology of the configuration space. | {
"cite_N": [
"@cite_1"
],
"mid": [
"2114306020"
],
"abstract": [
"In this paper, we address the problem determining the connectivity of a robot's free configuration space. Our method iteratively builds a constructive proof that two configurations lie in disjoint components of the free configuration space. Our algorithm first generates samples that correspond to configurations for which the robot is in collision with an obstacle. These samples are then weighted by their generalized penetration distance, and used to construct alpha shapes. The alpha shape defines a collection of simplices that are fully contained within the configuration space obstacle region. These simplices can be used to quickly solve connectivity queries, which in turn can be used to define termination conditions for sampling-based planners. Such planners, while typically either resolution complete or probabilistically complete, are not able to determine when a path does not exist, and therefore would otherwise rely on heuristics to determine when the search for a free path should be abandoned. An implementation of the algorithm is provided for the case of a 3D Euclidean configuration space, and a proof of correctness is provided."
]
} |
1710.10182 | 2964323748 | Synthesizing face sketches from real photos and its inverse have many applications. However, photo sketch synthesis remains a challenging problem due to the fact that photo and sketch have different characteristics. In this work, we consider this task as an image-to-image translation problem and explore the recently popular generative models (GANs) to generate high-quality realistic photos from sketches and sketches from photos. Recent GAN-based methods have shown promising results on image-to-image translation problems and photo-to-sketch synthesis in particular, however, they are known to have limited abilities in generating high-resolution realistic images. To this end, we propose a novel synthesis framework called Photo-Sketch Synthesis using Multi-Adversarial Networks, (PS2-MAN) that iteratively generates low resolution to high resolution images in an adversarial way. The hidden layers of the generator are supervised to first generate lower resolution images followed by implicit refinement in the network to generate higher resolution images. Furthermore, since photo-sketch synthesis is a coupled paired translation problem, we leverage the pair information using CycleGAN framework. Both Image Quality Assessment (IQA) and Photo-Sketch Matching experiments are conducted to demonstrate the superior performance of our framework in comparison to existing state-of-the-art solutions. Code available at: https://github.com/lidan1/PhotoSketchMAN. | Existing works can be categorized based on multiple factors. Wang et al. @cite_12 categorize photo-sketch synthesis methods based on model construction techniques into three main classes: 1) subspace learning-based, 2) sparse representation-based, and 3) Bayesian inference-based approaches. 
Peng et al. @cite_9 perform the categorization based on representation strategies and come up with three broad approaches: 1) holistic image-based, 2) independent local patch-based, and 3) local patch with spatial constraints-based methods. | {
"cite_N": [
"@cite_9",
"@cite_12"
],
"mid": [
"2344899809",
"1985436611"
],
"abstract": [
"Face sketch–photo synthesis technique has attracted growing attention in many computer vision applications, such as law enforcement and digital entertainment. Existing methods either simply perform the face sketch–photo synthesis on the holistic image or divide the face image into regular rectangular patches ignoring the inherent structure of the face image. In view of such situations, this paper presents a novel superpixel-based face sketch–photo synthesis method by estimating the face structures through image segmentation. In our proposed method, face images are first segmented into superpixels, which are then dilated to enhance the compatibility of neighboring superpixels. Each input face image induces a specific graphical structure modeled by Markov networks. We employ a two-stage synthesis process to learn the face structures through Markov networks constructed from two scales of dilation, respectively. Experiments on several public databases demonstrate that our proposed face sketch–photo synthesis method achieves superior performance compared with the state-of-the-art methods.",
"This paper comprehensively surveys the development of face hallucination (FH), including both face super-resolution and face sketch-photo synthesis techniques. Indeed, these two techniques share the same objective of inferring a target face image (e.g. high-resolution face image, face sketch and face photo) from a corresponding source input (e.g. low-resolution face image, face photo and face sketch). Considering the critical role of image interpretation in modern intelligent systems for authentication, surveillance, law enforcement, security control, and entertainment, FH has attracted growing attention in recent years. Existing FH methods can be grouped into four categories: Bayesian inference approaches, subspace learning approaches, a combination of Bayesian inference and subspace learning approaches, and sparse representation-based approaches. In spite of achieving a certain level of development, FH is limited in its success by complex application conditions such as variant illuminations, poses, or views. This paper provides a holistic understanding and deep insight into FH, and presents a comparative analysis of representative methods and promising future directions."
]
} |
1710.10182 | 2964323748 | Synthesizing face sketches from real photos and its inverse have many applications. However, photo sketch synthesis remains a challenging problem due to the fact that photo and sketch have different characteristics. In this work, we consider this task as an image-to-image translation problem and explore the recently popular generative models (GANs) to generate high-quality realistic photos from sketches and sketches from photos. Recent GAN-based methods have shown promising results on image-to-image translation problems and photo-to-sketch synthesis in particular, however, they are known to have limited abilities in generating high-resolution realistic images. To this end, we propose a novel synthesis framework called Photo-Sketch Synthesis using Multi-Adversarial Networks, (PS2-MAN) that iteratively generates low resolution to high resolution images in an adversarial way. The hidden layers of the generator are supervised to first generate lower resolution images followed by implicit refinement in the network to generate higher resolution images. Furthermore, since photo-sketch synthesis is a coupled paired translation problem, we leverage the pair information using CycleGAN framework. Both Image Quality Assessment (IQA) and Photo-Sketch Matching experiments are conducted to demonstrate the superior performance of our framework in comparison to existing state-of-the-art solutions. Code available at: https://github.com/lidan1/PhotoSketchMAN. | More recently, Peng et al. @cite_45 proposed a multiple representations-based face sketch-photo synthesis method that adaptively combines multiple representations to represent an image patch, combining multiple features from face images processed using multiple filters. Additionally, they employ Markov networks to model the relationship between neighboring patches. Zhang et al. @cite_14 employed a sparse representation-based greedy search strategy to first estimate an initial sketch. 
Candidate image patches from the initial estimated sketch and the template sketch are then selected using multi-scale features. These candidate patches are refined and assembled to obtain the final sketch, which is further enhanced using a cascaded regression strategy. Peng et al. @cite_9 proposed a superpixel-based synthesis method involving a two-stage synthesis procedure. Wang et al. @cite_27 recently proposed the use of a Bayesian framework consisting of a neighbor selection model and a weight computation model. They consider the spatial neighboring constraint between adjacent image patches for both models, in contrast to existing methods where the adjacency constraint is considered for only one of the models. CNN-based methods such as @cite_20 and @cite_5 were proposed recently, showing promising results. There is also recent work on face synthesis from facial attributes @cite_44 , which applies sketch-to-photo synthesis as the second stage of its approach. | {
"cite_N": [
"@cite_14",
"@cite_9",
"@cite_44",
"@cite_27",
"@cite_45",
"@cite_5",
"@cite_20"
],
"mid": [
"2183595887",
"2344899809",
"2781490755",
"2570189907",
"2436544366",
"2799839429",
"2771349609"
],
"abstract": [
"Heterogeneous image conversion is a critical issue in many computer vision tasks, among which example-based face sketch style synthesis provides a convenient way to make artistic effects for photos. However, existing face sketch style synthesis methods generate stylistic sketches depending on many photo-sketch pairs. This requirement limits the generalization ability of these methods to produce arbitrarily stylistic sketches. To handle such a drawback, we propose a robust face sketch style synthesis method, which can convert photos to arbitrarily stylistic sketches based on only one corresponding template sketch. In the proposed method, a sparse representation-based greedy search strategy is first applied to estimate an initial sketch. Then, multi-scale features and Euclidean distance are employed to select candidate image patches from the initial estimated sketch and the template sketch. In order to further refine the obtained candidate image patches, a multi-feature-based optimization model is introduced. Finally, by assembling the refined candidate image patches, the completed face sketch is obtained. To further enhance the quality of synthesized sketches, a cascaded regression strategy is adopted. Compared with the state-of-the-art face sketch synthesis methods, experimental results on several commonly used face sketch databases and celebrity photos demonstrate the effectiveness of the proposed method.",
"Face sketch–photo synthesis technique has attracted growing attention in many computer vision applications, such as law enforcement and digital entertainment. Existing methods either simply perform the face sketch–photo synthesis on the holistic image or divide the face image into regular rectangular patches ignoring the inherent structure of the face image. In view of such situations, this paper presents a novel superpixel-based face sketch–photo synthesis method by estimating the face structures through image segmentation. In our proposed method, face images are first segmented into superpixels, which are then dilated to enhance the compatibility of neighboring superpixels. Each input face image induces a specific graphical structure modeled by Markov networks. We employ a two-stage synthesis process to learn the face structures through Markov networks constructed from two scales of dilation, respectively. Experiments on several public databases demonstrate that our proposed face sketch–photo synthesis method achieves superior performance compared with the state-of-the-art methods.",
"Automatic synthesis of faces from visual attributes is an important problem in computer vision and has wide applications in law enforcement and entertainment. With the advent of deep generative convolutional neural networks (CNNs), attempts have been made to synthesize face images from attributes and text descriptions. In this paper, we take a different approach, where we formulate the original problem as a stage-wise learning problem. We first synthesize the facial sketch corresponding to the visual attributes and then we reconstruct the face image based on the synthesized sketch. The proposed Attribute2Sketch2Face framework, which is based on a combination of deep Conditional Variational Autoencoder (CVAE) and Generative Adversarial Networks (GANs), consists of three stages: (1) Synthesis of facial sketch from attributes using a CVAE architecture, (2) Enhancement of coarse sketches to produce sharper sketches using a GAN-based framework, and (3) Synthesis of face from sketch using another GAN-based network. Extensive experiments and comparison with recent methods are performed to verify the effectiveness of the proposed attribute-based three stage face synthesis method.",
"Exemplar-based face sketch synthesis has been widely applied to both digital entertainment and law enforcement. In this paper, we propose a Bayesian framework for face sketch synthesis, which provides a systematic interpretation for understanding the common properties and intrinsic difference in different methods from the perspective of probabilistic graphical models. The proposed Bayesian framework consists of two parts: the neighbor selection model and the weight computation model. Within the proposed framework, we further propose a Bayesian face sketch synthesis method. The essential rationale behind the proposed Bayesian method is that we take the spatial neighboring constraint between adjacent image patches into consideration for both aforementioned models, while the state-of-the-art methods neglect the constraint either in the neighbor selection model or in the weight computation model. Extensive experiments on the Chinese University of Hong Kong face sketch database demonstrate that the proposed Bayesian method could achieve superior performance compared with the state-of-the-art methods in terms of both subjective perceptions and objective evaluations.",
"Face sketch–photo synthesis plays an important role in law enforcement and digital entertainment. Most of the existing methods only use pixel intensities as the feature. Since face images can be described using features from multiple aspects, this paper presents a novel multiple representations-based face sketch–photo-synthesis method that adaptively combines multiple representations to represent an image patch. In particular, it combines multiple features from face images processed using multiple filters and deploys Markov networks to exploit the interacting relationships between the neighboring image patches. The proposed framework could be solved using an alternating optimization strategy and it normally converges in only five outer iterations in the experiments. Our experimental results on the Chinese University of Hong Kong (CUHK) face sketch database, celebrity photos, CUHK Face Sketch FERET Database, IIIT-D Viewed Sketch Database, and forensic sketches demonstrate the effectiveness of our method for face sketch–photo synthesis. In addition, cross-database and database-dependent style-synthesis evaluations demonstrate the generalizability of this novel method and suggest promising solutions for face identification in forensic science.",
"In this paper, we propose a novel framework based on deep neural networks for face sketch synthesis from a photo. Imitating the process of how artists draw sketches, our framework synthesizes face sketches in a cascaded manner. A content image is first generated that outlines the shape of the face and the key facial features. Textures and shadings are then added to enrich the details of the sketch. We utilize a fully convolutional neural network (FCNN) to create the content image, and propose a style transfer approach to introduce textures and shadings based on a newly proposed pyramid column feature. We demonstrate that our style transfer approach based on the pyramid column feature can not only preserve more sketch details than the common style transfer method, but also surpasses traditional patch based methods. Quantitative and qualitative evaluations suggest that our framework outperforms other state-of-the-arts methods, and can also generalize well to different test images.",
"Sketch portrait generation is of wide applications including digital entertainment and law enforcement. Despite the great progress achieved by existing face sketch generation methods, they mostly yield blurred effects and great deformation over various facial parts. In order to tackle this challenge, we propose a novel composition-aided generative adversarial network (CA-GAN) for sketch portrait generation. First, we utilize paired inputs including a face photo and the corresponding pixel-wise face labels for generating the portrait. Second, we propose an improved pixel loss, termed compositional loss, to focus training on hard-generated components and delicate facial structures. Moreover, we use stacked CA-GANs (stack-CA-GAN) to further rectify defects and add compelling details. Experimental results show that our method is capable of generating identity-preserving, sketch-realistic, and visually comfortable sketch portraits over a wide range of challenging data, and outperforms existing methods. Besides, our methods show considerable generalization ability."
]
} |
1710.10182 | 2964323748 | Synthesizing face sketches from real photos and its inverse have many applications. However, photo sketch synthesis remains a challenging problem due to the fact that photo and sketch have different characteristics. In this work, we consider this task as an image-to-image translation problem and explore the recently popular generative models (GANs) to generate high-quality realistic photos from sketches and sketches from photos. Recent GAN-based methods have shown promising results on image-to-image translation problems and photo-to-sketch synthesis in particular, however, they are known to have limited abilities in generating high-resolution realistic images. To this end, we propose a novel synthesis framework called Photo-Sketch Synthesis using Multi-Adversarial Networks, (PS2-MAN) that iteratively generates low resolution to high resolution images in an adversarial way. The hidden layers of the generator are supervised to first generate lower resolution images followed by implicit refinement in the network to generate higher resolution images. Furthermore, since photo-sketch synthesis is a coupled paired translation problem, we leverage the pair information using CycleGAN framework. Both Image Quality Assessment (IQA) and Photo-Sketch Matching experiments are conducted to demonstrate the superior performance of our framework in comparison to existing state-of-the-art solutions. Code available at: https://github.com/lidan1/PhotoSketchMAN. | In contrast to the traditional methods for photo-sketch synthesis, several researchers have exploited the success of CNNs for synthesis and cross-domain photo-sketch recognition. Face photo-sketch synthesis is considered an image-to-image translation problem. Zhang et al. @cite_3 proposed an end-to-end fully convolutional network-based photo-sketch synthesis method. 
Several methods have been developed for related tasks such as general sketch synthesis @cite_46 , photo-caricature translation @cite_47 and creation of parameterized avatars @cite_34 . | {
"cite_N": [
"@cite_34",
"@cite_46",
"@cite_47",
"@cite_3"
],
"mid": [
"2608907959",
"2952171659",
"2772288692",
"1980093854"
],
"abstract": [
"We study the problem of mapping an input image to a tied pair consisting of a vector of parameters and an image that is created using a graphical engine from the vector of parameters. The mapping's objective is to have the output image as similar as possible to the input image. During training, no supervision is given in the form of matching inputs and outputs. This learning problem extends two literature problems: unsupervised domain adaptation and cross domain transfer. We define a generalization bound that is based on discrepancy, and employ a GAN to implement a network solution that corresponds to this bound. Experimentally, our method is shown to solve the problem of automatically creating avatars.",
"Recently, there have been several promising methods to generate realistic imagery from deep convolutional networks. These methods sidestep the traditional computer graphics rendering pipeline and instead generate imagery at the pixel level by learning from large collections of photos (e.g. faces or bedrooms). However, these methods are of limited utility because it is difficult for a user to control what the network produces. In this paper, we propose a deep adversarial image synthesis architecture that is conditioned on sketched boundaries and sparse color strokes to generate realistic cars, bedrooms, or faces. We demonstrate a sketch based image synthesis system which allows users to 'scribble' over the sketch to indicate preferred color for objects. Our network can then generate convincing images that satisfy both the color and the sketch constraints of user. The network is feed-forward which allows users to see the effect of their edits in real time. We compare to recent work on sketch to image synthesis and show that our approach can generate more realistic, more diverse, and more controllable outputs. The architecture is also effective at user-guided colorization of grayscale images.",
"Recently, image-to-image translation has been made much progress owing to the success of conditional Generative Adversarial Networks (cGANs). However, it's still very challenging for translation tasks with the requirement of high-level visual information conversion, such as photo-to-caricature translation that requires satire, exaggeration, lifelikeness and artistry. We present an approach for learning to translate faces in the wild from the source photo domain to the target caricature domain with different styles, which can also be used for other high-level image-to-image translation tasks. In order to capture global structure with local statistics while translation, we design a dual pathway model of cGAN with one global discriminator and one patch discriminator. Beyond standard convolution (Conv), we propose a new parallel convolution (ParConv) to construct Parallel Convolutional Neural Networks (ParCNNs) for both global and patch discriminators, which can combine the information from previous layer with the current layer. For generator, we provide three more extra losses in association with adversarial loss to constrain consistency for generated output itself and with the target. Also the style can be controlled by the input style info vector. Experiments on photo-to-caricature translation of faces in the wild show considerable performance gain of our proposed method over state-of-the-art translation methods as well as its potential real applications.",
"Sketch-based face recognition is an interesting task in vision and multimedia research, yet it is quite challenging due to the great difference between face photos and sketches. In this paper, we propose a novel approach for photo-sketch generation, aiming to automatically transform face photos into detail-preserving personal sketches. Unlike the traditional models synthesizing sketches based on a dictionary of exemplars, we develop a fully convolutional network to learn the end-to-end photo-sketch mapping. Our approach takes whole face photos as inputs and directly generates the corresponding sketch images with efficient inference and learning, in which the architecture is stacked by only convolutional kernels of very small sizes. To well capture the person identity during the photo-sketch transformation, we define our optimization objective in the form of joint generative discriminative minimization. In particular, a discriminative regularization term is incorporated into the photo-sketch generation, enhancing the discriminability of the generated person sketches against other individuals. Extensive experiments on several standard benchmarks suggest that our approach outperforms other state-of-the-arts in both photo sketch generation and face sketch verification."
]
} |
1710.09566 | 2766385301 | Wireless communication systems, such as wireless sensor networks and RFIDs, are increasingly adopted to transfer potentially highly sensitive information. Since the wireless medium is shared by nature, adversaries have a chance to eavesdrop on confidential information from the communication systems. Adding artificial noise generated by friendly jammers has emerged as a feasible defensive technique against adversaries. This paper studies the scheduling strategies of friendly jammers, which are randomly and redundantly deployed in a circumscribed geographical area and can be unrechargeable or rechargeable, to maximize the lifetime of the jammer networks and prevent eavesdroppers from cracking the jamming effect, under the constraints of geographical area, energy consumption, transmission power, and threshold level. An approximation algorithm serving as a baseline is first proposed using an integer linear programming model. To further reduce the computational complexity, a heuristic algorithm based on the greedy strategy that less consumption leads to longer lifetime is also proposed. Finally, extensive simulation results show that the proposed algorithms are effective and efficient. | Wireless communication security has been well studied in recent years. The well-known conventional approach to security is to use cryptographic techniques in the upper layers @cite_11 . However, these approaches are impractical in many wireless communication systems, such as wireless sensor networks and RFIDs, due to the limited computing capacity of sensor nodes and the risk of the secret key being eavesdropped. Then jammers, once used by adversaries to interfere with wireless communication @cite_15 @cite_19 @cite_7 , were introduced to prevent confidential information from being wiretapped. | {
"cite_N": [
"@cite_19",
"@cite_15",
"@cite_7",
"@cite_11"
],
"mid": [
"2107831657",
"2149597925",
"2118795709",
""
],
"abstract": [
"We consider a scenario where a sophisticated jammer jams an area in a single-channel wireless sensor network. The jammer controls the probability of jamming and transmission range to cause maximal damage to the network in terms of corrupted communication links. The jammer action ceases when it is detected by a monitoring node in the network, and a notification message is transferred out of the jamming region. The jammer is detected at a monitor node by employing an optimal detection test based on the percentage of incurred collisions. On the other hand, the network computes channel access probability in an effort to minimize the jamming detection plus notification time. In order for the jammer to optimize its benefit, it needs to know the network channel access probability and number of neighbors of the monitor node. Accordingly, the network needs to know the jamming probability of the jammer. We study the idealized case of perfect knowledge by both the jammer and the network about the strategy of one another, and the case where the jammer or the network lack this knowledge. The latter is captured by formulating and solving optimization problems, the solutions of which constitute best responses of the attacker or the network to the worst-case strategy of each other. We also take into account potential energy constraints of the jammer and the network. We extend the problem to the case of multiple observers and adaptable jamming transmission range and propose an intuitive heuristic jamming strategy for that case.",
"Time-critical wireless applications in emerging network systems, such as e-healthcare and smart grids, have been drawing increasing attention in both industry and academia. The broadcast nature of wireless channels unavoidably exposes such applications to jamming attacks. However, existing methods to characterize and detect jamming attacks cannot be applied directly to time-critical networks, whose communication traffic model differs from conventional models. In this paper, we aim at modeling and detecting jamming attacks against time-critical traffic. We introduce a new metric, message invalidation ratio, to quantify the performance of time-critical applications. A key insight that leads to our modeling is that the behavior of a jammer who attempts to disrupt the delivery of a time-critical message can be exactly mapped to the behavior of a gambler who tends to win a gambling game. We show via the gambling-based modeling and real-time experiments that there in general exists a phase transition phenomenon for a time-critical application under jamming attacks: as the probability that a packet is jammed increases from 0 to 1, the message invalidation ratio first increases slightly (even negligibly), then increases dramatically to 1. Based on analytical and experimental results, we further design and implement the JADE (Jamming Attack Detection based on Estimation) system to achieve efficient and robust jamming detection for time-critical wireless networks.",
"802.11a, b, and g standards were designed for deployment in cooperative environments, and hence do not include mechanisms to protect from jamming attacks. In this paper, we explore how to protect 802.11 networks from jamming attacks by having the legitimate transmission hop among channels to hide the transmission from the jammer. Using a combination of mathematical analysis and prototype experimentation in an 802.11a environment, we explore how much throughput can be maintained in comparison to the maintainable throughput in a cooperative, jam-free environment. Our experimental and analytical results show that in today's conventional 802.11a networks, we can achieve up to 60% of the original throughput. Our mathematical analysis allows us to extrapolate the throughput that can be maintained when the constraint on the number of orthogonal channels used for both legitimate communication and for jamming is relaxed.",
""
]
} |
1710.09566 | 2766385301 | Wireless communication systems, such as wireless sensor networks and RFIDs, are increasingly adopted to transfer potentially highly sensitive information. Since the wireless medium is shared by nature, adversaries have a chance to eavesdrop on confidential information from the communication systems. Adding artificial noise generated by friendly jammers has emerged as a feasible defensive technique against adversaries. This paper studies the scheduling strategies of friendly jammers, which are randomly and redundantly deployed in a circumscribed geographical area and can be unrechargeable or rechargeable, to maximize the lifetime of the jammer networks and prevent eavesdroppers from cracking the jamming effect, under the constraints of geographical area, energy consumption, transmission power, and threshold level. An approximation algorithm serving as a baseline is first proposed using an integer linear programming model. To further reduce the computational complexity, a heuristic algorithm based on the greedy strategy that less consumption leads to longer lifetime is also proposed. Finally, extensive simulation results show that the proposed algorithms are effective and efficient. | The issue of extending the operational time of battery-powered wireless sensor networks was investigated in @cite_13 . The basic idea is to first organize sensors into a maximal number of disjoint set covers, and then activate these sets successively in turn to monitor all targets. A heuristic algorithm based on mixed integer programming was then proposed to compute the sets. This work is on target monitoring by sensors, not on protecting communication with friendly jammers. Moreover, the assumption that the locations of all targets are known in advance is unrealistic when protecting communication with friendly jammers, since the locations of the eavesdroppers are not fixed and often unknown.
Battery-powered rechargeable networks have also drawn a lot of attention among researchers @cite_20 @cite_6 @cite_16 @cite_25 @cite_26 @cite_14 . Although these works mainly focused on energy harvesting in sensor networks, they provided a new perspective for the security of wireless communication by rechargeable jammers. | {
"cite_N": [
"@cite_26",
"@cite_14",
"@cite_6",
"@cite_16",
"@cite_13",
"@cite_25",
"@cite_20"
],
"mid": [
"2042455779",
"2116385515",
"",
"1988815444",
"2076901266",
"1987103456",
"2110508098"
],
"abstract": [
"Energy harvesting sensor platforms have opened up a new dimension to the design of network protocols. In order to sustain the network operation, the energy consumption rate cannot be higher than the energy harvesting rate, otherwise, sensor nodes will eventually deplete their batteries. In contrast to traditional network resource allocation problems where the resources are static, the time-varying recharging rate presents a new challenge. In this paper, we first explore the performance of an efficient dual decomposition and subgradient method based algorithm, called QuickFix, for computing the data sampling rate and routes. However, fluctuations in recharging can happen at a faster time-scale than the convergence time of the traditional approach. This leads to battery outage and overflow scenarios, which are both undesirable due to missed samples and lost energy harvesting opportunities respectively. To address such dynamics, a local algorithm, called SnapIt, is designed to adapt the sampling rate with the objective of maintaining the battery at a target level. Our evaluations using the TOSSIM simulator show that QuickFix and SnapIt working in tandem can track the instantaneous optimum network utility while maintaining the battery at a target level. When compared with IFRC, a backpressure-based approach, our solution improves the total data rate by 42% on the average while significantly improving the network utility.",
"Wireless rechargeable sensor networks (WRSNs) have emerged as an alternative to solving the challenges of size and operation time posed by traditional battery-powered systems. In this paper, we study a WRSN built from the industrial wireless identification and sensing platform (WISP) and commercial off-the-shelf RFID readers. The paper-thin WISP tags serve as sensors and can harvest energy from RF signals transmitted by the readers. This kind of WRSNs is highly desirable for indoor sensing and activity recognition and is gaining attention in the research community. One fundamental question in WRSN design is how to deploy readers in a network to ensure that the WISP tags can harvest sufficient energy for continuous operation. We refer to this issue as the energy provisioning problem. Based on a practical wireless recharge model supported by experimental data, we investigate two forms of the problem: point provisioning and path provisioning. Point provisioning uses the least number of readers to ensure that a static tag placed in any position of the network will receive a sufficient recharge rate for sustained operation. Path provisioning exploits the potential mobility of tags (e.g., those carried by human users) to further reduce the number of readers necessary: mobile tags can harvest excess energy in power-rich regions and store it for later use in power-deficient regions. Our analysis shows that our deployment methods, by exploiting the physical characteristics of wireless recharging, can greatly reduce the number of readers compared with those assuming traditional coverage models.",
"",
"The emerging wireless energy transfer technology enables charging sensor batteries in a wireless sensor network (WSN) and maintaining perpetual operation of the network. Recent breakthrough in this area has opened up a new dimension to the design of sensor network protocols. In the meanwhile, mobile data gathering has been considered as an efficient alternative to data relaying in WSNs. However, time variation of recharging rates in wireless rechargeable sensor networks imposes a great challenge in obtaining an optimal data gathering strategy. In this paper, we propose a framework of joint Wireless Energy Replenishment and anchor-point based Mobile Data Gathering (WerMDG) in WSNs by considering various sources of energy consumption and time-varying nature of energy replenishment. To that end, we first determine the anchor point selection and the sequence to visit the anchor points. We then formulate the WerMDG problem into a network utility maximization problem which is constrained by flow conversation, energy balance, link and battery capacity and the bounded sojourn time of the mobile collector. Furthermore, we present a distributed algorithm composed of cross-layer data control, scheduling and routing subalgorithms for each sensor node, and sojourn time allocation subalgorithm for the mobile collector at different anchor points. Finally, we give extensive numerical results to verify the convergence of the proposed algorithm and the impact of utility weight on network performance.",
"A critical aspect of applications with wireless sensor networks is network lifetime. Battery-powered sensors are usable as long as they can communicate captured data to a processing node. Sensing and communications consume energy, therefore judicious power management and scheduling can effectively extend operational time. To monitor a set of targets with known locations when ground access in the monitored area is prohibited, one solution is to deploy the sensors remotely, from an aircraft. The loss of precise sensor placement would then be compensated by a large sensor population density in the drop zone, that would improve the probability of target coverage. The data collected from the sensors is sent to a central node for processing. In this paper we propose an efficient method to extend the sensor network operational time by organizing the sensors into a maximal number of disjoint set covers that are activated successively. Only the sensors from the current active set are responsible for monitoring all targets and for transmitting the collected data, while nodes from all other sets are in a low-energy sleep mode. In this paper we address the maximum disjoint set covers problem and we design a heuristic that computes the sets. Theoretical analysis and performance evaluation results are presented to verify our approach.",
"As a pioneering experimental platform of wireless rechargeable sensor networks, the Wireless Identification and Sensing Platform (WISP) is an open-source platform that integrates sensing and computation capabilities to the traditional RFID tags. Different from traditional tags, a RFID-based wireless rechargeable sensor node needs to charge its onboard energy storage above a threshold in order to power its sensing, computation and communication components. Consequently, such charging delay imposes a unique design challenge for deploying wireless rechargeable sensor networks. In this paper, we tackle this problem by planning the optimal movement strategy of the RFID reader, such that the time to charge all nodes in the network above their energy threshold is minimized. We first propose an optimal solution using the linear programming method. To further reduce the computational complexity, we then introduce a heuristic solution with a provable approximation ratio of (1 + θ) (1 - e) by discretizing the charging power on a two-dimensional space. Through extensive evaluations, we demonstrate that our design outperforms the set-cover-based design by an average of 24.7 while the computational complexity is O((N e)2).",
"In this paper, we investigate the problem of maximizing the throughput over a finite-horizon time period for a sensor network with energy replenishment. The finite-horizon problem is important and challenging because it necessitates optimizing metrics over the short term rather than metrics that are averaged over a long period of time. Unlike the infinite-horizon problem, the fact that inefficiencies cannot be made to vanish to infinitesimally small values, means that the finite-horizon problem requires more delicate control. The finite-horizon throughput optimization problem can be formulated as a convex optimization problem, but turns out to be highly complex. The complexity is brought about by the “time coupling property,” which implies that current decisions can influence future performance. To address this problem, we employ a three-step approach. First, we focus on the throughput maximization problem for a single node with renewable energy assuming that the replenishment rate profile for the entire finite-horizon period is known in advance. An energy allocation scheme that is equivalent to computing a shortest path in a simply-connected space is developed and proven to be optimal. We then relax the assumption that the future replenishment profile is known and develop an online algorithm. The online algorithm guarantees a fraction of the optimal throughput. Motivated by these results, we propose a low-complexity heuristic distributed scheme, called NetOnline, in a rechargeable sensor network. We prove that this heuristic scheme is optimal under homogeneous replenishment profiles. Further, in more general settings, we show via simulations that NetOnline significantly outperforms a state-of-the-art infinite-horizon based scheme, and for certain configurations using data collected from a testbed sensor network, it achieves empirical performance close to optimal."
]
} |
1710.09771 | 2765794276 | Dynamical system models with delayed dynamics and small noise arise in a variety of applications in science and engineering. In many applications, stable equilibrium or periodic behavior is critical to a well functioning system. Sufficient conditions for the stability of equilibrium points or periodic orbits of certain deterministic dynamical systems with delayed dynamics are known and it is of interest to understand the sample path behavior of such systems under the addition of small noise. We consider a small noise stochastic delay differential equation (SDDE) with coefficients that depend on the history of the process over a finite delay interval. We obtain asymptotic estimates, as the noise vanishes, on the time it takes a solution of the stochastic equation to exit a bounded domain that is attracted to a stable equilibrium point or periodic orbit of the corresponding deterministic equation. To obtain these asymptotics, we prove a sample path large deviation principle (LDP) for the SDDE that is uniform over initial conditions in bounded sets. The proof of the uniform sample path LDP uses a variational representation for exponential functionals of strong solutions of the SDDE. We anticipate that the overall approach may be useful in proving uniform sample path LDPs for a broad class of infinite-dimensional small noise stochastic equations. | The study of exit time asymptotics for finite-dimensional SDEs is a classical subject in the theory of sample path large deviations, beginning with the work of Freidlin and Wentzell @cite_40 @cite_35 , which culminated in the books @cite_10 @cite_7 . There have been numerous other works related to exit time asymptotics for SDEs, including @cite_11 @cite_3 @cite_8 @cite_46 @cite_9 @cite_53 @cite_39 @cite_45 . In [Chapter 12] daPrato1992 , da Prato and Zabczyk detail a general approach for estimating exit time asymptotics for a class of small noise SPDEs with additive noise. 
As mentioned above, in @cite_34 @cite_21 @cite_20 the authors obtain exit time asymptotics for a variety of SPDEs with multiplicative noise and in @cite_12 the authors develop a general approach for proving a uniform LDP over bounded sets for a broad class of SPDEs with multiplicative noise and compact semigroups. | {
"cite_N": [
"@cite_35",
"@cite_7",
"@cite_8",
"@cite_9",
"@cite_53",
"@cite_21",
"@cite_3",
"@cite_39",
"@cite_40",
"@cite_45",
"@cite_46",
"@cite_34",
"@cite_10",
"@cite_12",
"@cite_20",
"@cite_11"
],
"mid": [
"2025631246",
"2021611402",
"",
"2015281088",
"",
"",
"2500226634",
"2030075097",
"2036807120",
"1978065031",
"",
"2007962915",
"",
"",
"",
"2040451645"
],
"abstract": [
"",
"1.Random Perturbations.- 2.Small Random Perturbations on a Finite Time Interval.- 3.Action Functional.- 4.Gaussian Perturbations of Dynamical Systems. Neighborhood of an Equilibrium Point.- 5.Perturbations Leading to Markov Processes.- 6.Markov Perturbations on Large Time Intervals.- 7.The Averaging Principle. Fluctuations in Dynamical Systems with Averaging.- 8.Random Perturbations of Hamiltonian Systems.- 9. The Multidimensional Case.- 10.Stability Under Random Perturbations.- 11.Sharpenings and Generalizations.- References.- Index.",
"",
"Systems with wide bandwidth noise inputs are a common occurrence in stochastic control and communication theory and elsewhere, e.g., tracking or synchronization systems such as phase locked loops (PLL). One is often interested in calculating such quantities as the probability of escape from a desired “error” set, in some time interval, or the mean time for such escape. Diffusion approximations (the system obtained in the limit bandwidth as the @math ) are often used for this since they are easier to analyze. When the noise effects in the physical system are small, one is tempted to do an asymptotic analysis (noise intensity @math ) on the diffusion approximation, and use this for the desired estimates on the original system. Such a procedure does not work in general: the double limit bandwidth @math , intensity @math is not always justified. Under quite broad conditions on the noise processes, it is justified for the systems studied here. We study a particular form of the PLL owing to...",
"",
"",
"",
"We consider diffusion random perturbations of a dynamical systemSt in a domainG⊂Rm which, in particular, may be invariant under the action ofSt. Continuing the study of [K1-K4] we find the asymptotic behavior of the principal eigenvalue of the corresponding generator when the diffusion term tends to zero.",
"In this paper we study the effect on a dynamical system of small random perturbations of the type of white noise: where is the -dimensional Wiener process and as . We are mainly concerned with the effect of these perturbations on long time-intervals that increase with the decreasing . We discuss two problems: the first is the behaviour of the invariant measure of the process as , and the second is the distribution of the position of a trajectory at the first time of its exit from a compact domain. An important role is played in these problems by an estimate of the probability for a trajectory of not to deviate from a smooth function by more than during the time . It turns out that the main term of this probability for small and has the form , where is a certain non-negative functional of . A function , the minimum of over the set of all functions connecting and , is involved in the answers to both the problems. By means of we introduce an independent of perturbations relation of equivalence in the phase-space. We show, under certain assumption, at what point of the phase-space the invariant measure concentrates in the limit. In both the problems we approximate the process in question by a certain Markov chain; the answers depend on the behaviour of on graphs that are associated with this chain. Let us remark that the second problem is closely related to the behaviour of the solution of a Dirichlet problem with a small parameter at the highest derivatives.",
"We consider the Markov diffusion process ξε(t), transforming when ɛ=0 into the solution of an ordinary differential equation with a turning point ℴ of the hyperbolic type. The asymptotic behavior as ɛ→0 of the exit time, of its expectation, and of the probability distribution of exit points for the process ξε(t) is studied. These also indicate the asymptotic behavior of solutions of the corresponding singularly perturbed elliptic boundary value problems.",
"",
"Following classical work by Freidlin [Trans. Amer. Math. Soc. (1988) 305 665--657] and subsequent works by Sowers [Ann. Probab. (1992) 20 504--537] and Peszat [Probab. Theory Related Fields (1994) 98 113--136], we prove large deviation estimates for the small noise limit of systems of stochastic reaction--diffusion equations with globally Lipschitz but unbounded diffusion coefficients, however, assuming the reaction terms to be only locally Lipschitz with polynomial growth. This generalizes results of the above mentioned authors. Our results apply, in particular, to systems of stochastic Ginzburg--Landau equations with multiplicative noise.",
"",
"",
"",
"Abstract We consider the exit problem for an asymptotically small random perturbation of a stable dynamical system in a region D. We show the standard large deviations results for the exit distribution and mean exit time, as obtained by Wentzell and Freidlin under the assumption of nontangential drift 〈b, n〉"
]
} |
1710.09813 | 2766218623 | The predictive power and overall computational efficiency of Diffusion-convolutional neural networks make them an attractive choice for node classification tasks. However, a naive dense-tensor-based implementation of DCNNs leads to @math memory complexity, which is prohibitive for large graphs. In this paper, we introduce a simple method for thresholding input graphs that provably reduces memory requirements of DCNNs to O(N) (i.e. linear in the number of nodes in the input) without significantly affecting predictive performance. | Neural networks for graphs were introduced by @cite_1 and followed by @cite_2 , in departure from the traditional approach of transforming the graph into a simpler representation which could then be tackled by conventional machine learning algorithms. Both works used recursive neural networks for processing graph data, requiring repeated application of contraction maps until node representations reach a stable state. @cite_3 proposed two generalizations of CNNs to signals defined on general domains; one based upon a hierarchical clustering of the domain, and another based on the spectrum of the graph Laplacian. This was followed by @cite_10 , which used these techniques to address a setting where the graph structure is not known a priori, and needs to be inferred. However, the parametrization of CNNs developed in @cite_3 @cite_10 is dependent on the input graph size, while that of DCNNs or sDCNNs is not, making the technique transferable, i.e., a DCNN or sDCNN learned on one graph can be applied to another. @cite_7 proposed a CNN approach which extracts locally connected regions of the input graph, requiring the definition of a node ordering as a pre-processing step. | {
"cite_N": [
"@cite_7",
"@cite_1",
"@cite_3",
"@cite_2",
"@cite_10"
],
"mid": [
"2406128552",
"1501856433",
"1662382123",
"2116341502",
"637153065"
],
"abstract": [
"Numerous important problems can be framed as learning from graph data. We propose a framework for learning convolutional neural networks for arbitrary graphs. These graphs may be undirected, directed, and with both discrete and continuous node and edge attributes. Analogous to image-based convolutional networks that operate on locally connected regions of the input, we present a general approach to extracting locally connected regions from graphs. Using established benchmark data sets, we demonstrate that the learned feature representations are competitive with state of the art graph kernels and that their computation is highly efficient.",
"In several applications the information is naturally represented by graphs. Traditional approaches cope with graphical data structures using a preprocessing phase which transforms the graphs into a set of flat vectors. However, in this way, important topological information may be lost and the achieved results may heavily depend on the preprocessing stage. This paper presents a new neural model, called graph neural network (GNN), capable of directly processing graphs. GNNs extends recursive neural networks and can be applied on most of the practically useful kinds of graphs, including directed, undirected, labelled and cyclic graphs. A learning algorithm for GNNs is proposed and some experiments are discussed which assess the properties of the model.",
"Convolutional Neural Networks are extremely efficient architectures in image and audio recognition tasks, thanks to their ability to exploit the local translational invariance of signal classes over their domain. In this paper we consider possible generalizations of CNNs to signals defined on more general domains without the action of a translation group. In particular, we propose two constructions, one based upon a hierarchical clustering of the domain, and another based on the spectrum of the graph Laplacian. We show through experiments that for low-dimensional graphs it is possible to learn convolutional layers with a number of parameters independent of the input size, resulting in efficient deep architectures.",
"Many underlying relationships among data in several areas of science and engineering, e.g., computer vision, molecular chemistry, molecular biology, pattern recognition, and data mining, can be represented in terms of graphs. In this paper, we propose a new neural network model, called graph neural network (GNN) model, that extends existing neural network methods for processing the data represented in graph domains. This GNN model, which can directly process most of the practically useful types of graphs, e.g., acyclic, cyclic, directed, and undirected, implements a function τ(G, n) ∈ IR^m that maps a graph G and one of its nodes n into an m-dimensional Euclidean space. A supervised learning algorithm is derived to estimate the parameters of the proposed GNN model. The computational cost of the proposed algorithm is also considered. Some experimental results are shown to validate the proposed learning algorithm, and to demonstrate its generalization capabilities.",
"Deep Learning's recent successes have mostly relied on Convolutional Networks, which exploit fundamental statistical properties of images, sounds and video data: the local stationarity and multi-scale compositional structure, that allows expressing long range interactions in terms of shorter, localized interactions. However, there exist other important examples, such as text documents or bioinformatic data, that may lack some or all of these strong statistical regularities. In this paper we consider the general question of how to construct deep architectures with small learning complexity on general non-Euclidean domains, which are typically unknown and need to be estimated from the data. In particular, we develop an extension of Spectral Networks which incorporates a Graph Estimation procedure, that we test on large-scale classification problems, matching or improving over Dropout Networks with far less parameters to estimate."
]
} |
1710.09722 | 2765935404 | High availability of software systems requires automated handling of crashes in the presence of errors. Failure-oblivious computing is one technique that aims to achieve high availability. We note that failure-obliviousness has not been studied in depth yet, and there are very few studies that help understand why failure-oblivious techniques work. In order for failure-oblivious computing to have an impact in practice, we need to deeply understand failure-oblivious behaviors in software. In this paper, we study, design and perform an experiment that analyzes the size and the diversity of the failure-oblivious behaviors. Our experiment consists of exhaustively computing the search space of 16 field failures of large-scale open-source Java software. The outcome of this experiment is a much better understanding of what really happens when failure-oblivious computing is used, and this opens new promising research directions. | Long and Rinard @cite_15 study the search space of patch generation systems. In our work, we consider failure-oblivious decision sequences, which are fundamentally different: while a code patch is a permanent modification to the behavior, a failure-oblivious decision sequence only impacts one single execution, with no effect or regression on subsequent executions, even if they execute the same statement. What’s interesting is that in both cases, contrary to the initial intuition of the research community, there is a multiplicity of possible patches. Long and Rinard’s paper is the first one to study this for static patches; our paper is possibly the first one to comprehensively show that this phenomenon exists for failure-oblivious decision sequences. | {
"cite_N": [
"@cite_15"
],
"mid": [
"2951219051"
],
"abstract": [
"We present the first systematic analysis of the characteristics of patch search spaces for automatic patch generation systems. We analyze the search spaces of two current state-of-the-art systems, SPR and Prophet, with 16 different search space configurations. Our results are derived from an analysis of 1104 different search spaces and 768 patch generation executions. Together these experiments consumed over 9000 hours of CPU time on Amazon EC2. The analysis shows that 1) correct patches are sparse in the search spaces (typically at most one correct patch per search space per defect), 2) incorrect patches that nevertheless pass all of the test cases in the validation test suite are typically orders of magnitude more abundant, and 3) leveraging information other than the test suite is therefore critical for enabling the system to successfully isolate correct patches. We also characterize a key tradeoff in the structure of the search spaces. Larger and richer search spaces that contain correct patches for more defects can actually cause systems to find fewer, not more, correct patches. We identify two reasons for this phenomenon: 1) increased validation times because of the presence of more candidate patches and 2) more incorrect patches that pass the test suite and block the discovery of correct patches. These fundamental properties, which are all characterized for the first time in this paper, help explain why past systems often fail to generate correct patches and help identify challenges, opportunities, and productive future directions for the field."
]
} |
1710.09722 | 2765935404 | High-availability of software systems requires automated handling of crashes in presence of errors. Failure-oblivious computing is one technique that aims to achieve high availability. We note that failure-obliviousness has not been studied in depth yet, and there is very few study that helps understand why failure-oblivious techniques work. In order to make failure-oblivious computing to have an impact in practice, we need to deeply understand failure-oblivious behaviors in software. In this paper, we study, design and perform an experiment that analyzes the size and the diversity of the failure-oblivious behaviors. Our experiment consists of exhaustively computing the search space of 16 field failures of large-scale open-source Java software. The outcome of this experiment is a much better understanding of what really happens when failure-oblivious computing is used, and this opens new promising research directions. | There are several automatic recovery techniques. One of the earliest techniques is Ammann and Knight's "data diversity" @cite_18, which aims at enabling the computation of a program in the presence of failures. The idea of data diversity is that, when a failure occurs, the input data is changed so that the new input resulting from the change does not result in the failure. The assumption is that the output based on this artificial input, through an inverse transformation, remains acceptable in the domain under consideration. The input transformations can be seen as a kind of failure-oblivious model. As such, our protocol could be used to reason on the search space of data diversity. | {
"cite_N": [
"@cite_18"
],
"mid": [
"1979868167"
],
"abstract": [
"Data diversity is described, and the results of a pilot study are presented. The regions of the input space that cause failure for certain experimental programs are discussed, and data reexpression, the way in which alternate input data sets can be obtained, is examined. A description is given of the retry block which is the data-diverse equivalent of the recovery block, and a model of the retry block, together with some empirical results is presented. N-copy programming which is the data-diverse equivalent of N-version programming is considered, and a simple model and some empirical results are also given."
]
} |
1710.09722 | 2765935404 | High-availability of software systems requires automated handling of crashes in presence of errors. Failure-oblivious computing is one technique that aims to achieve high availability. We note that failure-obliviousness has not been studied in depth yet, and there is very few study that helps understand why failure-oblivious techniques work. In order to make failure-oblivious computing to have an impact in practice, we need to deeply understand failure-oblivious behaviors in software. In this paper, we study, design and perform an experiment that analyzes the size and the diversity of the failure-oblivious behaviors. Our experiment consists of exhaustively computing the search space of 16 field failures of large-scale open-source Java software. The outcome of this experiment is a much better understanding of what really happens when failure-oblivious computing is used, and this opens new promising research directions. | @cite_0 present a language for the specification of data structure invariants. The invariant specification is used to verify and repair the consistency of data structure instances at runtime. In their work, they do not study the associated search space. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2098010463"
],
"abstract": [
"We present a system that accepts a specification of key data structure consistency constraints, then dynamically detects and repairs violations of these constraints, enabling the program to continue to execute productively even in the face of otherwise crippling errors. Our experience using our system indicates that the specifications are relatively easy to develop once one understands the data structures. Furthermore, for our set of benchmark applications, our system can effectively repair inconsistent data structures and enable the program to continue to operate successfully."
]
} |
1710.09722 | 2765935404 | High-availability of software systems requires automated handling of crashes in presence of errors. Failure-oblivious computing is one technique that aims to achieve high availability. We note that failure-obliviousness has not been studied in depth yet, and there is very few study that helps understand why failure-oblivious techniques work. In order to make failure-oblivious computing to have an impact in practice, we need to deeply understand failure-oblivious behaviors in software. In this paper, we study, design and perform an experiment that analyzes the size and the diversity of the failure-oblivious behaviors. Our experiment consists of exhaustively computing the search space of 16 field failures of large-scale open-source Java software. The outcome of this experiment is a much better understanding of what really happens when failure-oblivious computing is used, and this opens new promising research directions. | @cite_12 presents a technique to avoid illegal memory accesses by adding additional code around each memory operation during the compilation process. For example, the additional code verifies at runtime that the program only uses the allocated memory. If the memory access is outside the allocated memory, the access is ignored instead of crashing with a segmentation fault. We apply different decisions to handle a given failure (rather than a single behavior hard-coded in the injected code), and we use an oracle to reason about the viability of each decision. | {
"cite_N": [
"@cite_12"
],
"mid": [
"1525451871"
],
"abstract": [
"We present a new technique, failure-oblivious computing, that enables servers to execute through memory errors without memory corruption. Our safe compiler for C inserts checks that dynamically detect invalid memory accesses. Instead of terminating or throwing an exception, the generated code simply discards invalid writes and manufactures values to return for invalid reads, enabling the server to continue its normal execution path. We have applied failure-oblivious computing to a set of widely-used servers from the Linux-based open-source computing environment. Our results show that our techniques 1) make these servers invulnerable to known security attacks that exploit memory errors, and 2) enable the servers to continue to operate successfully to service legitimate requests and satisfy the needs of their users even after attacks trigger their memory errors. We observed several reasons for this successful continued execution. When the memory errors occur in irrelevant computations, failure-oblivious computing enables the server to execute through the memory errors to continue on to execute the relevant computation. Even when the memory errors occur in relevant computations, failure-oblivious computing converts requests that trigger unanticipated and dangerous execution paths into anticipated invalid inputs, which the error-handling logic in the server rejects. Because servers tend to have small error propagation distances (localized errors in the computation for one request tend to have little or no effect on the computations for subsequent requests), redirecting reads that would otherwise cause addressing errors and discarding writes that would otherwise corrupt critical data structures (such as the call stack) localizes the effect of the memory errors, prevents addressing exceptions from terminating the computation, and enables the server to continue on to successfully process subsequent requests. 
The overall result is a substantial extension of the range of requests that the server can successfully process."
]
} |
1710.09722 | 2765935404 | High-availability of software systems requires automated handling of crashes in presence of errors. Failure-oblivious computing is one technique that aims to achieve high availability. We note that failure-obliviousness has not been studied in depth yet, and there is very few study that helps understand why failure-oblivious techniques work. In order to make failure-oblivious computing to have an impact in practice, we need to deeply understand failure-oblivious behaviors in software. In this paper, we study, design and perform an experiment that analyzes the size and the diversity of the failure-oblivious behaviors. Our experiment consists of exhaustively computing the search space of 16 field failures of large-scale open-source Java software. The outcome of this experiment is a much better understanding of what really happens when failure-oblivious computing is used, and this opens new promising research directions. | @cite_4 proposes ClearView, a system for automatically handling errors in production. The system monitors the execution at the level of low-level registers to learn invariants. Those invariants are then monitored, and if a violation of an invariant is detected, ClearView forces the invariant to be restored. From an engineering perspective, the difference is that we reason on decision sequences, while ClearView analyzes each decision in isolation. From a scientific perspective, our work finely characterizes the search space and the outcomes of failure-oblivious computing based on execution modification. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2099866050"
],
"abstract": [
"We present ClearView, a system for automatically patching errors in deployed software. ClearView works on stripped Windows x86 binaries without any need for source code, debugging information, or other external information, and without human intervention. ClearView (1) observes normal executions to learn invariants thatcharacterize the application's normal behavior, (2) uses error detectors to distinguish normal executions from erroneous executions, (3) identifies violations of learned invariants that occur during erroneous executions, (4) generates candidate repair patches that enforce selected invariants by changing the state or flow of control to make the invariant true, and (5) observes the continued execution of patched applications to select the most successful patch. ClearView is designed to correct errors in software with high availability requirements. Aspects of ClearView that make it particularly appropriate for this context include its ability to generate patches without human intervention, apply and remove patchesto and from running applications without requiring restarts or otherwise perturbing the execution, and identify and discard ineffective or damaging patches by evaluating the continued behavior of patched applications. ClearView was evaluated in a Red Team exercise designed to test its ability to successfully survive attacks that exploit security vulnerabilities. A hostile external Red Team developed ten code injection exploits and used these exploits to repeatedly attack an application protected by ClearView. ClearView detected and blocked all of the attacks. For seven of the ten exploits, ClearView automatically generated patches that corrected the error, enabling the application to survive the attacks and continue on to successfully process subsequent inputs. 
Finally, the Red Team attempted to make Clear-View apply an undesirable patch, but ClearView's patch evaluation mechanism enabled ClearView to identify and discard both ineffective patches and damaging patches."
]
} |
1710.09722 | 2765935404 | High-availability of software systems requires automated handling of crashes in presence of errors. Failure-oblivious computing is one technique that aims to achieve high availability. We note that failure-obliviousness has not been studied in depth yet, and there is very few study that helps understand why failure-oblivious techniques work. In order to make failure-oblivious computing to have an impact in practice, we need to deeply understand failure-oblivious behaviors in software. In this paper, we study, design and perform an experiment that analyzes the size and the diversity of the failure-oblivious behaviors. Our experiment consists of exhaustively computing the search space of 16 field failures of large-scale open-source Java software. The outcome of this experiment is a much better understanding of what really happens when failure-oblivious computing is used, and this opens new promising research directions. | Rx @cite_8 is a runtime repair system based on changing the environment upon failures. Rx employs checkpoint-and-rollback for re-executing the buggy code when failures happen. The differences are as follows: 1) Rx does not change the execution itself but the environment; 2) the search space of Rx is smaller (a set of predefined strategies); 3) Rx’s experiments do not include systematic exploration of the search space. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2110137598"
],
"abstract": [
"Many applications demand availability. Unfortunately, software failures greatly reduce system availability. Prior work on surviving software failures suffers from one or more of the following limitations: Required application restructuring, inability to address deterministic software bugs, unsafe speculation on program execution, and long recovery time. This paper proposes an innovative safe technique, called Rx, which can quickly recover programs from many types of software bugs, both deterministic and non-deterministic. Our idea, inspired from allergy treatment in real life, is to rollback the program to a recent checkpoint upon a software failure, and then to re-execute the program in a modified environment. We base this idea on the observation that many bugs are correlated with the execution environment, and therefore can be avoided by removing the \"allergen\" from the environment. Rx requires few to no modifications to applications and provides programmers with additional feedback for bug diagnosis. We have implemented RX on Linux. Our experiments with four server applications that contain six bugs of various types show that RX can survive all the six software failures and provide transparent fast recovery within 0.017-0.16 seconds, 21-53 times faster than the whole program restart approach for all but one case (CVS). In contrast, the two tested alternatives, a whole program restart approach and a simple rollback and re-execution without environmental changes, cannot successfully recover the three servers (Squid, Apache, and CVS) that contain deterministic bugs, and have only a 40% recovery rate for the server (MySQL) that contains a non-deterministic concurrency bug. Additionally, RX's checkpointing system is lightweight, imposing small time and space overheads."
]
} |
1710.09722 | 2765935404 | High-availability of software systems requires automated handling of crashes in presence of errors. Failure-oblivious computing is one technique that aims to achieve high availability. We note that failure-obliviousness has not been studied in depth yet, and there is very few study that helps understand why failure-oblivious techniques work. In order to make failure-oblivious computing to have an impact in practice, we need to deeply understand failure-oblivious behaviors in software. In this paper, we study, design and perform an experiment that analyzes the size and the diversity of the failure-oblivious behaviors. Our experiment consists of exhaustively computing the search space of 16 field failures of large-scale open-source Java software. The outcome of this experiment is a much better understanding of what really happens when failure-oblivious computing is used, and this opens new promising research directions. | @cite_10 introduces the idea of “recovery shepherding” in a system called RCV. Upon certain errors (null dereferences and divide by zero), recovery shepherding consists in returning a manufactured value, as for failure-oblivious computing. The key idea of recovery shepherding is to track the manufactured values so as to see 1) whether they are passed to system calls or files and 2) whether they disappear. The key difference with our work lies in the reasoning about the effect of the combinations (by storing and keeping information about the actual valid decision sequences). | {
"cite_N": [
"@cite_10"
],
"mid": [
"2080640552"
],
"abstract": [
"We present a system, RCV, for enabling software applications to survive divide-by-zero and null-dereference errors. RCV operates directly on off-the-shelf, production, stripped x86 binary executables. RCV implements recovery shepherding, which attaches to the application process when an error occurs, repairs the execution, tracks the repair effects as the execution continues, contains the repair effects within the application process, and detaches from the process after all repair effects are flushed from the process state. RCV therefore incurs negligible overhead during the normal execution of the application. We evaluate RCV on all divide-by-zero and null-dereference errors available in the CVE database [2] from January 2011 to March 2013 that 1) provide publicly-available inputs that trigger the error which 2) we were able to use to trigger the reported error in our experimental environment. We collected a total of 18 errors in seven real world applications, Wireshark, the FreeType library, Claws Mail, LibreOffice, GIMP, the PHP interpreter, and Chromium. For 17 of the 18 errors, RCV enables the application to continue to execute to provide acceptable output and service to its users on the error-triggering inputs. For 13 of the 18 errors, the continued RCV execution eventually flushes all of the repair effects and RCV detaches to restore the application to full clean functionality. We perform a manual analysis of the source code relevant to our benchmark errors, which indicates that for 11 of the 18 errors the RCV and later patched versions produce identical or equivalent results on all inputs."
]
} |
1710.09722 | 2765935404 | High-availability of software systems requires automated handling of crashes in presence of errors. Failure-oblivious computing is one technique that aims to achieve high availability. We note that failure-obliviousness has not been studied in depth yet, and there is very few study that helps understand why failure-oblivious techniques work. In order to make failure-oblivious computing to have an impact in practice, we need to deeply understand failure-oblivious behaviors in software. In this paper, we study, design and perform an experiment that analyzes the size and the diversity of the failure-oblivious behaviors. Our experiment consists of exhaustively computing the search space of 16 field failures of large-scale open-source Java software. The outcome of this experiment is a much better understanding of what really happens when failure-oblivious computing is used, and this opens new promising research directions. | @cite_1 presents a system to defend against deadlocks at runtime. The system first detects synchronization patterns of deadlocks, and when the pattern is detected, the system avoids re-occurrences of the deadlock with additional locks. The pattern detection is related to the detector of instances of the fault model under consideration. However, they do not explore or compare alternative locking strategies. We note that our protocol may be plugged on top of their systems to explore the search space of locking sequences. | {
"cite_N": [
"@cite_1"
],
"mid": [
"1591458180"
],
"abstract": [
"Deadlock immunity is a property by which programs, once afflicted by a given deadlock, develop resistance against future occurrences of that and similar deadlocks. We describe a technique that enables programs to automatically gain such immunity without assistance from programmers or users. We implemented the technique for both Java and POSIX threads and evaluated it with several real systems, including MySQL, JBoss, SQLite, Apache ActiveMQ, Limewire, and Java JDK. The results demonstrate effectiveness against real deadlock bugs, while incurring modest performance overhead and scaling to 1024 threads. We therefore conclude that deadlock immunity offers programmers and users an attractive tool for coping with elusive deadlocks."
]
} |
1710.09722 | 2765935404 | High-availability of software systems requires automated handling of crashes in presence of errors. Failure-oblivious computing is one technique that aims to achieve high availability. We note that failure-obliviousness has not been studied in depth yet, and there is very few study that helps understand why failure-oblivious techniques work. In order to make failure-oblivious computing to have an impact in practice, we need to deeply understand failure-oblivious behaviors in software. In this paper, we study, design and perform an experiment that analyzes the size and the diversity of the failure-oblivious behaviors. Our experiment consists of exhaustively computing the search space of 16 field failures of large-scale open-source Java software. The outcome of this experiment is a much better understanding of what really happens when failure-oblivious computing is used, and this opens new promising research directions. | Hosek and Cadar @cite_20 switch between application versions when a bug is detected. This technique can handle failures because some bugs disappear while others appear between versions. We can also use our protocol to systematically explore the sequences of runtime jumps across versions. | {
"cite_N": [
"@cite_20"
],
"mid": [
"2151497944"
],
"abstract": [
"Software systems are constantly evolving, with new versions and patches being released on a continuous basis. Unfortunately, software updates present a high risk, with many releases introducing new bugs and security vulnerabilities. We tackle this problem using a simple but effective multi-version based approach. Whenever a new update becomes available, instead of upgrading the software to the new version, we run the new version in parallel with the old one; by carefully coordinating their executions and selecting the behaviour of the more reliable version when they diverge, we create a more secure and dependable multi-version application. We implemented this technique in Mx, a system targeting Linux applications running on multi-core processors, and show that it can be applied successfully to several real applications such as Coreutils, a set of user-level UNIX applications; Lighttpd, a popular web server used by several high-traffic websites such as Wikipedia and YouTube; and Redis, an advanced key-value data structure server used by many well-known services such as GitHub and Flickr."
]
} |
1710.09722 | 2765935404 | High-availability of software systems requires automated handling of crashes in presence of errors. Failure-oblivious computing is one technique that aims to achieve high availability. We note that failure-obliviousness has not been studied in depth yet, and there is very few study that helps understand why failure-oblivious techniques work. In order to make failure-oblivious computing to have an impact in practice, we need to deeply understand failure-oblivious behaviors in software. In this paper, we study, design and perform an experiment that analyzes the size and the diversity of the failure-oblivious behaviors. Our experiment consists of exhaustively computing the search space of 16 field failures of large-scale open-source Java software. The outcome of this experiment is a much better understanding of what really happens when failure-oblivious computing is used, and this opens new promising research directions. | Assure @cite_14 is a self-healing system based on checkpointing and error virtualization. Error virtualization consists of handling an unknown and unrecoverable error with error handling code that is already present in the system yet designed for handling other errors. While Assure does self-healing by opportunistic reuse of already present recovery code, our failure-oblivious model handles failures by modifying the state or flow. | {
"cite_N": [
"@cite_14"
],
"mid": [
"2152475836"
],
"abstract": [
"Software failures in server applications are a significant problem for preserving system availability. We present ASSURE, a system that introduces rescue points that recover software from unknown faults while maintaining both system integrity and availability, by mimicking system behavior under known error conditions. Rescue points are locations in existing application code for handling a given set of programmer-anticipated failures, which are automatically repurposed and tested for safely enabling fault recovery from a larger class of (unanticipated) faults. When a fault occurs at an arbitrary location in the program, ASSURE restores execution to an appropriate rescue point and induces the program to recover execution by virtualizing the program's existing error-handling facilities. Rescue points are identified using fuzzing, implemented using a fast coordinated checkpoint-restart mechanism that handles multi-process and multi-threaded applications, and, after testing, are injected into production code using binary patching. We have implemented an ASSURE Linux prototype that operates without application source code and without base operating system kernel changes. Our experimental results on a set of real-world server applications and bugs show that ASSURE enabled recovery for all of the bugs tested with fast recovery times, has modest performance overhead, and provides automatic self-healing orders of magnitude faster than current human-driven patch deployment methods."
]
} |
1710.09722 | 2765935404 | High-availability of software systems requires automated handling of crashes in presence of errors. Failure-oblivious computing is one technique that aims to achieve high availability. We note that failure-obliviousness has not been studied in depth yet, and there is very few study that helps understand why failure-oblivious techniques work. In order to make failure-oblivious computing to have an impact in practice, we need to deeply understand failure-oblivious behaviors in software. In this paper, we study, design and perform an experiment that analyzes the size and the diversity of the failure-oblivious behaviors. Our experiment consists of exhaustively computing the search space of 16 field failures of large-scale open-source Java software. The outcome of this experiment is a much better understanding of what really happens when failure-oblivious computing is used, and this opens new promising research directions. | @cite_21 repair web applications at runtime with a set of manually written, API-specific alternative rules. This set can be seen as a hardcoded set of failure-oblivious decision sequences. On the contrary, we do not require a list of alternatives but instead rely on an abstract failure-oblivious model that is automatically instantiated at runtime. | {
"cite_N": [
"@cite_21"
],
"mid": [
"2007777090"
],
"abstract": [
"We present a technique that finds and executes workarounds for faulty Web applications automatically and at runtime. Automatic workarounds exploit the inherent redundancy of Web applications, whereby a functionality of the application can be obtained through different sequences of invocations of Web APIs. In general, runtime workarounds are applied in response to a failure, and require that the application remain in a consistent state before and after the execution of a workaround. Therefore, they are ideally suited for interactive Web applications, since those allow the user to act as a failure detector with minimal effort, and also either use read-only state or manage their state through a transactional data store. In this paper we focus on faults found in the access libraries of widely used Web applications such as Google Maps. We start by classifying a number of reported faults of the Google Maps and YouTube APIs that have known workarounds. From those we derive a number of general and API-specific program-rewriting rules, which we then apply to other faults for which no workaround is known. Our experiments show that workarounds can be readily deployed within Web applications, through a simple client-side plug-in, and that program-rewriting rules derived from elementary properties of a common library can be effective in finding valid and previously unknown workarounds."
]
} |
1710.09722 | 2765935404 | High-availability of software systems requires automated handling of crashes in presence of errors. Failure-oblivious computing is one technique that aims to achieve high availability. We note that failure-obliviousness has not been studied in depth yet, and there is very few study that helps understand why failure-oblivious techniques work. In order to make failure-oblivious computing to have an impact in practice, we need to deeply understand failure-oblivious behaviors in software. In this paper, we study, design and perform an experiment that analyzes the size and the diversity of the failure-oblivious behaviors. Our experiment consists of exhaustively computing the search space of 16 field failures of large-scale open-source Java software. The outcome of this experiment is a much better understanding of what really happens when failure-oblivious computing is used, and this opens new promising research directions. | Berger and Zorn @cite_11 show that it is possible to effectively tolerate memory errors and provide probabilistic memory safety by randomizing the memory allocation and providing memory replication. Exterminator @cite_7 provides more sophisticated fault tolerance than @cite_11 by performing fault localization before applying memory padding. The work by @cite_17 exploits a specific hardware feature called ECC-memory for detecting illegal memory accesses at runtime. The idea of the paper is to use the consistency checks of the ECC-memory to detect illegal memory accesses (for instance due to buffer overflow). Both techniques are semantically equivalent in the normal case. We have reasoned about the search space of execution modifications that are not semantically equivalent, where one taken decision can impact the rest of the computation. | {
"cite_N": [
"@cite_17",
"@cite_7",
"@cite_11"
],
"mid": [
"2098809490",
"2130745898",
"2136938453"
],
"abstract": [
"Memory leaks and memory corruption are two major forms of software bugs that severely threaten system availability and security. According to the US-CERT vulnerability notes database, 68% of all reported vulnerabilities in 2003 were caused by memory leaks or memory corruption. Dynamic monitoring tools, such as the state-of-the-art Purify, are commonly used to detect memory leaks and memory corruption. However, most of these tools suffer from high overhead, with up to a 20 times slowdown, making them infeasible to be used for production-runs. This paper proposes a tool called SafeMem to detect memory leaks and memory corruption on-the-fly during production-runs. This tool does not rely on any new hardware support. Instead, it makes a novel use of existing ECC memory technology and exploits intelligent dynamic memory usage behavior analysis to detect memory leaks and corruption. We have evaluated SafeMem with seven real-world applications that contain memory leak or memory corruption bugs. SafeMem detects all tested bugs with low overhead (only 1.6%-14.4%), 2-3 orders of magnitude smaller than Purify. Our results also show that ECC-protection is effective in pruning false positives for memory leak detection, and in reducing the amount of memory waste (by a factor of 64-74) used for memory monitoring in memory corruption detection compared to page-protection.",
"Programs written in C and C++ are susceptible to memory errors, including buffer overflows and dangling pointers. These errors, which can lead to crashes, erroneous execution, and security vulnerabilities, are notoriously costly to repair. Tracking down their location in the source code is difficult, even when the full memory state of the program is available. Once the errors are finally found, fixing them remains challenging: even for critical security-sensitive bugs, the average time between initial reports and the issuance of a patch is nearly one month. We present Exterminator, a system that automatically corrects heap-based memory errors without programmer intervention. Exterminator exploits randomization to pinpoint errors with high precision. From this information, Exterminator derives runtime patches that fix these errors both in current and subsequent executions. In addition, Exterminator enables collaborative bug correction by merging patches generated by multiple users. We present analytical and empirical results that demonstrate Exterminator's effectiveness at detecting and correcting both injected and real faults.",
"Applications written in unsafe languages like C and C++ are vulnerable to memory errors such as buffer overflows, dangling pointers, and reads of uninitialized data. Such errors can lead to program crashes, security vulnerabilities, and unpredictable behavior. We present DieHard, a runtime system that tolerates these errors while probabilistically maintaining soundness. DieHard uses randomization and replication to achieve probabilistic memory safety by approximating an infinite-sized heap. DieHard's memory manager randomizes the location of objects in a heap that is at least twice as large as required. This algorithm prevents heap corruption and provides a probabilistic guarantee of avoiding memory errors. For additional safety, DieHard can operate in a replicated mode where multiple replicas of the same application are run simultaneously. By initializing each replica with a different random seed and requiring agreement on output, the replicated version of DieHard increases the likelihood of correct execution because errors are unlikely to have the same effect across all replicas. We present analytical and experimental results that show DieHard's resilience to a wide range of memory errors, including a heap-based buffer overflow in an actual application."
]
} |
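The randomized-allocation idea behind DieHard (cited above as @cite_11) can be illustrated with a toy Monte Carlo estimate; the heap sizes and trial counts are illustrative, not taken from the paper:

```python
import random

def collision_probability(n_objects, heap_slots, trials=2000, seed=0):
    """Estimate how often a one-slot buffer overflow corrupts a live
    object when objects are placed at random slots in an oversized heap,
    in the spirit of DieHard's randomized allocator."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        slots = rng.sample(range(heap_slots), n_objects)  # random distinct placement
        occupied = set(slots)
        # an overflow writes one slot past the end of the first object
        if (slots[0] + 1) in occupied:
            hits += 1
    return hits / trials

# A heap at least twice as large as required makes corruption unlikely.
p_tight = collision_probability(n_objects=50, heap_slots=51)
p_loose = collision_probability(n_objects=50, heap_slots=200)
assert p_loose < p_tight
```

With a nearly full heap the overflow almost always lands on a live object; with a 4x-oversized heap the collision probability drops sharply, which is the probabilistic safety guarantee the abstract describes.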
1710.09722 | 2765935404 | High-availability of software systems requires automated handling of crashes in the presence of errors. Failure-oblivious computing is one technique that aims to achieve high availability. We note that failure-obliviousness has not been studied in depth yet, and there are very few studies that help understand why failure-oblivious techniques work. For failure-oblivious computing to have an impact in practice, we need to deeply understand failure-oblivious behaviors in software. In this paper, we design and perform an experiment that analyzes the size and the diversity of the failure-oblivious behaviors. Our experiment consists of exhaustively computing the search space of 16 field failures of large-scale open-source Java software. The outcome of this experiment is a much better understanding of what really happens when failure-oblivious computing is used, and this opens new promising research directions. | @cite_3 present a technique to assist developers in locating the root cause of memory errors. In this work, Jeffrey et al. suppress the execution of the statement that produces the failure and repeat this procedure until the execution of the program does not fail. The last suppressed statement should, according to Jeffrey et al., be close to the root cause of the memory error. This approach uses the failure-oblivious strategy to continue the execution of the program in order to gain knowledge. In this case, they want to identify the root cause of the memory error. The main difference is that they don't use the failure-oblivious technique to fix the application but to gain knowledge during the execution of the program. We focus on the failure-oblivious computing search space to understand the failure-oblivious behavior, and we also consider different failure-oblivious strategies to handle failures. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2005139304"
],
"abstract": [
"By studying the behavior of several programs that crash due to memory errors, we observed that locating the errors can be challenging because significant propagation of corrupt memory values can occur prior to the point of the crash. In this article, we present an automated approach for locating memory errors in the presence of memory corruption propagation. Our approach leverages the information revealed by a program crash: when a crash occurs, this reveals a subset of the memory corruption that exists in the execution. By suppressing (nullifying) the effect of this known corruption during execution, the crash is avoided and any remaining (hidden) corruption may then be exposed by subsequent crashes. The newly exposed corruption can then be suppressed in turn. By iterating this process until no further crashes occur, the first point of memory corruption—and the likely root cause of the program failure—can be identified. However, this iterative approach may terminate prematurely, since programs may not crash even when memory corruption is present during execution. To address this, we show how crashes can be exposed in an execution by manipulating the relative ordering of particular variables within memory. By revealing crashes through this variable re-ordering, the effectiveness and applicability of the execution suppression approach can be improved. We describe a set of experiments illustrating the effectiveness of our approach in consistently and precisely identifying the first points of memory corruption in executions that fail due to memory errors. We also discuss a baseline software implementation of execution suppression that incurs an average overhead of 7.2x, and describe how to reduce this overhead to 1.8x through hardware support."
]
} |
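The iterative execution-suppression loop described above (suppress the crashing statement, re-run, repeat until no crash; the last suppression approximates the root cause) can be sketched as follows; the toy program and crash ordering are invented for illustration:

```python
def locate_root_cause(program, max_iters=100):
    """Iteratively suppress the statement that crashes until the run
    succeeds; the last suppressed statement approximates the root cause.
    `program(suppressed)` runs with the given statements nullified and
    returns the index of the crashing statement, or None on success."""
    suppressed = set()
    last = None
    for _ in range(max_iters):
        crash_site = program(frozenset(suppressed))
        if crash_site is None:
            return last        # no more crashes: last suppression is the culprit
        suppressed.add(crash_site)
        last = crash_site
    return last

# Toy program: statement 2 corrupts memory, which makes 7 and then 5 crash.
def toy(suppressed):
    for site in (7, 5, 2):     # crashes surface in this order
        if site not in suppressed:
            return site
    return None

assert locate_root_cause(toy) == 2
```

Note how the hidden corruption at statement 2 is only exposed after the two downstream crash sites are nullified, matching the propagation behavior the abstract describes.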
1710.09871 | 2766253253 | Robots are finding new applications where physical interaction with a human is necessary, such as manufacturing, healthcare, and social tasks. Accordingly, the field of physical human–robot interaction (pHRI) has leveraged impedance control approaches, which support compliant interactions between human and robot. However, a limitation of traditional impedance control is that—despite provisions for the human to modify the robot's current trajectory—the human cannot affect the robot's future desired trajectory through pHRI. In this paper, we present an algorithm for physically interactive trajectory deformations which, when combined with impedance control, allows the human to modulate both the actual and desired trajectories of the robot. Unlike related works, our method explicitly deforms the future desired trajectory based on forces applied during pHRI, but does not require constant human guidance. We present our approach and verify that this method is compatible with traditional impedance control. Next, we use constrained optimization to derive the deformation shape. Finally, we describe an algorithm for real-time implementation, and perform simulations to test the arbitration parameters. Experimental results demonstrate reduction in the human's effort and improvement in the movement quality when compared to pHRI with impedance control alone. | As previously mentioned, the robot can actively follow the human's desired trajectory during pHRI; for example, @cite_6 uses a Kalman filter to track the human's desired timing of a point-to-point cooperative motion. Erden and Tomiyama @cite_2 measure the controller force as a means to detect the human's intent and update the robot's desired position---when the human stops interacting with the robot, the robot maintains its most recent position. 
Similarly, Li and Ge @cite_34 employ neural networks to learn the mapping from measured inputs to the human's desired position, which the robot then tracks using an impedance controller. Although the human is able to change the robot's desired behavior, these methods require the human to guide the robot along their intended trajectory. | {
"cite_N": [
"@cite_34",
"@cite_6",
"@cite_2"
],
"mid": [
"2043536379",
"2147653242",
"2109590511"
],
"abstract": [
"In this paper, adaptive impedance control is proposed for a robot collaborating with a human partner, in the presence of unknown motion intention of the human partner and unknown robot dynamics. Human motion intention is defined as the desired trajectory in the limb model of the human partner, which is extremely difficult to obtain considering the nonlinear and time-varying property of the limb model. Neural networks are employed to cope with this problem, based on which an online estimation method is developed. The estimated motion intention is integrated into the developed adaptive impedance control, which makes the robot follow a given target impedance model. Under the proposed method, the robot is able to actively collaborate with its human partner, which is verified through experiment studies.",
"A first step towards truly versatile robot assistants consists of building up experience with simple tasks such as the cooperative manipulation of objects. This paper extends the state-of-the-art by developing an assistant which actively cooperates during the point-to-point transportation of an object. Besides using admittance control to react to interaction forces generated by its operator, the robot estimates the intended human motion and uses this identified motion to move along with the operator. The offered level of assistance can be scaled, which is vital to give the operator the opportunity to gradually learn how to interact with the system. Experiments revealed that, while the robot is programmed to adapt to the human motion, the operator also adapts to the offered assistance. When using the robot assistant the required forces to move the load are greatly reduced and the operators report that the assistance feels comfortable and natural.",
"In this paper, a physically interactive control scheme is developed for a manipulator robot arm. The human touches the robot and applies force in order to make it behave as he she likes. The communication between the robot and the human is maintained by a physical contact with no sensors. The intent of the human is estimated by observing the change in control effort. The robot receives the estimated human intent and updates its position reference accordingly. The developed method uses the principle of conservation of zero momentum for position-controlled systems. A switching scheme is developed that goes between the modes of pure impedance control with a fixed-position reference and interactive control under human intent. The switching mechanism uses neither a physical switch nor a sensor; it observes the human intent and puts the robot into interactive mode, if there is any. When the human intent disappears, the robot goes into the pure-impedance-control mode, thus stabilizing in the left position."
]
} |
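The limitation of traditional impedance control discussed in the row above (the human can deflect the robot's actual trajectory but not its fixed desired trajectory) shows up in a minimal 1-DoF simulation; the mass, damping, and stiffness values here are illustrative, not the paper's:

```python
def simulate_impedance(desired, f_human, m=1.0, b=8.0, k=40.0, dt=0.01):
    """Semi-implicit Euler simulation of the 1-DoF impedance law
    m*xdd + b*xd + k*(x - x_des) = f_h with a fixed desired trajectory."""
    x, xd = desired[0], 0.0
    actual = []
    for x_des, f in zip(desired, f_human):
        xdd = (f - b * xd - k * (x - x_des)) / m
        xd += xdd * dt
        x += xd * dt
        actual.append(x)
    return actual

T = 300
desired = [0.0] * T                                   # fixed desired position
forces = [5.0 if 100 <= t < 200 else 0.0 for t in range(T)]
actual = simulate_impedance(desired, forces)
assert max(actual) > 0.05       # the human's push deflects the robot...
assert abs(actual[-1]) < 0.02   # ...but it returns to the unchanged desired path
```

The push deflects the robot toward the equilibrium f/k, and once the human lets go the robot snaps back to the unchanged desired trajectory; modulating that future desired trajectory through pHRI is exactly what the paper's deformation algorithm adds.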
1710.09871 | 2766253253 | Robots are finding new applications where physical interaction with a human is necessary, such as manufacturing, healthcare, and social tasks. Accordingly, the field of physical human–robot interaction (pHRI) has leveraged impedance control approaches, which support compliant interactions between human and robot. However, a limitation of traditional impedance control is that—despite provisions for the human to modify the robot's current trajectory—the human cannot affect the robot's future desired trajectory through pHRI. In this paper, we present an algorithm for physically interactive trajectory deformations which, when combined with impedance control, allows the human to modulate both the actual and desired trajectories of the robot. Unlike related works, our method explicitly deforms the future desired trajectory based on forces applied during pHRI, but does not require constant human guidance. We present our approach and verify that this method is compatible with traditional impedance control. Next, we use constrained optimization to derive the deformation shape. Finally, we describe an algorithm for real-time implementation, and perform simulations to test the arbitration parameters. Experimental results demonstrate reduction in the human's effort and improvement in the movement quality when compared to pHRI with impedance control alone. | In contrast with @cite_6 @cite_47 @cite_50 , shared control for comanipulation instead allows both the human and robot to dynamically exchange leader and follower roles @cite_24 @cite_39 @cite_37 @cite_9 . In work by Li et al. @cite_48 , game theory is used to adaptively determine the robot's role, such that the robot gradually becomes a leader when the human does not exert significant interaction forces. Kucukyilmaz et al. @cite_24 have a similar criterion for role exchange, but find that performance decreases when visual and vibrotactile feedback informs the human about the robot's current role. Medina et al. @cite_32 utilize stochastic data showing how humans have previously completed the task; at times when the robot's prediction does not match the human's behavior, prediction uncertainty and risk-sensitive optimal control decide how much assistance the robot should provide. We note that shared control methods such as @cite_15 , @cite_48 , and @cite_32 leverage optimal control theory in order to modulate the controller feedback gains, but---unlike our proposed approach---they track a fixed desired trajectory. | {
"cite_N": [
"@cite_37",
"@cite_15",
"@cite_48",
"@cite_9",
"@cite_32",
"@cite_6",
"@cite_39",
"@cite_24",
"@cite_50",
"@cite_47"
],
"mid": [
"",
"1997932767",
"1967904302",
"",
"2038581140",
"2147653242",
"",
"2103075707",
"",
""
],
"abstract": [
"",
"While motor interaction between a robot and a human, or between humans, has important implications for society as well as promising applications, little research has been devoted to its investigation. In particular, it is important to understand the different ways two agents can interact and generate suitable interactive behaviors. Towards this end, this paper introduces a framework for the description and implementation of interactive behaviors of two agents performing a joint motor task. A taxonomy of interactive behaviors is introduced, which can classify tasks and cost functions that represent the way each agent interacts. The role of an agent interacting during a motor task can be directly explained from the cost function this agent is minimizing and the task constraints. The novel framework is used to interpret and classify previous works on human-robot motor interaction. Its implementation power is demonstrated by simulating representative interactions of two humans. It also enables us to interpret and explain the role distribution and switching between roles when performing joint motor tasks.",
"In this paper, we propose a role adaptation method for human-robot shared control. Game theory is employed for fundamental analysis of this two-agent system. An adaptation law is developed such that the robot is able to adjust its own role according to the human's intention to lead or follow, which is inferred through the measured interaction force. In the absence of human interaction forces, the adaptive scheme allows the robot to take the lead and complete the task by itself. On the other hand, when the human persistently exerts strong forces that signal an unambiguous intent to lead, the robot yields and becomes the follower. Additionally, the full spectrum of mixed roles between these extreme scenarios is afforded by continuous online update of the control that is shared between both agents. Theoretical analysis shows that the resulting shared control is optimal with respect to a two-agent coordination game. Experimental results illustrate better overall performance, in terms of both error and effort, compared with fixed-role interactions.",
"",
"Intuitive and effective physical assistance is an essential requirement for robots sharing their workspace with humans. Application domains range from manufacturing and service robotics via rehabilitation and mobility aids to education and training. In this context, assistance based on human behavior anticipation has shown superior performance in terms of human effort minimization. However, when a robot's expectations mismatch a human's intentions, undesired interaction forces appear, incurring safety risks and discomfort. Human behavior prediction is, therefore, a crucial issue: It enables effective anticipation but potentially produces disagreements when prediction errors occur. In this paper, we present a novel control scheme for anticipatory haptic assistance where robot behavior adapts to prediction uncertainty. Following a data-driven stochastic modeling approach, robot assistance is synthesized solving a risk-sensitive optimal control problem, where the cost function and plant dynamics are affected by model uncertainty. The proposed approach is objectively and subjectively evaluated in an experiment with human users. Results indicate that our method outperforms other assistive control approaches in terms of perceived helpfulness and human effort minimization.",
"A first step towards truly versatile robot assistants consists of building up experience with simple tasks such as the cooperative manipulation of objects. This paper extends the state-of-the-art by developing an assistant which actively cooperates during the point-to-point transportation of an object. Besides using admittance control to react to interaction forces generated by its operator, the robot estimates the intended human motion and uses this identified motion to move along with the operator. The offered level of assistance can be scaled, which is vital to give the operator the opportunity to gradually learn how to interact with the system. Experiments revealed that, while the robot is programmed to adapt to the human motion, the operator also adapts to the offered assistance. When using the robot assistant the required forces to move the load are greatly reduced and the operators report that the assistance feels comfortable and natural.",
"",
"In human-computer collaboration involving haptics, a key issue that remains to be solved is to establish an intuitive communication between the partners. Even though computers are widely used to aid human operators in teleoperation, guidance, and training, because they lack the adaptability, versatility, and awareness of a human, their ability to improve efficiency and effectiveness in dynamic tasks is limited. We suggest that the communication between a human and a computer can be improved if it involves a decision-making process in which the computer is programmed to infer the intentions of the human operator and dynamically adjust the control levels of the interacting parties to facilitate a more intuitive interaction setup. In this paper, we investigate the utility of such a dynamic role exchange mechanism, where partners negotiate through the haptic channel to trade their control levels on a collaborative task. We examine the energy consumption, the work done on the manipulated object, and the joint efficiency in addition to the task performance. We show that when compared to an equal control condition, a role exchange mechanism improves task performance and the joint efficiency of the partners. We also show that augmenting the system with additional informative visual and vibrotactile cues, which are used to display the state of interaction, allows the users to become aware of the underlying role exchange mechanism and utilize it in favor of the task. These cues also improve the user's sense of interaction and reinforce his her belief that the computer aids with the execution of the task.",
"",
""
]
} |
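The continuous leader/follower role exchange attributed to Li et al. @cite_48 above can be sketched as a simple adaptation law; this is a simplified stand-in for their game-theoretic formulation, with invented gains and force scales:

```python
def robot_authority(f_history, alpha=0.9, f_scale=10.0):
    """Arbitrate leader/follower roles from interaction forces:
    sustained human force drives the robot's control authority
    toward 0 (human leads); no interaction lets it recover toward
    1 (robot leads). Gains here are illustrative."""
    a = 1.0
    for f in f_history:
        target = 1.0 / (1.0 + abs(f) / f_scale)   # strong force -> low target
        a = alpha * a + (1 - alpha) * target      # smooth, continuous role exchange
    return a

assert robot_authority([0.0] * 50) > 0.95         # no force: robot keeps the lead
assert robot_authority([30.0] * 50) < 0.35        # persistent force: human takes over
```

The full spectrum of mixed roles between the two extremes falls out of the continuous update, matching the description of the adaptive scheme in the cited abstract.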
1710.09871 | 2766253253 | Robots are finding new applications where physical interaction with a human is necessary, such as manufacturing, healthcare, and social tasks. Accordingly, the field of physical human–robot interaction (pHRI) has leveraged impedance control approaches, which support compliant interactions between human and robot. However, a limitation of traditional impedance control is that—despite provisions for the human to modify the robot's current trajectory—the human cannot affect the robot's future desired trajectory through pHRI. In this paper, we present an algorithm for physically interactive trajectory deformations which, when combined with impedance control, allows the human to modulate both the actual and desired trajectories of the robot. Unlike related works, our method explicitly deforms the future desired trajectory based on forces applied during pHRI, but does not require constant human guidance. We present our approach and verify that this method is compatible with traditional impedance control. Next, we use constrained optimization to derive the deformation shape. Finally, we describe an algorithm for real-time implementation, and perform simulations to test the arbitration parameters. Experimental results demonstrate reduction in the human's effort and improvement in the movement quality when compared to pHRI with impedance control alone. | For shared control situations where the robot is continually in contact with an unpredictable environment---such as during a human-robot sawing task--- @cite_0 propose multi-modal communication interfaces, including force, myoelectric, and visual sensors. By contrast, we consider tasks where the robot is attempting to avoid obstacles, and we focus on using pHRI forces without additional feedback. Besides comanipulation, shared control has also been applied to teleoperation, where the human interacts with a haptic device, and that device commands the motions of an external robot. 
In work by @cite_33 @cite_49 , the authors leverage haptic devices to tune the desired trajectory parameters of a quadrotor in real time. These proposed adjustments are then autonomously corrected by the system to ensure path feasibility, regularity, and collision avoidance; afterwards, the haptic devices offer feedback about the resulting trajectory deformation. | {
"cite_N": [
"@cite_0",
"@cite_33",
"@cite_49"
],
"mid": [
"2564917287",
"2056600097",
""
],
"abstract": [
"This paper presents a novel approach for human-robot cooperation in tasks with dynamic uncertainties. The essential element of the proposed method is a multi-modal interface that provides the robot with the feedback about the human motor behaviour in real-time. The human muscle activity measurements and the arm force manipulability properties encode the information about the motion and impedance, and the intended configuration of the task frame, respectively. Through this human-in-the-loop framework, the developed hybrid controller of the robot can adapt its actions to provide the desired motion and impedance regulation in different phases of the cooperative task. We experimentally evaluate the proposed approach in a two-person sawing task that requires an appropriate complementary behaviour from the two agents.",
"This work extends the framework of bilateral shared control of mobile robots with the aim of increasing the robot autonomy and decreasing the operator commitment. We consider persistent autonomous behaviors where a cyclic motion must be executed by the robot. The human operator is in charge of modifying online some geometric properties of the desired path. This is then autonomously processed by the robot in order to produce an actual path guaranteeing: i) tracking feasibility, ii) collision avoidance with obstacles, iii) closeness to the desired path set by the human operator, and iv) proximity to some points of interest. A force feedback is implemented to inform the human operator of the global deformation of the path rather than using the classical mismatch between desired and executed motion commands. Physically-based simulations, with human hardware-in-the-loop and a quadrotor UAV as robotic platform, demonstrate the feasibility of the method.",
""
]
} |
1710.09871 | 2766253253 | Robots are finding new applications where physical interaction with a human is necessary, such as manufacturing, healthcare, and social tasks. Accordingly, the field of physical human–robot interaction (pHRI) has leveraged impedance control approaches, which support compliant interactions between human and robot. However, a limitation of traditional impedance control is that—despite provisions for the human to modify the robot's current trajectory—the human cannot affect the robot's future desired trajectory through pHRI. In this paper, we present an algorithm for physically interactive trajectory deformations which, when combined with impedance control, allows the human to modulate both the actual and desired trajectories of the robot. Unlike related works, our method explicitly deforms the future desired trajectory based on forces applied during pHRI, but does not require constant human guidance. We present our approach and verify that this method is compatible with traditional impedance control. Next, we use constrained optimization to derive the deformation shape. Finally, we describe an algorithm for real-time implementation, and perform simulations to test the arbitration parameters. Experimental results demonstrate reduction in the human's effort and improvement in the movement quality when compared to pHRI with impedance control alone. | Interestingly, even in settings where pHRI does not occur, other works have used the human's actions to cause changes in the robot's desired trajectory. Mainprice and Berenson @cite_18 present one such scheme, where the robot explicitly tries to avoid collisions with the human. Based on a prediction of the human's workspace occupancy, the robot selects the desired trajectory which minimizes human-robot interference and task completion time. 
Indeed, as pointed out by Chao and Thomaz @cite_3 , if the human and robot are working together in close proximity---but wish to avoid physical contact---the workspace becomes a shared resource. To support these methods, human-subject studies have experimentally found that deforming the desired trajectory in response to human actions objectively and subjectively improves human-robot collaboration @cite_27 . However, it is not necessarily clear which trajectory deformation is optimal; as a result, there is interest in understanding how humans modify their own trajectories during similar situations. Pham and Nakamura @cite_21 develop a trajectory deformation algorithm which preserves the original trajectory's affine invariant features, with applications in transferring recorded human motions to humanoid robots. | {
"cite_N": [
"@cite_27",
"@cite_18",
"@cite_21",
"@cite_3"
],
"mid": [
"2162130878",
"1992461594",
"2141275328",
"2295887377"
],
"abstract": [
"Objective: The objective of this work was to examine human response to motion-level robot adaptation to determine its effect on team fluency, human satisfaction, and perceived safety and comfort. Background: The evaluation of human response to adaptive robotic assistants has been limited, particularly in the realm of motion-level adaptation. The lack of true human-in-the-loop evaluation has made it impossible to determine whether such adaptation would lead to efficient and satisfying human–robot interaction. Method: We conducted an experiment in which participants worked with a robot to perform a collaborative task. Participants worked with an adaptive robot incorporating human-aware motion planning and with a baseline robot using shortest-path motions. Team fluency was evaluated through a set of quantitative metrics, and human satisfaction and perceived safety and comfort were evaluated through questionnaires. Results: When working with the adaptive robot, participants completed the task 5.57% faster, with 19.9...",
"In this paper we present a framework that allows a human and a robot to perform simultaneous manipulation tasks safely in close proximity. The proposed framework is based on early prediction of the human's motion. The prediction system, which builds on previous work in the area of gesture recognition, generates a prediction of human workspace occupancy by computing the swept volume of learned human motion trajectories. The motion planner then plans robot trajectories that minimize a penetration cost in the human workspace occupancy while interleaving planning and execution. Multiple plans are computed in parallel, one for each robot task available at the current time, and the trajectory with the least cost is selected for execution. We test our framework in simulation using recorded human motions and a simulated PR2 robot. Our results show that our framework enables the robot to avoid the human while still accomplishing the robot's task, even in cases where the initial prediction of the human's motion is incorrect. We also show that taking into account the predicted human workspace occupancy in the robot's motion planner leads to safer and more efficient interactions between the user and the robot than only considering the human's current configuration.",
"We propose a new approach to deform robot trajectories based on affine transformations. At the heart of our approach is the concept of affine invariance: Trajectories are deformed in order to avoid unexpected obstacles or to achieve new objectives but, at the same time, certain definite features of the original motions are preserved. Such features include, for instance, trajectory smoothness, periodicity, affine velocity, or more generally, all affine-invariant features, which are of particular importance in human-centered applications. Furthermore, this approach enables one to “convert” the constraints and optimization objectives regarding the deformed trajectory into constraints and optimization objectives regarding the matrix of the deformation in a natural way, making constraints satisfaction and optimization substantially easier and faster in many cases. As illustration, we present an application to the transfer of human movements to humanoid robots while preserving equiaffine velocity, a well-established invariant of human hand movements. Building on the presented affine deformation framework, we finally revisit the concept of trajectory redundancy from the viewpoint of group theory.",
"The goal of this work is to develop computational models of social intelligence that enable robots to work side by side with humans, solving problems and achieving task goals through dialogue and collaborative manipulation. A defining problem of collaborative behavior in an embodied setting is the manner in which multiple agents make use of shared resources. In a situated dialogue, these resources include physical bottlenecks such as objects or spatial regions, and cognitive bottlenecks such as the speaking floor. For a robot to function as an effective collaborative partner with a human, it must be able to seize and yield such resources appropriately according to social expectations. We describe a general framework that uses timed Petri nets for the modeling and execution of robot speech, gaze, gesture, and manipulation for collaboration. The system dynamically monitors resource requirements and availability to control real-time turn-taking decisions over resources that are shared with humans, reasoning about different resource types independently. We evaluate our approach with an experiment in which our robot Simon performs a collaborative assembly task with 26 different human partners, showing that the multimodal reciprocal approach results in superior task performance, fluency, and balance of control."
]
} |
1710.09871 | 2766253253 | Robots are finding new applications where physical interaction with a human is necessary, such as manufacturing, healthcare, and social tasks. Accordingly, the field of physical human–robot interaction (pHRI) has leveraged impedance control approaches, which support compliant interactions between human and robot. However, a limitation of traditional impedance control is that—despite provisions for the human to modify the robot's current trajectory—the human cannot affect the robot's future desired trajectory through pHRI. In this paper, we present an algorithm for physically interactive trajectory deformations which, when combined with impedance control, allows the human to modulate both the actual and desired trajectories of the robot. Unlike related works, our method explicitly deforms the future desired trajectory based on forces applied during pHRI, but does not require constant human guidance. We present our approach and verify that this method is compatible with traditional impedance control. Next, we use constrained optimization to derive the deformation shape. Finally, we describe an algorithm for real-time implementation, and perform simulations to test the arbitration parameters. Experimental results demonstrate a reduction in the human's effort and an improvement in movement quality when compared to pHRI with impedance control alone. | Finally, from a motion planning perspective, optimization methods can be used to find human-like and collision-free desired trajectories by iteratively deforming the initial desired trajectory. For example, in work on redundant manipulators by Brock and Khatib @cite_42 , an initial desired trajectory from start to goal is given, and then potential fields are used to deform this trajectory in response to moving obstacles. More recently, Zucker et al.
developed CHOMP @cite_35 , an optimization approach which uses covariant gradient descent to find the minimum cost desired trajectory; each step down the gradient deforms the previous desired trajectory. STOMP, from @cite_14 , generates a set of noisy deformations around the current desired trajectory, and then combines the beneficial aspects of those deformations to update the desired trajectory. TrajOpt, from @cite_45 , uses sequentially convex optimization to deform the initial desired trajectory---the resulting deformation satisfies both equality and inequality constraints. We observe that the discussed trajectory optimization schemes, @cite_42 @cite_25 @cite_16 @cite_8 @cite_13 , are not intended for pHRI, but have been successfully utilized to share control during teleoperation @cite_43 . | {
"cite_N": [
"@cite_35",
"@cite_14",
"@cite_8",
"@cite_42",
"@cite_43",
"@cite_45",
"@cite_16",
"@cite_13",
"@cite_25"
],
"mid": [
"2161819990",
"2019965290",
"",
"1993999483",
"2105925198",
"2142224528",
"",
"",
""
],
"abstract": [
"In this paper, we present CHOMP (covariant Hamiltonian optimization for motion planning), a method for trajectory optimization invariant to reparametrization. CHOMP uses functional gradient techniques to iteratively improve the quality of an initial trajectory, optimizing a functional that trades off between a smoothness and an obstacle avoidance component. CHOMP can be used to locally optimize feasible trajectories, as well as to solve motion planning queries, converging to low-cost trajectories even when initialized with infeasible ones. It uses Hamiltonian Monte Carlo to alleviate the problem of convergence to high-cost local minima (and for probabilistic completeness), and is capable of respecting hard constraints along the trajectory. We present extensive experiments with CHOMP on manipulation and locomotion tasks, using seven-degree-of-freedom manipulators and a rough-terrain quadruped robot.",
"We present a new approach to motion planning using a stochastic trajectory optimization framework. The approach relies on generating noisy trajectories to explore the space around an initial (possibly infeasible) trajectory, which are then combined to produce an updated trajectory with lower cost. A cost function based on a combination of obstacle and smoothness cost is optimized in each iteration. No gradient information is required for the particular optimization algorithm that we use and so general costs for which derivatives may not be available (e.g. costs corresponding to constraints and motor torques) can be included in the cost function. We demonstrate the approach both in simulation and on a mobile manipulation system for unconstrained and constrained tasks. We experimentally show that the stochastic nature of STOMP allows it to overcome local minima that gradient-based methods like CHOMP can get stuck in.",
"",
"Robotic applications are expanding into dynamic, unstructured, and populated environments. Mechanisms specifically designed to address the challenges arising in these environments, such as humanoid robots, exhibit high kinematic complexity. This creates the need for new algorithmic approaches to motion generation, capable of performing task execution and real-time obstacle avoidance in high-dimensional configuration spaces. The elastic strip framework presented in this paper enables the execution of a previously planned motion in a dynamic environment for robots with many degrees of freedom. To modify a motion in reaction to changes in the environment, real-time obstacle avoidance is combined with desired posture behavior. The modification of a motion can be performed in a task-consistent manner, leaving task execution unaffected by obstacle avoidance and posture behavior. The elastic strip framework also encompasses methods to suspend task behavior when its execution becomes inconsistent with other const...",
"In shared control teleoperation, the robot assists the user in accomplishing the desired task, making teleoperation easier and more seamless. Rather than simply executing the user's input, which is hindered by the inadequacies of the interface, the robot attempts to predict the user's intent, and assists in accomplishing it. In this work, we are interested in the scientific underpinnings of assistance: we propose an intuitive formalism that captures assistance as policy blending, illustrate how some of the existing techniques for shared control instantiate it, and provide a principled analysis of its main components: prediction of user intent and its arbitration with the user input. We define the prediction problem, with foundations in inverse reinforcement learning, discuss simplifying assumptions that make it tractable, and test these on data from users teleoperating a robotic manipulator. We define the arbitration problem from a control-theoretic perspective, and turn our attention to what users consider good arbitration. We conduct a user study that analyzes the effect of different factors on the performance of assistance, indicating that arbitration should be contextual: it should depend on the robot's confidence in itself and in the user, and even the particulars of the user. Based on the study, we discuss challenges and opportunities that a robot sharing the control with the user might face: adaptation to the context and the user, legibility of behavior, and the closed loop between prediction and user behavior.",
"We present a new optimization-based approach for robotic motion planning among obstacles. Like CHOMP (Covariant Hamiltonian Optimization for Motion Planning), our algorithm can be used to find collision-free trajectories from naïve, straight-line initializations that might be in collision. At the core of our approach are (a) a sequential convex optimization procedure, which penalizes collisions with a hinge loss and increases the penalty coefficients in an outer loop as necessary, and (b) an efficient formulation of the no-collisions constraint that directly considers continuous-time safety. Our algorithm is implemented in a software package called TrajOpt. We report results from a series of experiments comparing TrajOpt with CHOMP and randomized planners from OMPL, with regard to planning time and path quality. We consider motion planning for 7 DOF robot arms, 18 DOF full-body robots, statically stable walking motion for the 34 DOF Atlas humanoid robot, and physical experiments with the 18 DOF PR2. We also apply TrajOpt to plan curvature-constrained steerable needle trajectories in the SE(3) configuration space and multiple non-intersecting curved channels within 3D-printed implants for intracavitary brachytherapy. Details, videos, and source code are freely available at: http://rll.berkeley.edu/trajopt/ijrr.",
"",
"",
""
]
} |
1710.09505 | 2765390540 | While deeper and wider neural networks are actively pushing the performance limits of various computer vision and machine learning tasks, they often require large sets of labeled data for effective training and suffer from extremely high computational complexity. In this paper, we will develop a new framework for training deep neural networks on datasets with limited labeled samples using cross-network knowledge projection which is able to improve the network performance while reducing the overall computational complexity significantly. Specifically, a large pre-trained teacher network is used to observe samples from the training data. A projection matrix is learned to project this teacher-level knowledge and its visual representations from an intermediate layer of the teacher network to an intermediate layer of a thinner and faster student network to guide and regulate its training process. Both the intermediate layers from the teacher network and the injection layers from the student network are adaptively selected during training by evaluating a joint loss function in an iterative manner. This knowledge projection framework allows us to use crucial knowledge learned by large networks to guide the training of thinner student networks, avoiding over-fitting, achieving better network performance, and significantly reducing the complexity. Extensive experimental results on benchmark datasets have demonstrated that our proposed knowledge projection approach outperforms existing methods, improving accuracy by up to 4% while reducing network complexity by 4 to 10 times, which is very attractive for practical applications of deep neural networks. | With the dramatically increased demand for computational resources by deep neural networks, there have been considerable efforts in the literature to design smaller and thinner networks from larger pre-trained networks.
A typical approach is to prune unnecessary parameters in trained networks while retaining similar outputs. Instead of simply removing close-to-zero weights in the network, LeCun et al. proposed Optimal Brain Damage (OBD) @cite_24 , which uses second-order derivatives to find a trade-off between performance and model complexity. Hassibi et al. followed this work and proposed Optimal Brain Surgeon (OBS) @cite_9 , which outperforms the original OBD method but is more computationally intensive. Han et al. @cite_12 developed a method to prune state-of-the-art CNN models without loss of accuracy. Building on this work, the deep compression method @cite_57 achieved a better network compression ratio using a combination of parameter pruning, trained quantization, and Huffman coding, yielding a 3 to 4 times layer-wise speed-up and reducing the model size of VGG-16 @cite_20 by 49 times. This line of work focuses on pruning unnecessary connections and weights in trained models and optimizing for better computation and storage efficiency. | {
"cite_N": [
"@cite_9",
"@cite_24",
"@cite_57",
"@cite_12",
"@cite_20"
],
"mid": [
"",
"2114766824",
"2119144962",
"2963674932",
"1686810756"
],
"abstract": [
"",
"We have used information-theoretic ideas to derive a class of practical and nearly optimal schemes for adapting the size of a neural network. By removing unimportant weights from a network, several improvements can be expected: better generalization, fewer training examples required, and improved speed of learning and or classification. The basic idea is to use second-derivative information to make a tradeoff between network complexity and training set error. Experiments confirm the usefulness of the methods on a real-world application.",
"Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce \"deep compression\", a three stage pipeline: pruning, trained quantization and Huffman coding, that work together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing; finally, we apply Huffman coding. After the first two steps we retrain the network to fine tune the remaining connections and the quantized centroids. Pruning reduces the number of connections by 9x to 13x; quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49x from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, the compressed network has 3x to 4x layerwise speedup and 3x to 7x better energy efficiency.",
"Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the total number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.",
"In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision."
]
} |
1710.09306 | 2765440119 | In this paper, we investigate the application of text classification methods to support law professionals. We present several experiments applying machine learning techniques to predict with high accuracy the ruling of the French Supreme Court and the law area to which a case belongs. We also investigate the influence of the time period in which a ruling was made on the form of the case description and the extent to which we need to mask information in a full case ruling to automatically obtain training and test data that resembles case descriptions. We developed a mean probability ensemble system combining the output of multiple SVM classifiers. We report results of a 98% average F1 score in predicting a case ruling, a 96% F1 score for predicting the law area of a case, and an 87.07% F1 score on estimating the date of a ruling. | While text classification methods were investigated and applied with commercial or forensic goals in mind for other areas (e.g. serving better content or products to users through user profiling @cite_9 and sentiment analysis, identifying potential criminals @cite_5 , crimes @cite_3 , or anti-social behavior @cite_15 ), an area where these methods have been under-explored, although both commercial and forensic interests exist, is the legal domain. | {
"cite_N": [
"@cite_5",
"@cite_9",
"@cite_3",
"@cite_15"
],
"mid": [
"2077504940",
"",
"2251411520",
"2138969065"
],
"abstract": [
"Social media sites are now the most popular destination for Internet users, providing social scientists with a great opportunity to understand online behaviour. There are a growing number of research papers related to social media, a small number of which focus on personality prediction. To date, studies have typically focused on the Big Five traits of personality, but one area which is relatively unexplored is that of the anti-social traits of narcissism, Machiavellianism and psychopathy, commonly referred to as the Dark Triad. This study explored the extent to which it is possible to determine anti-social personality traits based on Twitter use. This was performed by comparing the Dark Triad and Big Five personality traits of 2,927 Twitter users with their profile attributes and use of language. Analysis shows that there are some statistically significant relationships between these variables. Through the use of crowd sourced machine learning algorithms, we show that machine learning provides useful prediction rates, but is imperfect in predicting an individual's Dark Triad traits from Twitter activity. While predictive models may be unsuitable for predicting an individual's personality, they may still be of practical importance when models are applied to large groups of people, such as gaining the ability to see whether anti-social traits are increasing or decreasing over a population. Our results raise important questions related to the unregulated use of social media analysis for screening purposes. It is important that the practical and ethical implications of drawing conclusions about personal information embedded in social media sites are better understood.",
"",
"The widespread use of deception in online sources has motivated the need for methods to automatically profile and identify deceivers. This work explores deception, gender and age detection in short texts using a machine learning approach. First, we collect a new open domain deception dataset also containing demographic data such as gender and age. Second, we extract feature sets including n-grams, shallow and deep syntactic features, semantic features, and syntactic complexity and readability metrics. Third, we build classifiers that aim to predict deception, gender, and age. Our findings show that while deception detection can be performed in short texts even in the absence of a predetermined domain, gender and age prediction in deceptive texts is a challenging task. We further explore the linguistic differences in deceptive content that relate to deceivers' gender and age and find evidence that both age and gender play an important role in people’s word choices when fabricating lies.",
"User contributions in the form of posts, comments, and votes are essential to the success of online communities. However, allowing user participation also invites undesirable behavior such as trolling. In this paper, we characterize antisocial behavior in three large online discussion communities by analyzing users who were banned from these communities. We find that such users tend to concentrate their efforts in a small number of threads, are more likely to post irrelevantly, and are more successful at garnering responses from other users. Studying the evolution of these users from the moment they join a community up to when they get banned, we find that not only do they write worse than other users over time, but they also become increasingly less tolerated by the community. Further, we discover that antisocial behavior is exacerbated when community feedback is overly harsh. Our analysis also reveals distinct groups of users with different levels of antisocial behavior that can change over time. We use these insights to identify antisocial users early on, a task of high practical importance to community maintainers."
]
} |
1710.09306 | 2765440119 | In this paper, we investigate the application of text classification methods to support law professionals. We present several experiments applying machine learning techniques to predict with high accuracy the ruling of the French Supreme Court and the law area to which a case belongs. We also investigate the influence of the time period in which a ruling was made on the form of the case description and the extent to which we need to mask information in a full case ruling to automatically obtain training and test data that resembles case descriptions. We developed a mean probability ensemble system combining the output of multiple SVM classifiers. We report results of a 98% average F1 score in predicting a case ruling, a 96% F1 score for predicting the law area of a case, and an 87.07% F1 score on estimating the date of a ruling. | @cite_6 proposed a system of classifying sentences for the task of summarizing court rulings and, with the use of SVM and Naive Bayes applied to Bag of Words, TF-IDF, and dense features (e.g. position of sentence in document), obtained a 65% F1 score. For court ruling prediction, the task closest to our present work, a few papers have been published: @cite_22 , using extremely randomized trees, reported 70% accuracy. As evidenced in this section, predicting court rulings is a new area for text classification methods, and our paper contributes in this direction, achieving performance substantially higher than in previous work @cite_14 . | {
"cite_N": [
"@cite_14",
"@cite_22",
"@cite_6"
],
"mid": [
"2744053072",
"2784604116",
"2120879767"
],
"abstract": [
"In this paper, we investigate the application of text classification methods to predict the law area and the decision of cases judged by the French Supreme Court. We also investigate the influence of the time period in which a ruling was made over the textual form of the case description and the extent to which it is necessary to mask the judge's motivation for a ruling to emulate a real-world test scenario. We report results of a 96% F1 score in predicting a case ruling, a 90% F1 score in predicting the law area of a case, and a 75.9% F1 score in estimating the time span when a ruling has been issued using a linear Support Vector Machine (SVM) classifier trained on lexical features.",
"Building upon developments in theoretical and applied machine learning, as well as the efforts of various scholars including Guimera and Sales-Pardo (2011), (2004), and (2004), we construct a model designed to predict the voting behavior of the Supreme Court of the United States. Using the extremely randomized tree method first proposed in Geurts, et al (2006), a method similar to the random forest approach developed in Breiman (2001), as well as novel feature engineering, we predict more than sixty years of decisions by the Supreme Court of the United States (1953-2013). Using only data available prior to the date of decision, our model correctly identifies 69.7% of the Court’s overall affirm/reverse decisions and correctly forecasts 70.9% of the votes of individual justices across 7,700 cases and more than 68,000 justice votes. Our performance is consistent with the general level of prediction offered by prior scholars. However, our model is distinctive as it is the first robust, generalized, and fully predictive model of Supreme Court voting behavior offered to date. Our model predicts six decades of behavior of thirty Justices appointed by thirteen Presidents. With a more sound methodological foundation, our results represent a major advance for the science of quantitative legal prediction and portend a range of other potential applications, such as those described in Katz (2013).",
"We describe research carried out as part of a text summarisation project for the legal domain for which we use a new XML corpus of judgments of the UK House of Lords. These judgments represent a particularly important part of public discourse due to the role that precedents play in English law. We present experimental results using a range of features and machine learning techniques for the task of predicting the rhetorical status of sentences and for the task of selecting the most summary-worthy sentences from a document. Results for these components are encouraging as they achieve state-of-the-art accuracy using robust, automatically generated cue phrase information. Sample output from the system illustrates the potential of summarisation technology for legal information management systems and highlights the utility of our rhetorical annotation scheme as a model of legal discourse, which provides a clear means for structuring summaries and tailoring them to different types of users."
]
} |
1710.09511 | 2766049792 | Humans are able to explain their reasoning. On the contrary, deep neural networks are not. This paper attempts to bridge this gap by introducing a new way to design interpretable neural networks for classification, inspired by physiological evidence of the human visual system's inner-workings. This paper proposes a neural network design paradigm, termed InterpNET, which can be combined with any existing classification architecture to generate natural language explanations of the classifications. The success of the module relies on the assumption that the network's computation and reasoning is represented in its internal layer activations. While in principle InterpNET could be applied to any existing classification architecture, it is evaluated via an image classification and explanation task. Experiments on a CUB bird classification and explanation dataset show qualitatively and quantitatively that the model is able to generate high-quality explanations. While the current state-of-the-art METEOR score on this dataset is 29.2, InterpNET achieves a much higher METEOR score of 37.9. | Many recent advances in machine learning have come from deep learning, which employs a model composed of multiple non-linear transformations and gradient-based training to fit the underlying parameters. For vision tasks, deep convolutional networks have achieved state-of-the-art in object detection @cite_15 , face detection @cite_8 and many others. For language understanding tasks, deep networks have also achieved state-of-the-art in machine translation @cite_5 , summarization @cite_14 and many others. At the intersection of vision and language there have been breakthrough results in captioning @cite_0 , visual question answering @cite_6 and many others. The most closely related work to this is on generating visual explanations @cite_3 . 
The authors propose a method for deep visual explanations which uses a standard captioning model but also incorporates a loss function which rewards class specificity. The experimental validation of InterpNET is largely based on the machinery they used for fine-grained bird classification. InterpNET, which is much simpler, in fact outperforms the method in @cite_3 on measures of both accuracy and class-specificity. | {
"cite_N": [
"@cite_14",
"@cite_8",
"@cite_6",
"@cite_3",
"@cite_0",
"@cite_5",
"@cite_15"
],
"mid": [
"1843891098",
"",
"2950761309",
"2949467366",
"",
"2133564696",
"2117539524"
],
"abstract": [
"Summarization based on text extraction is inherently limited, but generation-style abstractive methods have proven challenging to build. In this work, we propose a fully data-driven approach to abstractive sentence summarization. Our method utilizes a local attention-based model that generates each word of the summary conditioned on the input sentence. While the model is structurally simple, it can easily be trained end-to-end and scales to a large amount of training data. The model shows significant performance gains on the DUC-2004 shared task compared with several strong baselines.",
"",
"We propose the task of free-form and open-ended Visual Question Answering (VQA). Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context. As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and complex reasoning than a system producing generic image captions. Moreover, VQA is amenable to automatic evaluation, since many open-ended answers contain only a few words or a closed set of answers that can be provided in a multiple-choice format. We provide a dataset containing 0.25M images, 0.76M questions, and 10M answers (www.visualqa.org), and discuss the information it provides. Numerous baselines and methods for VQA are provided and compared with human performance. Our VQA demo is available on CloudCV (this http URL).",
"Clearly explaining a rationale for a classification decision to an end-user can be as important as the decision itself. Existing approaches for deep visual recognition are generally opaque and do not output any justification text; contemporary vision-language models can describe image content but fail to take into account class-discriminative image aspects which justify visual predictions. We propose a new model that focuses on the discriminating properties of the visible object, jointly predicts a class label, and explains why the predicted label is appropriate for the image. We propose a novel loss function based on sampling and reinforcement learning that learns to generate sentences that realize a global sentence property, such as class specificity. Our results on a fine-grained bird species classification dataset show that our model is able to generate explanations which are not only consistent with an image but also more discriminative than descriptions produced by existing captioning methods.",
"",
"Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consist of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.",
"The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements."
]
} |
1710.09506 | 2766363395 | Energy storage is a crucial component of the smart grid, since it provides the ability to buffer transient fluctuations of the energy supply from renewable sources. Even without a load, energy storage systems experience a reduction of the stored energy through self-discharge. In some storage technologies, the rate of self-discharge can exceed 50% of the stored energy per day. In this paper, we investigate the self-discharge phenomenon in energy storage using a queueing system model, which we refer to as leakage queue. When the average net charge is positive, we discover that the leakage queue operates in one of two regimes: a leakage-dominated regime and a capacity-dominated regime. We find that in the leakage-dominated regime, the stored energy stabilizes at a point that is below the storage capacity. Under suitable independence assumptions for energy supply and demand, the stored energy in this regime closely follows a normal distribution. We present two methods for computing probabilities of underflow and overflow at a leakage queue. The methods are validated in a numerical example where the energy supply resembles a wind energy source. | Energy storage plays a major role in many aspects of the smart grid, and, consequently, there is an extensive literature on its analysis. The electrical grid requires that power generation and demand load are continuously balanced. This becomes more involved with time-variable renewable energy sources and storage systems absorbing the variations from such sources. Smart grid approaches that take the perspective of a utility operator are concerned with placement, sizing, and control of energy storage systems with the goal of optimally balancing power @cite_4 @cite_24 @cite_25 , reducing power generation costs @cite_32 , or reducing operational costs @cite_5 .
Works in this area are frequently formulated as optimal control or optimization problems, with the objective of devising distributed algorithms that achieve a desired operating point. | {
"cite_N": [
"@cite_4",
"@cite_32",
"@cite_24",
"@cite_5",
"@cite_25"
],
"mid": [
"2015809171",
"2125820577",
"",
"2026301025",
"2964281127"
],
"abstract": [
"The high variability of renewable energy is a major obstacle toward its increased penetration. Energy storage can help reduce the power imbalance due to the mismatch between the available renewable power and the load. How much can storage reduce this power imbalance? How much storage is needed to achieve this reduction? This paper presents a simple analytic model that leads to some answers to these questions. Considering the multitimescale grid operation, we formulate the power imbalance problem for each timescale as an infinite horizon stochastic control problem and show that a greedy policy minimizes the average magnitude of the residual power imbalance. Observing from the wind power data that in shorter timescales the power imbalance can be modeled as an iid zero-mean Laplace distributed process, we obtain closed form expressions for the minimum cost and the stationary distribution of the stored power. We show that most of the reduction in the power imbalance can be achieved with relatively small storage capacity. In longer timescales, the correlation in the power imbalance cannot be ignored. As such, we relax the iid assumption to a weakly dependent stationary process and quantify the limit on the minimum cost for arbitrarily large storage capacity.",
"We formulate the optimal placement, sizing and control of storage devices in a power network to minimize generation costs with the intent of load shifting. We assume deterministic demand, a linearized DC approximated power flow model and a fixed available storage budget. Our main result proves that when the generation costs are convex and nondecreasing, there always exists an optimal storage capacity allocation that places zero storage at generation-only buses that connect to the rest of the network via single links. This holds regardless of the demand profiles, generation capacities, line-flow limits and characteristics of the storage technologies. Through a counterexample, we illustrate that this result is not generally true for generation buses with multiple connections. For specific network topologies, we also characterize the dependence of the optimal generation cost on the available storage budget, generation capacities and flow constraints.",
"",
"Electric energy storage devices are prime candidates for demand load management in the smart power grid. In this work, we address the optimal energy storage control problem from the side of the utility operator. The operator controller receives power demand requests with different power requirements and durations that are activated immediately. The controller has access to one energy storage device of finite capacity. The objective is to devise an energy storage control policy that minimizes long-term average grid operational cost. The cost is a convex function of instantaneous power demand that is satisfied from the grid, and it reflects the fact that each additional unit of power needed to serve demands is more expensive as the demand load increases. For the online dynamic control problem, we derive a threshold-based control policy that attempts to maintain balanced power consumption from the grid at all times, in the presence of continual generation and completion of demands. The policy adaptively performs charging or discharging of the storage device. The former increases power consumption from the grid and the latter satisfies part of the grid demand from the stored energy. We prove that the policy is asymptotically optimal as the storage capacity becomes large, and we numerically show that it performs very well even for finite capacity. The off-line problem over a finite time horizon that assumes a priori known power consumption to be satisfied at all times, is formulated and solved with Dynamic Programming. Finally, we show that the model, approach and structure of the optimal policy can be extended to also account for a renewable source that feeds the storage device.",
"Phase balancing is essential to safe power system operation. We consider a substation connected to multiple phases, each with single-phase loads, generation, and energy storage. A representative of the substation operates the system and aims to minimize the cost of all phases and to balance loads among phases. We first consider ideal energy storage with lossless charging and discharging, and propose both centralized and distributed real-time algorithms taking into account system uncertainty. The proposed algorithm does not require any system statistics and asymptotically achieves the minimum system cost with large energy storage. We then extend the algorithm to accommodate more realistic non-ideal energy storage that has imperfect charging and discharging. The performance of the proposed algorithm is evaluated through extensive simulation and compared with that of a benchmark greedy algorithm. Simulation shows that our algorithm leads to strong performance over a wide range of storage characteristics."
]
} |
1710.09506 | 2766363395 | Energy storage is a crucial component of the smart grid, since it provides the ability to buffer transient fluctuations of the energy supply from renewable sources. Even without a load, energy storage systems experience a reduction of the stored energy through self-discharge. In some storage technologies, the rate of self-discharge can exceed 50% of the stored energy per day. In this paper, we investigate the self-discharge phenomenon in energy storage using a queueing system model, which we refer to as leakage queue. When the average net charge is positive, we discover that the leakage queue operates in one of two regimes: a leakage-dominated regime and a capacity-dominated regime. We find that in the leakage-dominated regime, the stored energy stabilizes at a point that is below the storage capacity. Under suitable independence assumptions for energy supply and demand, the stored energy in this regime closely follows a normal distribution. We present two methods for computing probabilities of underflow and overflow at a leakage queue. The methods are validated in a numerical example where the energy supply resembles a wind energy source. | Demand side management @cite_15 takes the perspective of an energy user, and broadly refers to measures that encourage users to become more energy efficient. As one form of demand side management, demand response refers to methods for short-term reductions in energy consumption. By creating incentives for users, demand response seeks to match elastic demands with fluctuating renewable energy sources. In @cite_28 @cite_10 , demand response is formulated as a utility maximization problem where dynamic pricing incentivizes individual users to benefit the overall system. Studies on demand response apply a wide range of methods, from coordination between appliances @cite_26 and bounds on prediction errors @cite_29 to game-theoretic approaches @cite_33 . | {
"cite_N": [
"@cite_26",
"@cite_33",
"@cite_28",
"@cite_29",
"@cite_15",
"@cite_10"
],
"mid": [
"1992679131",
"2068060907",
"2042454807",
"2036001459",
"2026647281",
"2050495423"
],
"abstract": [
"We describe algorithmic enhancements to a decision-support tool that residential consumers can utilize to optimize their acquisition of electrical energy services. The decision-support tool optimizes energy services provision by enabling end users to first assign values to desired energy services, and then scheduling their available distributed energy resources (DER) to maximize net benefits. We chose particle swarm optimization (PSO) to solve the corresponding optimization problem because of its straightforward implementation and demonstrated ability to generate near-optimal schedules within manageable computation times. We improve the basic formulation of cooperative PSO by introducing stochastic repulsion among the particles. The improved DER schedules are then used to investigate the potential consumer value added by coordinated DER scheduling. This is computed by comparing the end-user costs obtained with the enhanced algorithm simultaneously scheduling all DER, against the costs when each DER schedule is solved separately. This comparison enables the end users to determine whether their mix of energy service needs, available DER and electricity tariff arrangements might warrant solving the more complex coordinated scheduling problem, or instead, decomposing the problem into multiple simpler optimizations.",
"Most of the existing demand-side management programs focus primarily on the interactions between a utility company and its customers users. In this paper, we present an autonomous and distributed demand-side energy management system among users that takes advantage of a two-way digital communication infrastructure which is envisioned in the future smart grid. We use game theory and formulate an energy consumption scheduling game, where the players are the users and their strategies are the daily schedules of their household appliances and loads. It is assumed that the utility company can adopt adequate pricing tariffs that differentiate the energy usage in time and level. We show that for a common scenario, with a single utility company serving multiple customers, the global optimal performance in terms of minimizing the energy costs is achieved at the Nash equilibrium of the formulated energy consumption scheduling game. The proposed distributed demand-side energy management strategy requires each user to simply apply its best response strategy to the current total load and tariffs in the power distribution system. The users can maintain privacy and do not need to reveal the details on their energy consumption schedules to other users. We also show that users will have the incentives to participate in the energy consumption scheduling game and subscribing to such services. Simulation results confirm that the proposed approach can reduce the peak-to-average ratio of the total energy demand, the total energy costs, as well as each user's individual daily electricity charges.",
"Demand side management will be a key component of future smart grid that can help reduce peak load and adapt elastic demand to fluctuating generations. In this paper, we consider households that operate different appliances including PHEVs and batteries and propose a demand response approach based on utility maximization. Each appliance provides a certain benefit depending on the pattern or volume of power it consumes. Each household wishes to optimally schedule its power consumption so as to maximize its individual net benefit subject to various consumption and power flow constraints. We show that there exist time-varying prices that can align individual optimality with social optimality, i.e., under such prices, when the households selfishly optimize their own benefits, they automatically also maximize the social welfare. The utility company can thus use dynamic pricing to coordinate demand responses to the benefit of the overall system. We propose a distributed algorithm for the utility company and the customers to jointly compute this optimal prices and demand schedules. Finally, we present simulation results that illustrate several interesting properties of the proposed scheme.",
"Demand response is crucial for the incorporation of renewable energy into the grid. In this paper, we focus on a particularly promising industry for demand response: data centers. We use simulations to show that, not only are data centers large loads, but they can provide as much (or possibly more) flexibility as large-scale storage if given the proper incentives. However, due to the market power most data centers maintain, it is difficult to design programs that are efficient for data center demand response. To that end, we propose that prediction-based pricing is an appealing market design, and show that it outperforms more traditional supply function bidding mechanisms in situations where market power is an issue. However, prediction-based pricing may be inefficient when predictions are inaccurate, and so we provide analytic, worst-case bounds on the impact of prediction error on the efficiency of prediction-based pricing. These bounds hold even when network constraints are considered, and highlight that prediction-based pricing is surprisingly robust to prediction error.",
"This paper mainly focuses on demand side management and demand response, including drivers and benefits, shiftable load scheduling methods and peak shaving techniques. Demand side management techniques found in literature are overviewed and a novel electricity demand control technique using real-time pricing is proposed. Currently users have no means to change their power consumption to benefit the whole system. The proposed method consists of modern system identification and control that would enable user side load control. This would potentially balance demand side with supply side more effectively and would also reduce peak demand and make the whole system more efficient.",
"In this paper, we consider a smart power infrastructure, where several subscribers share a common energy source. Each subscriber is equipped with an energy consumption controller (ECC) unit as part of its smart meter. Each smart meter is connected to not only the power grid but also a communication infrastructure such as a local area network. This allows two-way communication among smart meters. Considering the importance of energy pricing as an essential tool to develop efficient demand side management strategies, we propose a novel real-time pricing algorithm for the future smart grid. We focus on the interactions between the smart meters and the energy provider through the exchange of control messages which contain subscribers' energy consumption and the real-time price information. First, we analytically model the subscribers' preferences and their energy consumption patterns in form of carefully selected utility functions based on concepts from microeconomics. Second, we propose a distributed algorithm which automatically manages the interactions among the ECC units at the smart meters and the energy provider. The algorithm finds the optimal energy consumption levels for each subscriber to maximize the aggregate utility of all subscribers in the system in a fair and efficient fashion. Finally, we show that the energy provider can encourage some desirable consumption patterns among the subscribers by means of the proposed real-time pricing interactions. Simulation results confirm that the proposed distributed algorithm can potentially benefit both subscribers and the energy provider."
]
} |
1710.09506 | 2766363395 | Energy storage is a crucial component of the smart grid, since it provides the ability to buffer transient fluctuations of the energy supply from renewable sources. Even without a load, energy storage systems experience a reduction of the stored energy through self-discharge. In some storage technologies, the rate of self-discharge can exceed 50% of the stored energy per day. In this paper, we investigate the self-discharge phenomenon in energy storage using a queueing system model, which we refer to as leakage queue. When the average net charge is positive, we discover that the leakage queue operates in one of two regimes: a leakage-dominated regime and a capacity-dominated regime. We find that in the leakage-dominated regime, the stored energy stabilizes at a point that is below the storage capacity. Under suitable independence assumptions for energy supply and demand, the stored energy in this regime closely follows a normal distribution. We present two methods for computing probabilities of underflow and overflow at a leakage queue. The methods are validated in a numerical example where the energy supply resembles a wind energy source. | More recently, a fluid-flow interpretation of queueing theory, known as 'network calculus' @cite_11 , has been applied to energy storage systems. A deterministic analysis has been used in @cite_6 to devise battery charging schedules that prevent batteries from running empty. Stochastic extensions of the network calculus have been applied to analyze energy storage in the presence of random, generally Markovian, energy sources @cite_18 @cite_27 @cite_7 . In these works, the evolution of the stored energy is expressed using a time-dependent function for the backlog in a finite-capacity queueing system from @cite_23 .
Recent studies @cite_30 @cite_43 @cite_3 @cite_14 @cite_47 have improved the fidelity of energy storage models by considering factors such as limited charging and discharging rates, charging and discharging inefficiencies, as well as self-discharge. In @cite_43 , the self-discharge is modeled by a constant rate function, whereas the other works @cite_30 @cite_3 @cite_14 @cite_47 use a proportional leakage ratio as described in Sec. . Since queueing systems for energy storage systems with proportional self-discharge cannot be solved analytically, the existing analyses resort to simulation and optimization methods. These provide numerical solutions, but do not easily give insight into parameter regimes and basic tradeoffs. | {
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_14",
"@cite_7",
"@cite_6",
"@cite_3",
"@cite_43",
"@cite_27",
"@cite_23",
"@cite_47",
"@cite_11"
],
"mid": [
"2343332586",
"2083069380",
"2162334945",
"2949469986",
"1806809940",
"2253603123",
"2042618867",
"2129759371",
"8289417",
"2146738646",
""
],
"abstract": [
"Electric system operators rely on regulation services to match the total system supply to the total system load in quasi real-time. The regulation contractual framework requires that a regulation unit declares its regulation parameters at the beginning of the contract, the operator guarantees that the regulation signals will be within the range of these parameters, and the regulation unit is rewarded proportionally to what it declares and what it supplies. We study how this service can be provided by a unit with a non-ideal storage. We consider two broad classes of storage technologies characterized by different state of charge evolution equations, namely batteries and flywheels. We first focus on a single contract, and obtain formulas for the upward and downward regulation parameters that a unit with either a battery or a flywheel should declare to the operator to maximize its reward. We then focus on a multiple contract setting and show how to analytically quantify the reward that such a unit could obtain in successive contracts. We quantify this reward using bounds and expectation, and compare our analytical results with those obtained from a dataset of real-world regulation signals. Finally, we provide engineering insights by comparing different storage technologies in terms of potential rewards for different contract durations and parameters.",
"Although modern society is critically reliant on power grids, modern power grids are subject to unavoidable outages. The situation in developing countries is even worse, with frequent load shedding lasting several hours a day due to a large power supply-demand gap. A common solution for residences is, therefore, to back up grid power with local generation from a diesel generator (genset). To reduce carbon emissions, a hybrid battery-genset is preferable to a genset-only system. Designing such a hybrid system is complicated by the tradeoff between cost and carbon emission. Toward the analysis of such a hybrid system, we first compute the minimum battery size required for eliminating the use of a genset, while guaranteeing a target loss of power probability for an unreliable grid. We then compute the minimum required battery for a given genset and a target-allowable carbon footprint. Drawing on recent results, we model both problems as buffer sizing problems that can be addressed using stochastic network calculus. Specifically, a numerical study shows that, for a neighborhood of 100 homes, we are able to estimate the storage required for both the problems with a fairly small margin of error compared to the empirically computed optimal value.",
"Energy storage - in the form of UPS units - in a datacenter has been primarily used to fail-over to diesel generators upon power outages. There has been recent interest in using these Energy Storage Devices (ESDs) for demand-response (DR) to either shift peak demand away from high tariff periods, or to shave demand allowing aggressive under-provisioning of the power infrastructure. All such prior work has only considered a single specific type of ESD (typically re-chargeable lead-acid batteries), and has only employed them at a single level of the power delivery network. Continuing technological advances have provided us a plethora of competitive ESD options ranging from ultra-capacitors, to different kinds of batteries, flywheels and even compressed air-based storage. These ESDs offer very different trade-offs between their power and energy costs, densities, lifetimes, and energy efficiency, among other factors, suggesting that employing hybrid combinations of these may allow more effective DR than with a single technology. Furthermore, ESDs can be placed at different, and possibly multiple, levels of the power delivery hierarchy with different associated trade-offs. To our knowledge, no prior work has studied the extensive design space involving multiple ESD technology provisioning and placement options. This paper intends to fill this critical void, by presenting a theoretical framework for capturing important characteristics of different ESD technologies, the trade-offs of placing them at different levels of the power hierarchy, and quantifying the resulting cost-benefit trade-offs as a function of workload properties.",
"We consider the performance modeling and evaluation of network systems powered with renewable energy sources such as solar and wind energy. Such energy sources largely depend on environmental conditions, which are hard to predict accurately. As such, it may only make sense to require the network systems to support a soft quality of service (QoS) guarantee, i.e., to guarantee a service requirement with a certain high probability. In this paper, we intend to build a solid mathematical foundation to help better understand the stochastic energy constraint and the inherent correlation between QoS and the uncertain energy supply. We utilize a calculus approach to model the cumulative amount of charged energy and the cumulative amount of consumed energy. We derive upper and lower bounds on the remaining energy level based on a stochastic energy charging rate and a stochastic energy discharging rate. By building the bridge between energy consumption and task execution (i.e., service), we study the QoS guarantee under the constraint of uncertain energy sources. We further show how performance bounds can be improved if some strong assumptions can be made.",
"We consider an electricity consumer equipped with a perfect battery, who needs to satisfy a non-elastic load, subject to external control signals. The control imposes a time-varying upper-bound on the instantaneous energy consumption (this is called \"Demand-Response via quantity\"). The consumer defines a charging schedule for the battery. We say that a schedule is feasible if it successfully absorbs the effects of service reduction and achieves the satisfiability of the load (making use of the battery). Our contribution is twofold. (1) We provide explicit necessary and sufficient conditions for the load, the control, and the battery, which ensure the existence of a feasible battery charging schedule. Furthermore, we show that whenever a feasible schedule exists, we can explicitly define an online (causal) feasible schedule. (2) For a given arrival curve characterizing the load and a given service curve characterizing the control, we compute a sufficient battery size that ensures existence of an online feasible schedule. For an arrival curve determined from a real measured trace, we numerically characterize the sufficient battery size for various types of service curves.",
"The wide range of performance characteristics of storage technologies motivates the use of a hybrid energy storage system (HESS) that combines the best features of multiple technologies. However, HESS design is complex, in that it involves the choice of storage technologies, the sizing of each storage element, and deciding when to charge and discharge each underlying storage element ( operating strategy ). We formulate the problem of jointly optimizing the sizing and the operating strategy of an HESS that can be used for a large class of applications and storage technologies. Instead of a single set of storage element sizes, our approach determines the Pareto-optimal frontier of the sizes of the storage elements along with the corresponding optimal operating strategy. Thus, as long as the performance objective of a storage application (such as an off-grid microgrid) can be expressed as a linear combination of the underlying storage sizes, the optimal vector of storage sizes falls somewhere on this frontier. We present two case studies to illustrate our approach, demonstrating that a single storage technology is sometimes inadequate to meet application requirements, unlike an HESS designed using our approach. We also find simple, near-optimal, and practical operating strategies for these case studies, which allows us to gain several new engineering insights.",
"",
"Renewable energy such as solar and wind generation will constitute an important part of the future grid. As the availability of renewable sources may not match the load, energy storage is essential for grid stability. In this paper we investigate the feasibility of integrating solar photovoltaic (PV) panels and wind turbines into the grid by also accounting for energy storage. To deal with the fluctuation in both the power supply and demand, we extend and apply stochastic network calculus to analyze the power supply reliability with various renewable energy configurations. To illustrate the validity of the model, we conduct a case study for the integration of renewable energy sources into the power system of an island off the coast of Southern California. In particular, we asses the power supply reliability in terms of the average Fraction of Time that energy is Not-Served (FTNS).",
"",
"In an isolated power grid or a micro-grid with a small carbon footprint, the penetration of renewable energy is usually high. In such power grids, energy storage is important to guarantee an uninterrupted and stable power supply for end users. Different types of energy storage have different characteristics, including their round-trip efficiency, power and energy rating, self-discharge, and investment and maintenance costs. In addition, the load characteristics and availability of different types of renewable energy sources vary in different geographic regions and at different times of year. Therefore joint capacity optimization for multiple types of energy storage and generation is important when designing this type of power systems. In this paper, we formulate a cost minimization problem for storage and generation planning, considering both the initial investment cost and operational maintenance cost, and propose a distributed optimization framework to overcome the difficulty brought about by the large size of the optimization problem. The results will help in making decisions on energy storage and generation capacity planning in future decentralized power grids with high renewable penetrations.",
""
]
} |
1710.09323 | 2949973582 | This extended paper presents 1) a novel hierarchy and recursion extension to the process tree model; and 2) the first recursion-aware process model discovery technique that leverages hierarchical information in event logs, typically available for software systems. This technique allows us to analyze the operational processes of software systems under real-life conditions at multiple levels of granularity. The work can be positioned in-between reverse engineering and process mining. An implementation of the proposed approach is available as a ProM plugin. Experimental results based on real-life (software) event logs demonstrate the feasibility and usefulness of the approach and show the huge potential to speed up discovery by exploiting the available hierarchy. | In the area of dynamic analysis, the focus is on obtaining a rich but flat control flow model. A lot of effort has been put into enriching models with more accurate choice and loop information, guards, and other predicates. However, notions of recursion or preciseness of models, or the application of these models, e.g., for analysis, seem to be largely ignored. The few approaches that do touch upon performance or frequency analysis ( @cite_23 @cite_28 @cite_38 ) do so with models lacking formal semantics or model quality guarantees. | {
"cite_N": [
"@cite_28",
"@cite_38",
"@cite_23"
],
"mid": [
"",
"1565011018",
"2147793724"
],
"abstract": [
"",
"Process discovery algorithms typically aim at discovering process models from event logs that best describe the recorded behavior. Often, the quality of a process discovery algorithm is measured by quantifying to what extent the resulting model can reproduce the behavior in the log, i.e. replay fitness. At the same time, there are many other metrics that compare a model with recorded behavior in terms of the precision of the model and the extent to which the model generalizes the behavior in the log. Furthermore, several metrics exist to measure the complexity of a model irrespective of the log.",
"This paper presents an approach for recovering application-level views of the interaction behaviors between systems that communicate via networks. Rather than illustrating a single behavior, a sequence diagram is constructed that describes the characteristics of multiple combined behaviors. The approach has several properties that make it particularly suitable for analyzing heterogeneous systems. First, since the interactions are retrieved from observing the network communication, our technique can be applied to systems that are implemented in different languages and run on different platforms. Second, it does not require the availability or modification of source code. After the behaviors are extracted, we employ methods to merge multiple observed behaviors to a single sequence diagram that illustrates the overall behavior.The contributions of this paper are a technique for observing and processing the network communication to derive a model of the behavior. Furthermore, it describes a series of model transformations to construct a sequence diagram view of all observed behaviors."
]
} |
1710.09180 | 2765678594 | This work is an endeavor to develop a deep learning methodology for automated anatomical labeling of a given region of interest (ROI) in brain computed tomography (CT) scans. We combine both local and global context to obtain a representation of the ROI. We then use Relation Networks (RNs) to predict the corresponding anatomy of the ROI based on its relationship score for each class. Further, we propose a novel strategy employing a nearest neighbors approach for training RNs. We train RNs to learn the relationship of the target ROI with the joint representation of its nearest neighbors in each class instead of all data-points in each class. The proposed strategy leads to better training of RNs along with increased performance as compared to training the baseline RN network. | Relation Networks have been proposed and used for relational reasoning @cite_10 @cite_9 , where the deep learning model is required to extract relations between different objects for prediction. However, recently @cite_7 have used RNs for few-shot learning of multi-class classification tasks. Our approach of using RNs is, therefore, more similar to that proposed by @cite_7 . | {
"cite_N": [
"@cite_9",
"@cite_10",
"@cite_7"
],
"mid": [
"2624614404",
"2950033033",
"2745490399"
],
"abstract": [
"Relational reasoning is a central component of generally intelligent behavior, but has proven difficult for neural networks to learn. In this paper we describe how to use Relation Networks (RNs) as a simple plug-and-play module to solve problems that fundamentally hinge on relational reasoning. We tested RN-augmented networks on three tasks: visual question answering using a challenging dataset called CLEVR, on which we achieve state-of-the-art, super-human performance; text-based question answering using the bAbI suite of tasks; and complex reasoning about dynamic physical systems. Then, using a curated dataset called Sort-of-CLEVR we show that powerful convolutional networks do not have a general capacity to solve relational questions, but can gain this capacity when augmented with RNs. Our work shows how a deep learning architecture equipped with an RN module can implicitly discover and learn to reason about entities and their relations.",
"Our world can be succinctly and compactly described as structured scenes of objects and relations. A typical room, for example, contains salient objects such as tables, chairs and books, and these objects typically relate to each other by their underlying causes and semantics. This gives rise to correlated features, such as position, function and shape. Humans exploit knowledge of objects and their relations for learning a wide spectrum of tasks, and more generally when learning the structure underlying observed data. In this work, we introduce relation networks (RNs) - a general purpose neural network architecture for object-relation reasoning. We show that RNs are capable of learning object relations from scene description data. Furthermore, we show that RNs can act as a bottleneck that induces the factorization of objects from entangled scene description inputs, and from distributed deep representations of scene images provided by a variational autoencoder. The model can also be used in conjunction with differentiable memory mechanisms for implicit relation discovery in one-shot learning tasks. Our results suggest that relation networks are a potentially powerful architecture for solving a variety of problems that require object relation reasoning.",
"The ability to learn from a small number of examples has been a difficult problem in machine learning since its inception. While methods have succeeded with large amounts of training data, research has been underway in how to accomplish similar performance with fewer examples, known as one-shot or more generally few-shot learning. This technique has been shown to have promising performance, but in practice requires fixed-size inputs making it impractical for production systems where class sizes can vary. This impedes training and the final utility of few-shot learning systems. This paper describes an approach to constructing and training a network that can handle arbitrary example sizes dynamically as the system is used."
]
} |
1710.09280 | 2963530248 | Routing is a challenging problem for wireless ad hoc networks, especially when the nodes are mobile and spread so widely that in most cases multiple hops are needed to route a message from one node to another. In fact, it is known that any online routing protocol has a poor performance in the worst case, in a sense that there is a distribution of nodes resulting in bad routing paths for that protocol, even if the nodes know their geographic positions and the geographic position of the destination of a message is known. The reason for that is that radio holes in the ad hoc network may require messages to take long detours in order to get to a destination, which are hard to find in an online fashion. | To answer this question, we consider a hybrid communication model. Hybrid communication networks have been introduced in different contexts @cite_9 @cite_13 . To the best of our knowledge, we are the first to consider these types of networks for the purpose of finding paths in ad hoc networks. | {
"cite_N": [
"@cite_9",
"@cite_13"
],
"mid": [
"1991814281",
"1714740914"
],
"abstract": [
"In this article, some considerations are presented about the way several well-known industrial networks (based on both fieldbus and industrial Ethernet solutions) can be practically extended with wireless subnetworks that rely on popular technologies, such as IEEE 802.11 and 802.15.4. This results in hybrid networks, which are able to combine the advantages of both wired and wireless solutions. In particular, advantages and drawbacks of several interconnection techniques are highlighted and, depending on the wired networks specifically taken into account, some hybrid configurations that are able to cope in a satisfactory way with the tight timing requirements often imposed by industrial control systems are suggested.",
"The present paper outlines features of communications via fibre optic cable, satellite transponders, microwave links and an integrated hybrid system using a mix of the above media. Hybrid communications will play a major role in all transactions of various countries and particularly in the monitoring and control of power systems. The paper also outlines the telemetry systems and sensors which need to be integrated with power distribution systems for online information flow to a central command post for preventive and corrective actions."
]
} |
1710.08802 | 2765820397 | Model predictive control (MPC) is a computationally demanding control technique that allows dealing with multiple-input and multiple-output systems while handling constraints in a systematic way. The necessity of solving an optimization problem at every sampling instant often 1) limits the application scope to slow dynamical systems and or 2) results in expensive computational hardware implementations. Traditional MPC design is based on the manual tuning of software and computational hardware design parameters, which leads to suboptimal implementations. This brief proposes a framework for automating the MPC software and computational hardware codesign while achieving an optimal tradeoff between computational resource usage and controller performance. The proposed approach is based on using a biobjective optimization algorithm, namely BiMADS. Two test studies are considered: a central processing unit and field-programmable gate array implementations of fast gradient-based MPC. Numerical experiments show that the optimization-based design outperforms Latin hypercube sampling, a statistical sampling-based design exploration technique. | Model predictive controller design is a multidisciplinary problem that involves tuning several coupled design parameters. Traditionally MPC controllers were tuned manually, with a trial and error approach, which cannot be considered as a viable option for most present-day applications, considering the number of design parameters and design evaluation time @cite_29 . Moreover, manual tuning often requires understanding the nature of the controlled dynamical system and MPC controller with the underlying optimization solver. Available tuning guidelines for model predictive control, including heuristic and systematic (but not automatic) approaches, are reviewed in @cite_15 . Note that only high level optimal control problem parameters (e.g. 
horizon length, weights on states and inputs) are considered in the review paper, without regard to solving the underlying optimization problem. | {
"cite_N": [
"@cite_29",
"@cite_15"
],
"mid": [
"2556282465",
"2013610941"
],
"abstract": [
"Designers of industrial embedded control systems, such as automotive, aerospace, and medical-device control systems, use verification and testing activities to increase their confidence that performance requirements and safety standards are met. Since testing and verification tasks account for a significant portion of the development effort, increasing the efficiency of testing and verification will have a significant impact on the total development cost. Existing and emerging simulation-based approaches offer improved means of testing and, in some cases, verifying the correctness of control system designs.",
"This paper provides a review of the available tuning guidelines for model predictive control, from theoretical and practical perspectives. It covers both popular dynamic matrix control and generalized predictive control implementations, along with the more general state-space representation of model predictive control and other more specialized types, such as max-plus-linear model predictive control. Additionally, a section on state estimation and Kalman filtering is included along with auto (self) tuning. Tuning methods covered range from equations derived from simulation approximation of the process dynamics to bounds on the region of acceptable tuning parameter values."
]
} |
1710.08802 | 2765820397 | Model predictive control (MPC) is a computationally demanding control technique that allows dealing with multiple-input and multiple-output systems while handling constraints in a systematic way. The necessity of solving an optimization problem at every sampling instant often 1) limits the application scope to slow dynamical systems and or 2) results in expensive computational hardware implementations. Traditional MPC design is based on the manual tuning of software and computational hardware design parameters, which leads to suboptimal implementations. This brief proposes a framework for automating the MPC software and computational hardware codesign while achieving an optimal tradeoff between computational resource usage and controller performance. The proposed approach is based on using a biobjective optimization algorithm, namely BiMADS. Two test studies are considered: a central processing unit and field-programmable gate array implementations of fast gradient-based MPC. Numerical experiments show that the optimization-based design outperforms Latin hypercube sampling, a statistical sampling-based design exploration technique. | The paper is organised as follows. Following the introduction, the target computational platform is described in . An approach for formulating predictive controller design as an optimization problem is presented in . Possible ways of formalising design objectives and constraints are discussed within the section. Following that, reviews existing algorithms for solving the resulting optimization problem and justifies using the BiMADS algorithm @cite_25 for solving MPC design problems. Two case studies are considered in : design of CPU-based and FPGA-based implementations of a fast gradient-based predictive controller. concludes the paper. | {
"cite_N": [
"@cite_25"
],
"mid": [
"2083135504"
],
"abstract": [
"This work deals with bound constrained multiobjective optimization (MOP) of nonsmooth functions for problems where the structure of the objective functions either cannot be exploited, or are absent. Typical situations arise when the functions are computed as the result of a computer simulation. We first present definitions and optimality conditions as well as two families of single-objective formulations of MOP. Next, we propose a new algorithm called for the biobjective optimization (BOP) problem (i.e., MOP with two objective functions). The property that Pareto points may be ordered in BOP and not in MOP is exploited by our algorithm. generates an approximation of the Pareto front by solving a series of single-objective formulations of BOP. These single-objective problems are solved using the recent (mesh adaptive direct search) algorithm for nonsmooth optimization. The Pareto front approximation is shown to satisfy some first order necessary optimality conditions based on the Clarke calculus. Finally, is tested on problems from the literature designed to illustrate specific difficulties encountered in biobjective optimization, such as a nonconvex or disjoint Pareto front, local Pareto fronts, or a nonuniform Pareto front."
]
} |
1710.08843 | 2767043397 | This paper addresses the challenges encountered by developers when deploying a distributed decision-making behavior on heterogeneous robotic systems. Many applications benefit from the use of multiple robots, but their scalability and applicability are fundamentally limited if relying on a central control station. Getting beyond the centralized approach can increase the complexity of the embedded intelligence, the sensitivity to the network topology, and render the deployment on physical robots tedious and error-prone. By integrating the swarm-oriented programming language Buzz with the standard environment of ROS, this work demonstrates that behaviors requiring distributed consensus can be successfully deployed in practice. From simulation to the field, the behavioral script stays untouched and applicable to heterogeneous robot teams. We present the software structure of our solution as well as the swarm-oriented paradigms required from Buzz to implement a robust generic consensus strategy. We show the applicability of our solution with simulations and experiments with heterogeneous ground-and-air robotic teams. | Swarms of UAVs are challenging to implement, but their high potential to be robust, resilient and flexible @cite_20 motivates a number of robotics laboratories. For instance, the Ecole Polytechnique Federale de Lausanne Laboratory of Intelligent Systems @cite_21 introduced fixed-wing UAVs to demonstrate flocking @cite_22 with platform-specific programming. Flocking is one of the basic swarm behaviors that do not require formal consensus over the group @cite_15 . | {
"cite_N": [
"@cite_15",
"@cite_21",
"@cite_22",
"@cite_20"
],
"mid": [
"1060861436",
"",
"2150312211",
"2118331730"
],
"abstract": [
"Swarm intelligence principles have been widely studied and applied to a number of different tasks where a group of autonomous robots is used to solve a problem with a distributed approach, i.e. without central coordination. A survey of such tasks is presented, illustrating various algorithms that have been used to tackle the challenges imposed by each task. Aggregation, flocking, foraging, object clustering and sorting, navigation, path formation, deployment, collaborative manipulation and task allocation problems are described in detail, and a high-level overview is provided for other swarm robotics tasks. For each of the main tasks, (1) swarm design methods are identified, (2) past works are divided in task-specific categories, and (3) mathematical models and performance metrics are described. Consistently with the swarm intelligence paradigm, the main focus is on studies characterized by distributed control, simplicity of individual robots and locality of sensing and communication. Distributed algorithms are shown to bring cooperation between agents, obtained in various forms and often without explicitly programming a cooperative behavior in the single robot controllers. Offline and online learning approaches are described, and some examples of past works utilizing these approaches are reviewed.",
"",
"The aggregate motion of a flock of birds, a herd of land animals, or a school of fish is a beautiful and familiar part of the natural world. But this type of complex motion is rarely seen in computer animation. This paper explores an approach based on simulation as an alternative to scripting the paths of each bird individually. The simulated flock is an elaboration of a particle systems, with the simulated birds being the particles. The aggregate motion of the simulated flock is created by a distributed behavioral model much like that at work in a natural flock; the birds choose their own course. Each simulated bird is implemented as an independent actor that navigates according to its local perception of the dynamic environment, the laws of simulated physics that rule its motion, and a set of behaviors programmed into it by the \"animator.\" The aggregate motion of the simulated flock is the result of the dense interaction of the relatively simple behaviors of the individual simulated birds.",
"Swarm robotics is an approach to collective robotics that takes inspiration from the self-organized behaviors of social animals. Through simple rules and local interactions, swarm robotics aims at designing robust, scalable, and flexible collective behaviors for the coordination of large numbers of robots. In this paper, we analyze the literature from the point of view of swarm engineering: we focus mainly on ideas and concepts that contribute to the advancement of swarm robotics as an engineering field and that could be relevant to tackle real-world applications. Swarm engineering is an emerging discipline that aims at defining systematic and well founded procedures for modeling, designing, realizing, verifying, validating, operating, and maintaining a swarm robotics system. We propose two taxonomies: in the first taxonomy, we classify works that deal with design and analysis methods; in the second taxonomy, we classify works according to the collective behavior studied. We conclude with a discussion of the current limits of swarm robotics as an engineering discipline and with suggestions for future research directions."
]
} |
1710.08843 | 2767043397 | This paper address the challenges encountered by developers when deploying a distributed decision-making behavior on heterogeneous robotic systems. Many applications benefit from the use of multiple robots, but their scalability and applicability are fundamentally limited if relying on a central control station. Getting beyond the centralized approach can increase the complexity of the embedded intelligence, the sensitivity to the network topology, and render the deployment on physical robots tedious and error-prone. By integrating the swarm-oriented programming language Buzz with the standard environment of ROS, this work demonstrates that behaviors requiring distributed consensus can be successfully deployed in practice. From simulation to the field, the behavioral script stays untouched and applicable to heterogeneous robot teams. We present the software structure of our solution as well as the swarm-oriented paradigms required from Buzz to implement a robust generic consensus strategy. We show the applicability of our solution with simulations and experiments with heterogeneous ground-and-air robotic teams. | In an effort to standardize the swarm programming, Georgia Tech created the Robotarium, to test swarm behaviors remotely with desk robots @cite_16 . Their API is restricted to the specific custom robots of their system and do not include a generic consensus strategy. | {
"cite_N": [
"@cite_16"
],
"mid": [
"2521943001"
],
"abstract": [
"This paper describes the Robotarium -- a remotely accessible, multi-robot research facility. The impetus behind the Robotarium is that multi-robot testbeds constitute an integral and essential part of the multi-robot research cycle, yet they are expensive, complex, and time-consuming to develop, operate, and maintain. These resource constraints, in turn, limit access for large groups of researchers and students, which is what the Robotarium is remedying by providing users with remote access to a state-of-the-art multi-robot test facility. This paper details the design and operation of the Robotarium and discusses the considerations one must take when making complex hardware remotely accessible. In particular, safety must be built into the system already at the design phase without overly constraining what coordinated control programs users can upload and execute, which calls for minimally invasive safety routines with provable performance guarantees."
]
} |
1710.08758 | 2766778662 | Motivated by the prevalence of multi-layer network structures in biological and social systems, we investigate the problem of counting the number of occurrences of (small) subgraphs or motifs in multi-layer graphs in which each layer of the graph has useful structural properties. Making use of existing meta-theorems, we focus on the parameterised complexity of motif-counting problems, giving conditions on the layers of a graph that yield fixed-parameter tractable algorithms for motif-counting in the overall graph. We give a dichotomy showing that, under some restricting assumptions, either the problem of counting the number of motifs is fixed-parameter tractable, or the corresponding decision problem is already W[1]-hard. | There is a rich literature concerning the (parameterised) complexity of finding and counting specific small pattern graphs in a large host graph. Several of the problems introduced in the seminal paper by Flum and Grohe on parameterised counting complexity @cite_29 are of this form, and very recently Curticapean, Dell and Marx @cite_17 gave a dichotomy for the parameterised complexity of counting so-called , based on the structure of the motifs under consideration. | {
"cite_N": [
"@cite_29",
"@cite_17"
],
"mid": [
"2137874581",
"2610844084"
],
"abstract": [
"We develop a parameterized complexity theory for counting problems. As the basis of this theory, we introduce a hierarchy of parameterized counting complexity classes #W[t], for t >= 1, that corresponds to Downey and Fellows's W-hierarchy [R. G. Downey and M. R. Fellows, Parameterized Complexity, Springer-Verlag, New York, 1999] and we show that a few central W-completeness results for decision problems translate to #W-completeness results for the corresponding counting problems. Counting complexity gets interesting with problems whose decision version is tractable, but whose counting version is hard. Our main result states that counting cycles and paths of length k in both directed and undirected graphs, parameterized by k, is #W[1]-complete; hence, most likely, there is no f(k) · n^c algorithm for these counting problems for any computable function f: N -> N and constant c, even though there is a 2^{O(k)} · n^{2.376} algorithm for finding a cycle or path of length k [N. Alon, R. Yuster, and U. Zwick, J. ACM, 42 (1995), pp. 844--856].",
"We introduce graph motif parameters, a class of graph parameters that depend only on the frequencies of constant-size induced subgraphs. Classical works by Lovasz show that many interesting quantities have this form, including, for fixed graphs H, the number of H-copies (induced or not) in an input graph G, and the number of homomorphisms from H to G. We use the framework of graph motif parameters to obtain faster algorithms for counting subgraph copies of fixed graphs H in host graphs G. More precisely, for graphs H on k edges, we show how to count subgraph copies of H in time k^{O(k)} · n^{0.174k + o(k)} by a surprisingly simple algorithm. This improves upon previously known running times, such as O(n^{0.91k + c}) time for k-edge matchings or O(n^{0.46k + c}) time for k-cycles. Furthermore, we prove a general complexity dichotomy for evaluating graph motif parameters: Given a class C of such parameters, we consider the problem of evaluating f ∈ C on input graphs G, parameterized by the number of induced subgraphs that f depends upon. For every recursively enumerable class C, we prove the above problem to be either FPT or #W[1]-hard, with an explicit dichotomy criterion. This allows us to recover known dichotomies for counting subgraphs, induced subgraphs, and homomorphisms in a uniform and simplified way, together with improved lower bounds. Finally, we extend graph motif parameters to colored subgraphs and prove a complexity trichotomy: For vertex-colored graphs H and G, where H is from a fixed class of graphs, we want to count color-preserving H-copies in G. We show that this problem is either polynomial-time solvable or FPT or #W[1]-hard, and that the FPT cases indeed need FPT time under reasonable assumptions."
]
} |
1710.08729 | 2766903722 | In this work, we addressed the issue of applying a stochastic classifier and a local, fuzzy confusion matrix under the framework of multi-label classification. We proposed a novel solution to the problem of correcting label pairwise ensembles. The main step of the correction procedure is to compute classifier-specific competence and cross-competence measures, which estimate the error pattern of the underlying classifier. We considered two improvements of the method of obtaining confusion matrices. The first one is aimed at dealing with imbalanced labels. The other utilizes double labelled instances which are usually removed during the pairwise transformation. The proposed methods were evaluated using 29 benchmark datasets. In order to assess the efficiency of the introduced models, they were compared against 1 state-of-the-art approach and the correction scheme based on the original method of confusion matrix estimation. The comparison was performed using four different multi-label evaluation measures: macro and micro-averaged F1 loss, zero-one loss and Hamming loss. Additionally, we investigated relations between classification quality, which is expressed in terms of different quality criteria, and characteristics of multi-label datasets such as average imbalance ratio or label density. The experimental study reveals that the correction approaches significantly outperform the reference method only in terms of zero-one loss. | Multi-label classification algorithms can be broadly divided into two main groups: set transformation algorithms and algorithm adaptation approaches @cite_27 @cite_53 . Algorithm adaptation methods are based upon existing multi-class methods which are tailored to solve the multi-label classification problem directly.
Great examples of such methods are the multi-label back-propagation method for artificial neural networks @cite_32 , the multi-label KNN algorithm @cite_33 , the ML Hoeffding trees @cite_30 , and the Structured SVM approach @cite_19 . Another branch of research that falls under the algorithm adaptation framework adapts known deep learning algorithms to solve the multi-label task @cite_12 . | {
"cite_N": [
"@cite_30",
"@cite_33",
"@cite_53",
"@cite_32",
"@cite_19",
"@cite_27",
"@cite_12"
],
"mid": [
"2038624061",
"2052684427",
"1491576965",
"2119466907",
"2096748209",
"55768394",
"1567302070"
],
"abstract": [
"Many challenging real world problems involve multi-label data streams. Efficient methods exist for multi-label classification in non-streaming scenarios. However, learning in evolving streaming scenarios is more challenging, as classifiers must be able to deal with huge numbers of examples and to adapt to change using limited time and memory while being ready to predict at any point. This paper proposes a new experimental framework for learning and evaluating on multi-label data streams, and uses it to study the performance of various methods. From this study, we develop a multi-label Hoeffding tree with multi-label classifiers at the leaves. We show empirically that this method is well suited to this challenging task. Using our new framework, which allows us to generate realistic multi-label data streams with concept drift (as well as real data), we compare with a selection of baseline methods, as well as new learning methods from the literature, and show that our Hoeffding tree method achieves fast and more accurate performance.",
"Multi-label learning originated from the investigation of text categorization problem, where each document may belong to several predefined topics simultaneously. In multi-label learning, the training set is composed of instances each associated with a set of labels, and the task is to predict the label sets of unseen instances through analyzing training instances with known label sets. In this paper, a multi-label lazy learning approach named ML-KNN is presented, which is derived from the traditional K-nearest neighbor (KNN) algorithm. In detail, for each unseen instance, its K nearest neighbors in the training set are firstly identified. After that, based on statistical information gained from the label sets of these neighboring instances, i.e. the number of neighboring instances belonging to each possible class, maximum a posteriori (MAP) principle is utilized to determine the label set for the unseen instance. Experiments on three different real-world multi-label learning problems, i.e. Yeast gene functional analysis, natural scene classification and automatic web page categorization, show that ML-KNN achieves superior performance to some well-established multi-label learning algorithms.",
"Multi-label learning is quite a recent supervised learning paradigm. Owing to its capabilities to improve performance in problems where a pattern may have more than one associated class, it has attracted the attention of researchers, producing an increasing number of publications. This study presents an up-to-date overview about multi-label learning with the aim of sorting and describing the main approaches developed till now. The formal definition of the paradigm, the analysis of its impact on the literature, its main applications, works developed, pitfalls and guidelines, and ongoing research are presented. WIREs Data Mining Knowl Discov 2014, 4:411-444. doi: 10.1002 widm.1139",
"In multilabel learning, each instance in the training set is associated with a set of labels and the task is to output a label set whose size is unknown a priori for each unseen instance. In this paper, this problem is addressed in the way that a neural network algorithm named BP-MLL, i.e., backpropagation for multilabel learning, is proposed. It is derived from the popular backpropagation algorithm through employing a novel error function capturing the characteristics of multilabel learning, i.e., the labels belonging to an instance should be ranked higher than those not belonging to that instance. Applications to two real-world multilabel learning problems, i.e., functional genomics and text categorization, show that the performance of BP-MLL is superior to that of some well-established multilabel learning algorithms",
"Multilabel classification (ML) aims to assign a set of labels to an instance. This generalization of multiclass classification yields to the redefinition of loss functions and the learning tasks become harder. The objective of this paper is to gain insights into the relations of optimization aims and some of the most popular performance measures: subset (or 0 1), Hamming, and the example-based F-measure. To make a fair comparison, we implemented three ML learners for optimizing explicitly each one of these measures in a common framework. This can be done considering a subset of labels as a structured output. Then, we use structured output support vector machines tailored to optimize a given loss function. The paper includes an exhaustive experimental comparison. The conclusion is that in most cases, the optimization of the Hamming loss produces the best or competitive scores. This is a practical result since the Hamming loss can be minimized using a bunch of binary classifiers, one for each label separately, and therefore, it is a scalable and fast method to learn ML tasks. Additionally, we observe that in noise-free learning tasks optimizing the subset loss is the best option, but the differences are very small. We have also noticed that the biggest room for improvement can be found when the goal is to optimize an F-measure in noisy learning tasks.",
"A large body of research in supervised learning deals with the analysis of single-label data, where training examples are associated with a single label λ from a set of disjoint labels L. However, training examples in several application domains are often associated with a set of labels Y ⊆ L. Such data are called multi-label.",
"Convolutional Neural Network (CNN) has demonstrated promising performance in single-label image classification tasks. However, how CNN best copes with multi-label images still remains an open problem, mainly due to the complex underlying object layouts and insufficient multi-label training images. In this work, we propose a flexible deep CNN infrastructure, called Hypotheses-CNN-Pooling (HCP), where an arbitrary number of object segment hypotheses are taken as the inputs, then a shared CNN is connected with each hypothesis, and finally the CNN output results from different hypotheses are aggregated with max pooling to produce the ultimate multi-label predictions. Some unique characteristics of this flexible deep CNN infrastructure include: 1) no ground-truth bounding box information is required for training; 2) the whole HCP infrastructure is robust to possibly noisy and or redundant hypotheses; 3) the shared CNN is flexible and can be well pre-trained with a large-scale single-label image dataset, e.g., ImageNet; and 4) it may naturally output multi-label prediction results. Experimental results on Pascal VOC 2007 and VOC 2012 multi-label image datasets well demonstrate the superiority of the proposed HCP infrastructure over other state-of-the-arts. In particular, the mAP reaches 90.5 by HCP only and 93.2 after the fusion with our complementary result in [12] based on hand-crafted features on the VOC 2012 dataset."
]
} |
1710.08729 | 2766903722 | In this work, we addressed the issue of applying a stochastic classifier and a local, fuzzy confusion matrix under the framework of multi-label classification. We proposed a novel solution to the problem of correcting label pairwise ensembles. The main step of the correction procedure is to compute classifier-specific competence and cross-competence measures, which estimate the error pattern of the underlying classifier. We considered two improvements of the method of obtaining confusion matrices. The first one is aimed at dealing with imbalanced labels. The other utilizes double-labelled instances, which are usually removed during the pairwise transformation. The proposed methods were evaluated using 29 benchmark datasets. In order to assess the efficiency of the introduced models, they were compared against one state-of-the-art approach and the correction scheme based on the original method of confusion matrix estimation. The comparison was performed using four different multi-label evaluation measures: macro and micro-averaged F1 loss, zero-one loss and Hamming loss. Additionally, we investigated the relations between classification quality, which is expressed in terms of different quality criteria, and characteristics of multi-label datasets such as average imbalance ratio or label density. The experimental study reveals that the correction approaches significantly outperform the reference method only in terms of zero-one loss. | On the other hand, methods from the former group transform the original multi-label problem into a set of single-label classification problems and then combine their outputs into a multi-label prediction. The simplest and most intuitive method from this group is the binary relevance approach (also known as the one-vs-rest approach), which decomposes multi-label classification into a set of binary classification problems. The method assigns a one-vs-rest classifier to each label.
Although this method offers great scalability, which is a desired property in domains with a high number of labels @cite_56 , it also makes the unrealistic assumption that labels are independent. As a consequence, the approach offers acceptable classification quality; however, it can easily be outperformed by algorithms that consider dependencies between labels @cite_22 @cite_10 . | {
"cite_N": [
"@cite_10",
"@cite_22",
"@cite_56"
],
"mid": [
"1600005011",
"1524416683",
""
],
"abstract": [
"In multi-label classification, each example can be associated with multiple labels simultaneously. The task of learning from multilabel data can be addressed by methods that transform the multi-label classification problem into several single-label classification problems. The binary relevance approach is one of these methods, where the multilabel learning task is decomposed into several independent binary classification problems, one for each label in the set of labels, and the final labels for each example are determined by aggregating the predictions from all binary classifiers. However, this approach fails to consider any dependency among the labels. In this paper, we consider a simple approach which can be used to explore labels dependency aiming to accurately predict label combinations. An experimental study using decision trees, a kernel method as well as Naive Bayes as base-learning techniques shows the potential of the proposed approach to improve the multi-label classification performance.",
"The widely known binary relevance method for multi-label classification, which considers each label as an independent binary problem, has been sidelined in the literature due to the perceived inadequacy of its label-independence assumption. Instead, most current methods invest considerable complexity to model interdependencies between labels. This paper shows that binary relevance-based methods have much to offer, especially in terms of scalability to large datasets. We exemplify this with a novel chaining method that can model label correlations while maintaining acceptable computational complexity. Empirical evaluation over a broad range of multi-label datasets with a variety of evaluation metrics demonstrates the competitiveness of our chaining method against related and state-of-the-art methods, both in terms of predictive performance and time complexity.",
""
]
} |
1710.08729 | 2766903722 | In this work, we addressed the issue of applying a stochastic classifier and a local, fuzzy confusion matrix under the framework of multi-label classification. We proposed a novel solution to the problem of correcting label pairwise ensembles. The main step of the correction procedure is to compute classifier-specific competence and cross-competence measures, which estimate the error pattern of the underlying classifier. We considered two improvements of the method of obtaining confusion matrices. The first one is aimed at dealing with imbalanced labels. The other utilizes double-labelled instances, which are usually removed during the pairwise transformation. The proposed methods were evaluated using 29 benchmark datasets. In order to assess the efficiency of the introduced models, they were compared against one state-of-the-art approach and the correction scheme based on the original method of confusion matrix estimation. The comparison was performed using four different multi-label evaluation measures: macro and micro-averaged F1 loss, zero-one loss and Hamming loss. Additionally, we investigated the relations between classification quality, which is expressed in terms of different quality criteria, and characteristics of multi-label datasets such as average imbalance ratio or label density. The experimental study reveals that the correction approaches significantly outperform the reference method only in terms of zero-one loss. | Another technique for decomposing the multi-label classification task into a set of binary classifiers is the pairwise (one-vs-one) scheme. Under this framework, a binary classifier is trained for each pair of labels, and its task is to separate the two labels in the pair. The outcome of the classifier is interpreted as an expression of pairwise preference in a label ranking @cite_13 . In other words, the classification outcome shows which label is preferred within the pair.
Finally, the outputs of the binary models are collected and a final ranking is formed using a chosen merging procedure @cite_1 . To convert the ranking into a binary response, a thresholding procedure must be employed @cite_13 . | {
"cite_N": [
"@cite_1",
"@cite_13"
],
"mid": [
"2102705755",
"1990016442"
],
"abstract": [
"Preference learning is an emerging topic that appears in different guises in the recent literature. This work focuses on a particular learning scenario called label ranking, where the problem is to learn a mapping from instances to rankings over a finite number of labels. Our approach for learning such a mapping, called ranking by pairwise comparison (RPC), first induces a binary preference relation from suitable training data using a natural extension of pairwise classification. A ranking is then derived from the preference relation thus obtained by means of a ranking procedure, whereby different ranking methods can be used for minimizing different loss functions. In particular, we show that a simple (weighted) voting strategy minimizes risk with respect to the well-known Spearman rank correlation. We compare RPC to existing label ranking methods, which are based on scoring individual labels instead of comparing pairs of labels. Both empirically and theoretically, it is shown that RPC is superior in terms of computational efficiency, and at least competitive in terms of accuracy.",
"We study the problem of label ranking, a machine learning task that consists of inducing a mapping from instances to rankings over a finite number of labels. Our learning method, referred to as ranking by pairwise comparison (RPC), first induces pairwise order relations (preferences) from suitable training data, using a natural extension of so-called pairwise classification. A ranking is then derived from a set of such relations by means of a ranking procedure. In this paper, we first elaborate on a key advantage of such a decomposition, namely the fact that it allows the learner to adapt to different loss functions without re-training, by using different ranking procedures on the same predicted order relations. In this regard, we distinguish between two types of errors, called, respectively, ranking error and position error. Focusing on the position error, which has received less attention so far, we then propose a ranking procedure called ranking through iterated choice as well as an efficient pairwise implementation thereof. Apart from a theoretical justification of this procedure, we offer empirical evidence in favor of its superior performance as a risk minimizer for the position error."
]
} |
1710.08377 | 2765507192 | We study transfer learning in convolutional network architectures applied to the task of recognizing audio, such as environmental sound events and speech commands. Our key finding is that not only is it possible to transfer representations from an unrelated task like environmental sound classification to a voice-focused task like speech command recognition, but also that doing so improves accuracies significantly. We also investigate the effect of increased model capacity for transfer learning on audio, by first validating known results from the field of Computer Vision of achieving better accuracies with increasingly deeper networks on two audio datasets: UrbanSound8k and the newly released Google Speech Commands dataset. Then we propose a simple multiscale input representation using dilated convolutions and show that it is able to aggregate larger contexts and increase classification performance. Further, the models trained using a combination of transfer learning and multiscale input representations need only 40% of the training data to achieve similar accuracies as a freshly trained model with 100% of the training data. Finally, we demonstrate a positive interaction effect for the multiscale input and transfer learning, making a case for the joint application of the two techniques. | Recently, @cite_4 and @cite_12 show that Convolutional Neural Networks outperform the traditional methods. Despite this success, neither of these approaches has investigated extremely deep networks (100+ layers) on audio data, one of the goals of this paper. Relatedly, automatic tagging of music has seen several convolutional networks @cite_13 @cite_10 @cite_20 , but the networks have been relatively small compared to the ones we investigate in this paper.
In contrast, domains of audio classification have not seen the systematic application of the increasingly deeper convolutional network architectures that have immensely advanced Computer Vision @cite_3 @cite_8 @cite_22 . | {
"cite_N": [
"@cite_4",
"@cite_22",
"@cite_8",
"@cite_10",
"@cite_3",
"@cite_13",
"@cite_12",
"@cite_20"
],
"mid": [
"2130640900",
"2511730936",
"2949650786",
"2414894569",
"2108598243",
"2059652044",
"",
"2592168896"
],
"abstract": [
"The paper considers the task of recognizing environmental sounds for the understanding of a scene or context surrounding an audio sensor. A variety of features have been proposed for audio recognition, including the popular Mel-frequency cepstral coefficients (MFCCs) which describe the audio spectral shape. Environmental sounds, such as chirpings of insects and sounds of rain which are typically noise-like with a broad flat spectrum, may include strong temporal domain signatures. However, only few temporal-domain features have been developed to characterize such diverse audio signals previously. Here, we perform an empirical feature analysis for audio environment characterization and propose to use the matching pursuit (MP) algorithm to obtain effective time-frequency features. The MP-based method utilizes a dictionary of atoms for feature selection, resulting in a flexible, intuitive and physically interpretable set of features. The MP-based feature is adopted to supplement the MFCC features to yield higher recognition accuracy for environmental sounds. Extensive experiments are conducted to demonstrate the effectiveness of these joint features for unstructured environmental sound classification, including listening tests to study human recognition capabilities. Our recognition system has shown to produce comparable performance as human listeners.",
"Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections - one between each layer and its subsequent layer - our network has L(L+1) 2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less computation to achieve high performance. Code and pre-trained models are available at this https URL .",
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.",
"We present a content-based automatic music tagging algorithm using fully convolutional neural networks (FCNs). We evaluate different architectures consisting of 2D convolutional layers and subsampling layers only. In the experiments, we measure the AUC-ROC scores of the architectures with different complexities and input types using the MagnaTagATune dataset, where a 4-layer architecture shows state-of-the-art performance with mel-spectrogram input. Furthermore, we evaluated the performances of the architectures with varying the number of layers on a larger dataset (Million Song Dataset), and found that deeper models outperformed the 4-layer architecture. The experiments show that mel-spectrogram is an effective time-frequency representation for automatic tagging and that more complex models benefit from more training data.",
"The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.",
"Content-based music information retrieval tasks have traditionally been solved using engineered features and shallow processing architectures. In recent years, there has been increasing interest in using feature learning and deep architectures instead, thus reducing the required engineering effort and the need for prior knowledge. However, this new approach typically still relies on mid-level representations of music audio, e.g. spectrograms, instead of raw audio signals. In this paper, we investigate whether it is possible to apply feature learning directly to raw audio signals. We train convolutional neural networks using both approaches and compare their performance on an automatic tagging task. Although they do not outperform a spectrogram-based approach, the networks are able to autonomously discover frequency decompositions from raw audio, as well as phase-and translation-invariant feature representations.",
"",
"Music auto-tagging is often handled in a similar manner to image classification by regarding the two-dimensional audio spectrogram as image data. However, music auto-tagging is distinguished from image classification in that the tags are highly diverse and have different levels of abstraction. Considering this issue, we propose a convolutional neural networks (CNN)-based architecture that embraces multi-level and multi-scaled features. The architecture is trained in three steps. First, we conduct supervised feature learning to capture local audio features using a set of CNNs with different input sizes. Second, we extract audio features from each layer of the pretrained convolutional networks separately and aggregate them altogether giving a long audio clip. Finally, we put them into fully connected networks and make final predictions of the tags. Our experiments show that using the combination of multi-level and multi-scale features is highly effective in music auto-tagging and the proposed method outperforms the previous state-of-the-art methods on the MagnaTagATune dataset and the Million Song Dataset. We further show that the proposed architecture is useful in transfer learning."
]
} |
1710.08377 | 2765507192 | We study transfer learning in convolutional network architectures applied to the task of recognizing audio, such as environmental sound events and speech commands. Our key finding is that not only is it possible to transfer representations from an unrelated task like environmental sound classification to a voice-focused task like speech command recognition, but also that doing so improves accuracies significantly. We also investigate the effect of increased model capacity for transfer learning on audio, by first validating known results from the field of Computer Vision of achieving better accuracies with increasingly deeper networks on two audio datasets: UrbanSound8k and the newly released Google Speech Commands dataset. Then we propose a simple multiscale input representation using dilated convolutions and show that it is able to aggregate larger contexts and increase classification performance. Further, the models trained using a combination of transfer learning and multiscale input representations need only 40% of the training data to achieve similar accuracies as a freshly trained model with 100% of the training data. Finally, we demonstrate a positive interaction effect for the multiscale input and transfer learning, making a case for the joint application of the two techniques. | For audio classification, however, only recently did hershey17 ( hershey17 ) apply a 50-layer Residual Network (also called ResNet) @cite_8 and a 48-layer Inception-V3 @cite_21 network to classify the soundtracks of videos. We extend the audio classification task to models deeper than 100 layers. Our largest network is 169 layers deep, and we were able to train it on a single NVIDIA Titan X GPU in 20 minutes on the UrbanSound8K dataset (about 8 hours of training data), without needing any specialized large-scale training infrastructure.
"cite_N": [
"@cite_21",
"@cite_8"
],
"mid": [
"2949605076",
"2949650786"
],
"abstract": [
"Convolutional networks are at the core of most state-of-the-art computer vision solutions for a wide variety of tasks. Since 2014 very deep convolutional networks started to become mainstream, yielding substantial gains in various benchmarks. Although increased model size and computational cost tend to translate to immediate quality gains for most tasks (as long as enough labeled data is provided for training), computational efficiency and low parameter count are still enabling factors for various use cases such as mobile vision and big-data scenarios. Here we explore ways to scale up networks in ways that aim at utilizing the added computation as efficiently as possible by suitably factorized convolutions and aggressive regularization. We benchmark our methods on the ILSVRC 2012 classification challenge validation set demonstrate substantial gains over the state of the art: 21.2 top-1 and 5.6 top-5 error for single frame evaluation using a network with a computational cost of 5 billion multiply-adds per inference and with using less than 25 million parameters. With an ensemble of 4 models and multi-crop evaluation, we report 3.5 top-5 error on the validation set (3.6 error on the test set) and 17.3 top-1 error on the validation set.",
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation."
]
} |
1710.08377 | 2765507192 | We study transfer learning in convolutional network architectures applied to the task of recognizing audio, such as environmental sound events and speech commands. Our key finding is that not only is it possible to transfer representations from an unrelated task like environmental sound classification to a voice-focused task like speech command recognition, but also that doing so improves accuracies significantly. We also investigate the effect of increased model capacity for transfer learning on audio, by first validating known results from the field of Computer Vision of achieving better accuracies with increasingly deeper networks on two audio datasets: UrbanSound8k and the newly released Google Speech Commands dataset. Then we propose a simple multiscale input representation using dilated convolutions and show that it is able to aggregate larger contexts and increase classification performance. Further, the models trained using a combination of transfer learning and multiscale input representations need only 40% of the training data to achieve similar accuracies as a freshly trained model with 100% of the training data. Finally, we demonstrate a positive interaction effect for the multiscale input and transfer learning, making a case for the joint application of the two techniques. | Incorporating information from multiple scales is a challenge for convolutional networks, but recently dilated convolutions have shown efficacy in doing so for image classification tasks @cite_5 . Dilations were successfully used by oord16 ( oord16 ) for a text-to-speech task, where the dilated convolution layers are applied hierarchically as a generative model of audio waveforms. Previous works using multiscale spectrograms @cite_13 @cite_10 @cite_20 do not study the effect of multiscale convolutions on spectrogram features.
To the best of our knowledge, this is the first work to systematically study the effect of multiple scales of dilated convolutions for audio classification. | {
"cite_N": [
"@cite_5",
"@cite_13",
"@cite_20",
"@cite_10"
],
"mid": [
"2286929393",
"2059652044",
"2592168896",
"2414894569"
],
"abstract": [
"State-of-the-art models for semantic segmentation are based on adaptations of convolutional networks that had originally been designed for image classification. However, dense prediction and image classification are structurally different. In this work, we develop a new convolutional network module that is specifically designed for dense prediction. The presented module uses dilated convolutions to systematically aggregate multi-scale contextual information without losing resolution. The architecture is based on the fact that dilated convolutions support exponential expansion of the receptive field without loss of resolution or coverage. We show that the presented context module increases the accuracy of state-of-the-art semantic segmentation systems. In addition, we examine the adaptation of image classification networks to dense prediction and show that simplifying the adapted network can increase accuracy.",
"Content-based music information retrieval tasks have traditionally been solved using engineered features and shallow processing architectures. In recent years, there has been increasing interest in using feature learning and deep architectures instead, thus reducing the required engineering effort and the need for prior knowledge. However, this new approach typically still relies on mid-level representations of music audio, e.g. spectrograms, instead of raw audio signals. In this paper, we investigate whether it is possible to apply feature learning directly to raw audio signals. We train convolutional neural networks using both approaches and compare their performance on an automatic tagging task. Although they do not outperform a spectrogram-based approach, the networks are able to autonomously discover frequency decompositions from raw audio, as well as phase-and translation-invariant feature representations.",
"Music auto-tagging is often handled in a similar manner to image classification by regarding the two-dimensional audio spectrogram as image data. However, music auto-tagging is distinguished from image classification in that the tags are highly diverse and have different levels of abstraction. Considering this issue, we propose a convolutional neural networks (CNN)-based architecture that embraces multi-level and multi-scaled features. The architecture is trained in three steps. First, we conduct supervised feature learning to capture local audio features using a set of CNNs with different input sizes. Second, we extract audio features from each layer of the pretrained convolutional networks separately and aggregate them altogether giving a long audio clip. Finally, we put them into fully connected networks and make final predictions of the tags. Our experiments show that using the combination of multi-level and multi-scale features is highly effective in music auto-tagging and the proposed method outperforms the previous state-of-the-art methods on the MagnaTagATune dataset and the Million Song Dataset. We further show that the proposed architecture is useful in transfer learning.",
"We present a content-based automatic music tagging algorithm using fully convolutional neural networks (FCNs). We evaluate different architectures consisting of 2D convolutional layers and subsampling layers only. In the experiments, we measure the AUC-ROC scores of the architectures with different complexities and input types using the MagnaTagATune dataset, where a 4-layer architecture shows state-of-the-art performance with mel-spectrogram input. Furthermore, we evaluated the performances of the architectures with varying the number of layers on a larger dataset (Million Song Dataset), and found that deeper models outperformed the 4-layer architecture. The experiments show that mel-spectrogram is an effective time-frequency representation for automatic tagging and that more complex models benefit from more training data."
]
} |
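The abstracts in the row above contrast spectrogram inputs with feature learning directly on raw audio; both approaches rest on the same convolution primitive sliding a learned filter over the signal. A minimal NumPy sketch of that primitive, using a toy signal and a hand-picked low-pass filter (nothing here is any of the cited models; signal, kernel, and stride are illustrative assumptions):

```python
import numpy as np

def conv1d(signal, kernel, stride=1):
    """Valid 1D convolution: slide `kernel` over `signal` with the given stride."""
    n = (len(signal) - len(kernel)) // stride + 1
    return np.array([np.dot(signal[i * stride:i * stride + len(kernel)], kernel)
                     for i in range(n)])

# A toy "learned" filter: a 4-sample moving average (a low-pass filter),
# the kind of frequency decomposition raw-audio networks can discover.
signal = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
kernel = np.full(4, 0.25)
features = conv1d(signal, kernel)
print(features)  # [2.5 3.5 4.5 5.5 6.5]
```

In the cited networks the kernels are learned from data and stacked with subsampling layers; this sketch only shows the single feature-map computation they repeat at scale.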
1710.08377 | 2765507192 | We study transfer learning in convolutional network architectures applied to the task of recognizing audio, such as environmental sound events and speech commands. Our key finding is that not only is it possible to transfer representations from an unrelated task like environmental sound classification to a voice-focused task like speech command recognition, but also that doing so improves accuracies significantly. We also investigate the effect of increased model capacity for transfer learning audio, by first validating known results from the field of Computer Vision of achieving better accuracies with increasingly deeper networks on two audio datasets: UrbanSound8k and the newly released Google Speech Commands dataset. Then we propose a simple multiscale input representation using dilated convolutions and show that it is able to aggregate larger contexts and increase classification performance. Further, the models trained using a combination of transfer learning and multiscale input representations need only 40% of the training data to achieve similar accuracies as a freshly trained model with 100% of the training data. Finally, we demonstrate a positive interaction effect for the multiscale input and transfer learning, making a case for the joint application of the two techniques. | A prominent use of convolutional neural networks in Computer Vision is to utilize transfer learning to classify new image categories @cite_0 . We believe this work is the first to investigate transfer learning for deep neural networks with audio inputs and show success on a completely different audio classification task (speech commands vs. environmental sounds). | {
"cite_N": [
"@cite_0"
],
"mid": [
"2952186574"
],
"abstract": [
"Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we address both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. We also perform an ablation study to discover the performance contribution from different model layers. This enables us to find model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets."
]
} |
1710.08528 | 2766324716 | The emergence of social media as news sources has led to the rise of clickbait posts attempting to attract users to click on article links without informing them on the actual article content. This paper presents our efforts to create a clickbait detector inspired by fake news detection algorithms, and our submission to the Clickbait Challenge 2017. The detector is based almost exclusively on text-based features taken from previous work on clickbait detection, our own work on fake post detection, and features we designed specifically for the challenge. We use a two-level classification approach, combining the outputs of 65 first-level classifiers in a second-level feature vector. We present our exploratory results with individual features and their combinations, taken from the post text and the target article title, as well as feature selection. While our own blind tests with the dataset led to an F-score of 0.63, our final evaluation in the Challenge only achieved an F-score of 0.43. We explore the possible causes of this, and lay out potential future steps to achieve more successful results. | The problem of clickbait posts is relatively recent, yet active research is already developing around it. One of the first publications on the subject @cite_4 proposed a set of potential features that could be used for the task, without providing a quantitative analysis of their potential. The proposed features included lexical and semantic features in order to distinguish between high- vs low-quality text by analyzing their stylometry, and syntactic and pragmatic features to measure the emotional impact of headlines. Besides textual features, they also proposed image and user behavior analysis in order to extract information from the context of the post. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2248267741"
],
"abstract": [
"Tabloid journalism is often criticized for its propensity for exaggeration, sensationalization, scare-mongering, and otherwise producing misleading and low quality news. As the news has moved online, a new form of tabloidization has emerged: “clickbaiting.” “Clickbait” refers to “content whose main purpose is to attract attention and encourage visitors to click on a link to a particular web page” [“clickbait,” n.d.] and has been implicated in the rapid spread of rumor and misinformation online. This paper examines potential methods for the automatic detection of clickbait as a form of deception. Methods for recognizing both textual and non-textual clickbaiting cues are surveyed, leading to the suggestion that a hybrid approach may yield best results."
]
} |
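The clickbait row above describes a two-level scheme: the outputs of many first-level, feature-specific classifiers are combined into a second-level feature vector. A toy NumPy sketch of that stacking idea, with three invented cue-classifiers standing in for the paper's 65 (the posts, thresholds, and cues are all illustrative assumptions, not the challenge features):

```python
import numpy as np

# Toy "posts": feature vector = (exclamation_count, caps_ratio, length)
posts = np.array([
    [3, 0.8, 40],    # shouty and short
    [0, 0.1, 200],   # calm and long
    [5, 0.9, 30],
    [1, 0.05, 180],
])

# First-level classifiers: each scores one hand-crafted cue
# (hypothetical stand-ins for the paper's 65 first-level classifiers).
first_level = [
    lambda x: float(x[0] >= 2),    # many exclamation marks
    lambda x: float(x[1] >= 0.5),  # mostly capital letters
    lambda x: float(x[2] <= 60),   # suspiciously short text
]

# Second level: first-level outputs become a feature vector, here
# combined by a simple majority vote (the paper trains a classifier).
def second_level(x):
    votes = np.array([clf(x) for clf in first_level])
    return int(votes.mean() > 0.5)

preds = np.array([second_level(x) for x in posts])
print(preds)  # [1 0 1 0]
```

A real stacked setup would replace the majority vote with a trained second-level model over the 65-dimensional vector of first-level outputs; the data flow is the same.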