aid stringlengths 9 15 | mid stringlengths 7 10 | abstract stringlengths 78 2.56k | related_work stringlengths 92 1.77k | ref_abstract dict |
|---|---|---|---|---|
1612.00212 | 2560722793 | Fully convolutional neural networks give accurate, per-pixel prediction for input images and have applications like semantic segmentation. However, a typical FCN usually requires lots of floating point computation and large run-time memory, which effectively limits its usability. We propose a method to train Bit Fully Convolution Network (BFCN), a fully convolutional neural network that has low bit-width weights and activations. Because most of its computation-intensive convolutions are accomplished between low bit-width numbers, a BFCN can be accelerated by an efficient bit-convolution implementation. On CPU, the dot product operation between two bit vectors can be reduced to bitwise operations and popcounts, which can offer much higher throughput than 32-bit multiplications and additions. To validate the effectiveness of BFCN, we conduct experiments on the PASCAL VOC 2012 semantic segmentation task and Cityscapes. Our BFCN with 1-bit weights and 2-bit activations, which runs 7.8x faster on CPU or requires less than 1% resources on FPGA, can achieve comparable performance as the 32-bit counterpart. | To utilize scene-parsing networks in low-latency or real-time applications, their computational complexity needs to be significantly reduced. Some methods @cite_28 @cite_22 have been proposed to reduce the computational resource demands of FCNs by simplifying or redesigning the network architecture. | {
"cite_N": [
"@cite_28",
"@cite_22"
],
"mid": [
"2419448466",
"2515655118"
],
"abstract": [
"The ability to perform pixel-wise semantic segmentation in real-time is of paramount importance in practical mobile applications. Recent deep neural networks aimed at this task have the disadvantage of requiring a large number of floating point operations and have long run-times that hinder their usability. In this paper, we propose a novel deep neural network architecture named ENet (efficient neural network), created specifically for tasks requiring low latency operation. ENet is up to 18x faster, requires 75x less FLOPs, has 79x less parameters, and provides similar or better accuracy to existing models. We have tested it on CamVid, Cityscapes and SUN datasets and report on comparisons with existing state-of-the-art methods, and the trade-offs between accuracy and processing time of a network. We present performance measurements of the proposed architecture on embedded systems and suggest possible software improvements that could make ENet even faster.",
"This paper presents how we can achieve the state-of-the-art accuracy in multi-category object detection task while minimizing the computational cost by adapting and combining recent technical innovations. Following the common pipeline of \"CNN feature extraction + region proposal + RoI classification\", we mainly redesign the feature extraction part, since region proposal part is not computationally expensive and classification part can be efficiently compressed with common techniques like truncated SVD. Our design principle is \"less channels with more layers\" and adoption of some building blocks including concatenated ReLU, Inception, and HyperNet. The designed network is deep and thin and trained with the help of batch normalization, residual connections, and learning rate scheduling based on plateau detection. We obtained solid results on well-known object detection benchmarks: 83.8 mAP (mean average precision) on VOC2007 and 82.5 mAP on VOC2012 (2nd place), while taking only 750ms/image on Intel i7-6700K CPU with a single core and 46ms/image on NVIDIA Titan X GPU. Theoretically, our network requires only 12.3% of the computational cost compared to ResNet-101, the winner on VOC2012."
]
} |
1612.00089 | 2559467329 | Object-to-camera motion produces a variety of apparent motion patterns that significantly affect performance of short-term visual trackers. Despite being crucial for designing robust trackers, their influence is poorly explored in standard benchmarks due to weakly defined, biased and overlapping attribute annotations. In this paper we propose to go beyond pre-recorded benchmarks with post-hoc annotations by presenting an approach that utilizes omnidirectional videos to generate realistic, consistently annotated, short-term tracking scenarios with exactly parameterized motion patterns. We have created an evaluation system, constructed a fully annotated dataset of omnidirectional videos and the generators for typical motion patterns. We provide an in-depth analysis of major tracking paradigms which is complementary to the standard benchmarks and confirms the expressiveness of our evaluation approach. | The use of computer graphics in training and evaluation has recently been popularized in computer vision. @cite_27 propose online creation of virtual worlds for drone tracking evaluation, but using only a single object type, without motion parametrization, yields a low level of realism. @cite_32 address the realism levels of virtual worlds, learning of ambient parametrization, and performance evaluation, however, only for vehicle detection. | {
"cite_N": [
"@cite_27",
"@cite_32"
],
"mid": [
"2518876086",
"2949907962"
],
"abstract": [
"In this paper, we propose a new aerial video dataset and benchmark for low altitude UAV target tracking, as well as, a photo-realistic UAV simulator that can be coupled with tracking methods. Our benchmark provides the first evaluation of many state-of-the-art and popular trackers on 123 new and fully annotated HD video sequences captured from a low-altitude aerial perspective. Among the compared trackers, we determine which ones are the most suitable for UAV tracking both in terms of tracking accuracy and run-time. The simulator can be used to evaluate tracking algorithms in real-time scenarios before they are deployed on a UAV “in the field”, as well as, generate synthetic but photo-realistic tracking datasets with automatic ground truth annotations to easily extend existing real-world datasets. Both the benchmark and simulator are made publicly available to the vision community on our website to further research in the area of object tracking from UAVs. (https://ivul.kaust.edu.sa/Pages/pub-benchmark-simulator-uav.aspx).",
"Modern computer vision algorithms typically require expensive data acquisition and accurate manual labeling. In this work, we instead leverage the recent progress in computer graphics to generate fully labeled, dynamic, and photo-realistic proxy virtual worlds. We propose an efficient real-to-virtual world cloning method, and validate our approach by building and publicly releasing a new video dataset, called Virtual KITTI (see this http URL), automatically labeled with accurate ground truth for object detection, tracking, scene and instance segmentation, depth, and optical flow. We provide quantitative experimental evidence suggesting that (i) modern deep learning algorithms pre-trained on real data behave similarly in real and virtual worlds, and (ii) pre-training on virtual data improves performance. As the gap between real and virtual worlds is small, virtual worlds enable measuring the impact of various weather and imaging conditions on recognition performance, all other things being equal. We show these factors may affect drastically otherwise high-performing deep models for tracking."
]
} |
1612.00086 | 2557568853 | We consider the problem of metric learning subject to a set of constraints on relative-distance comparisons between the data items. Such constraints are meant to reflect side-information that is not expressed directly in the feature vectors of the data items. The relative-distance constraints used in this work are particularly effective in expressing structures at a finer level of detail than must-link (ML) and cannot-link (CL) constraints, which are most commonly used for semi-supervised clustering. Relative-distance constraints are thus useful in settings where providing an ML or a CL constraint is difficult because the granularity of the true clustering is unknown. Our main contribution is an efficient algorithm for learning a kernel matrix using the log determinant divergence --- a variant of the Bregman divergence --- subject to a set of relative-distance constraints. The learned kernel matrix can then be employed by many different kernel methods in a wide range of applications. In our experimental evaluations, we consider a semi-supervised clustering setting and show empirically that kernels found by our algorithm yield clusterings of higher quality than existing approaches that either use ML/CL constraints or a different means to implement the supervision using relative comparisons. | As stated in the introduction, our work is based on metric learning . Most of the metric-learning literature, starting with the work of , aims at finding a Mahalanobis matrix subject to either ML/CL or relative-distance constraints. use ML/CL constraints, while present a similar approach to handle relative comparisons. Metric learning often requires solving a semidefinite optimization problem. For instance, @cite_1 propose an online kernel learning method using stochastic gradient descent. However, the method requires an additional projection step onto the positive semidefinite cone. 
This problem becomes easier if a Bregman divergence, in particular the log det divergence, is used to formulate the optimization problem. Such an approach was first used for metric learning by with ML/CL constraints, and subsequently by likewise with ML/CL constraints, as well as by with relative comparisons. Our algorithm also uses the log det divergence, and we extend the technique of to handle relative comparisons. | {
"cite_N": [
"@cite_1"
],
"mid": [
"1573782941"
],
"abstract": [
"Learning a kernel matrix from relative comparison human feedback is an important problem with applications in collaborative filtering, object retrieval, and search. For learning a kernel over a large number of objects, existing methods face significant scalability issues inhibiting the application of these methods to settings where a kernel is learned in an online and timely fashion. In this paper we propose a novel framework called Efficient online Relative comparison Kernel LEarning (ERKLE), for efficiently learning the similarity of a large set of objects in an online manner. We learn a kernel from relative comparisons via stochastic gradient descent, one query response at a time, by taking advantage of the sparse and low-rank properties of the gradient to efficiently restrict the kernel to lie in the space of positive semidefinite matrices. In addition, we derive a passive-aggressive online update for minimally satisfying new relative comparisons as to not disrupt the influence of previously obtained comparisons. Experimentally, we demonstrate a considerable improvement in speed while obtaining improved or comparable accuracy compared to current methods in the online learning setting."
]
} |
1611.10160 | 2558975586 | Students often learn formal methods as part of a software engineering degree programme, without applying these formal methods outside of the specific module(s) dedicated to this subject. In particular, software engineering students often have to build a significant application program system in a substantial project at the end of their programme (in order to demonstrate the application of the things they have learned during the previous taught modules). Our experience shows that the majority of students do not use formal methods in this project work. We report on feedback from the minority of students who did choose to use formal methods in their projects, and give examples of where this was a help and where it was a hindrance. | The need for scientific evidence supporting the importance of teaching formal methods to software engineers was highlighted by Henderson @cite_2 : | {
"cite_N": [
"@cite_2"
],
"mid": [
"1987064735"
],
"abstract": [
"Discrete mathematics, especially logic, plays an implicit role in software engineering similar to the role of continuous mathematics in traditional physically based engineering disciplines."
]
} |
1611.10181 | 2963302548 | Context: Software quality is a complex concept. Therefore, assessing and predicting it is still challenging in practice as well as in research. Activity-based quality models break down this complex concept into concrete definitions, more precisely facts about the system, process, and environment as well as their impact on activities performed on and with the system. However, these models lack an operationalisation that would allow them to be used in assessment and prediction of quality. Bayesian networks have been shown to be a viable means for this task incorporating variables with uncertainty. Objective: The qualitative knowledge contained in activity-based quality models are an abundant basis for building Bayesian networks for quality assessment. This paper describes a four-step approach for deriving systematically a Bayesian network from an assessment goal and a quality model. Method: The four steps of the approach are explained in detail and with running examples. Furthermore, an initial evaluation is performed, in which data from NASA projects and an open source system is obtained. The approach is applied to this data and its applicability is analysed. Results: The approach is applicable to the data from the NASA projects and the open source system. However, the predictive results vary depending on the availability and quality of the data, especially the underlying general distributions. Conclusion: The approach is viable in a realistic context but needs further investigation in case studies in order to analyse its predictive validity. | The basic idea to use Bayesian networks for assessing and predicting software quality has been developed foremost by Fenton, Neil, and Littlewood. They introduced Bayesian networks as a tool in this area and applied them in various contexts related to software quality. In @cite_11 they formulate a critique on current defect prediction models and suggest using Bayesian networks. 
Other researchers have similarly used Bayesian networks for software quality prediction @cite_16 @cite_27 . | {
"cite_N": [
"@cite_27",
"@cite_16",
"@cite_11"
],
"mid": [
"2063473175",
"2024676793",
"2101728371"
],
"abstract": [
"Bayesian network (BN) models can be used to predict the level of fault injection in the different phases of the software development process. This paper describes the techniques used by Motorola to develop BN fault prediction models using product, usage and process information. It also includes a procedure created to validate and calibrate the BN models. Statistical techniques are applied that provide a measurement of the quality of the predictions made by the models, and directions on how the models could be improved. The validation method described in this paper considers two alternative ways to produce an improved network. The first is based mainly on modifying the ranges of the existing nodes, adding interdependencies between them, and varying the weight values associated with each of the nodes that are used as inputs to the intermediate nodes of the BN model. The second possibility uses linear regression and principal component analysis to build the intermediate and output nodes of the network. The paper closes with some encouraging results, and outlines a number of unresolved questions. Copyright © 2006 John Wiley & Sons, Ltd.",
"Recently, software development projects have been required to produce highly reliable systems within a short period and with low cost. In such situation, software quality prediction helps to confirm that the software product satisfies required quality expectations. In this paper, by using a Bayesian Belief Network (BBN), we try to construct a prediction model based on relationships elicited from the embedded software development process. On the one hand, according to a characteristic of embedded software development, we especially propose to classify test and debug activities into two distinct activities on software and hardware. Then we call the proposed model \"the BBN for an embedded software development process\". On the other hand, we define \"the BBN for a general software development process\" to be a model which does not consider this classification of activity, but rather, merges them into a single activity. Finally, we conducted experimental evaluations by applying these two BBNs to actual project data. As the results of our experiments show, the BBN for the embedded software development process is superior to the BBN for the general development process and is applicable effectively for effective practical use.",
"Many organizations want to predict the number of defects (faults) in software systems, before they are deployed, to gauge the likely delivered quality and maintenance effort. To help in this numerous software metrics and statistical models have been developed, with a correspondingly large literature. We provide a critical review of this literature and the state-of-the-art. Most of the wide range of prediction models use size and complexity metrics to predict defects. Others are based on testing data, the \"quality\" of the development process, or take a multivariate approach. The authors of the models have often made heroic contributions to a subject otherwise bereft of empirical studies. However, there are a number of serious theoretical and practical problems in many studies. The models are weak because of their inability to cope with the, as yet, unknown relationship between defects and failures. There are fundamental statistical and data quality problems that undermine model validity. More significantly many prediction models tend to model only part of the underlying problem and seriously misspecify it. To illustrate these points the Goldilock's Conjecture, that there is an optimum module size, is used to show the considerable problems inherent in current defect prediction approaches. Careful and considered analysis of past and new results shows that the conjecture lacks support and that some models are misleading. We recommend holistic models for software defect prediction, using Bayesian belief networks, as alternative approaches to the single-issue models used at present. We also argue for research into a theory of \"software decomposition\" in order to test hypotheses about defect introduction and help construct a better science of software engineering."
]
} |
1611.10181 | 2963302548 | Context: Software quality is a complex concept. Therefore, assessing and predicting it is still challenging in practice as well as in research. Activity-based quality models break down this complex concept into concrete definitions, more precisely facts about the system, process, and environment as well as their impact on activities performed on and with the system. However, these models lack an operationalisation that would allow them to be used in assessment and prediction of quality. Bayesian networks have been shown to be a viable means for this task incorporating variables with uncertainty. Objective: The qualitative knowledge contained in activity-based quality models are an abundant basis for building Bayesian networks for quality assessment. This paper describes a four-step approach for deriving systematically a Bayesian network from an assessment goal and a quality model. Method: The four steps of the approach are explained in detail and with running examples. Furthermore, an initial evaluation is performed, in which data from NASA projects and an open source system is obtained. The approach is applied to this data and its applicability is analysed. Results: The approach is applicable to the data from the NASA projects and the open source system. However, the predictive results vary depending on the availability and quality of the data, especially the underlying general distributions. Conclusion: The approach is viable in a realistic context but needs further investigation in case studies in order to analyse its predictive validity. | Beaver, Schiavone, and Berrios @cite_18 also used a Bayesian network to predict software quality including diverse factors such as team skill and process maturity. In his thesis @cite_10 , Beaver even compared the approach to neural networks and Least Squares regression, both of which were outperformed by the Bayesian network. However, they did not rely on a structured quality model as in our approach. 
| {
"cite_N": [
"@cite_18",
"@cite_10"
],
"mid": [
"2128589942",
"2466294644"
],
"abstract": [
"The ability to reliably predict the end quality of software under development presents a significant advantage for a development team. It provides an opportunity to address high risk components earlier in the development life cycle, when their impact is minimized. This research proposes a model that captures the evolution of the quality of a software product, and provides reliable forecasts of the end quality of the software being developed in terms of product suitability. Development team skill, software process maturity, and software problem complexity are hypothesized as driving factors of software product quality. The cause-effect relationships between these factors and the elements of software suitability are modeled using Bayesian belief networks, a machine learning method. This research presents a Bayesian network for software quality, and the techniques used to quantify the factors that influence and represent software quality. The developed model is found to be effective in predicting the end product quality of small-scale software development efforts.",
"Software practitioners lack a consistent approach to assessing and predicting quality within their products. This research proposes a software quality model that accounts for the influences of development team skill experience, process maturity, and problem complexity throughout the software engineering life cycle. The model is structured using Bayesian Belief Networks and, unlike previous efforts, uses widely-accepted software engineering standards and in-use industry techniques to quantify the indicators and measures of software quality. Data from 28 software engineering projects was acquired for this study, and was used for validation and comparison of the presented software quality models. Three Bayesian model structures are explored and the structure with the highest performance in terms of accuracy of fit and predictive validity is reported. In addition, the Bayesian Belief Networks are compared to both Least Squares Regression and Neural Networks in order to identify the technique is best suited to modeling software product quality. The results indicate that Bayesian Belief Networks outperform both Least Squares Regression and Neural Networks in terms of producing modeled software quality variables that fit the distribution of actual software quality values, and in accurately forecasting 25 different indicators of software quality. Between the Bayesian model structures, the simplest structure, which relates software quality variables to their correlated causal factors, was found to be the most effective in modeling software quality. In addition, the results reveal that the collective skill and experience of the development team, over process maturity or problem complexity, has the most significant impact on the quality of software products."
]
} |
1611.10181 | 2963302548 | Context: Software quality is a complex concept. Therefore, assessing and predicting it is still challenging in practice as well as in research. Activity-based quality models break down this complex concept into concrete definitions, more precisely facts about the system, process, and environment as well as their impact on activities performed on and with the system. However, these models lack an operationalisation that would allow them to be used in assessment and prediction of quality. Bayesian networks have been shown to be a viable means for this task incorporating variables with uncertainty. Objective: The qualitative knowledge contained in activity-based quality models are an abundant basis for building Bayesian networks for quality assessment. This paper describes a four-step approach for deriving systematically a Bayesian network from an assessment goal and a quality model. Method: The four steps of the approach are explained in detail and with running examples. Furthermore, an initial evaluation is performed, in which data from NASA projects and an open source system is obtained. The approach is applied to this data and its applicability is analysed. Results: The approach is applicable to the data from the NASA projects and the open source system. However, the predictive results vary depending on the availability and quality of the data, especially the underlying general distributions. Conclusion: The approach is viable in a realistic context but needs further investigation in case studies in order to analyse its predictive validity. | An earlier version of this paper was published in @cite_4 . It already contained the four-step approach as described in this paper. However, we added three additional projects to the maintainability case and a completely new security case in order to validate the applicability of the approach more broadly. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2004973017"
],
"abstract": [
"Assessing and predicting the complex concept of software quality is still challenging in practice as well as research. Activity-based quality models break down this complex concept into more concrete definitions, more precisely facts about the system, process and environment and their impact on activities performed on and with the system. However, these models lack an operationalisation that allows to use them in assessment and prediction of quality. Bayesian Networks (BN) have been shown to be a viable means for assessment and prediction incorporating variables with uncertainty. This paper describes how activity-based quality models can be used to derive BN models for quality assessment and prediction. The proposed approach is demonstrated in a proof of concept using publicly available data."
]
} |
1611.10187 | 2004973017 | Assessing and predicting the complex concept of software quality is still challenging in practice as well as research. Activity-based quality models break down this complex concept into more concrete definitions, more precisely facts about the system, process and environment and their impact on activities performed on and with the system. However, these models lack an operationalisation that allows to use them in assessment and prediction of quality. Bayesian Networks (BN) have been shown to be a viable means for assessment and prediction incorporating variables with uncertainty. This paper describes how activity-based quality models can be used to derive BN models for quality assessment and prediction. The proposed approach is demonstrated in a proof of concept using publicly available data. | The basic idea to use Bayesian networks for assessing and predicting software quality has been developed mainly by Fenton, Neil and Littlewood. They introduced Bayesian networks as a useful tool and applied them in various contexts related to software quality. In @cite_7 they formulate a critique on current defect prediction models and suggest using Bayesian networks. Other researchers have similarly used Bayesian networks for software quality prediction @cite_6 @cite_16 . | {
"cite_N": [
"@cite_16",
"@cite_6",
"@cite_7"
],
"mid": [
"2063473175",
"2024676793",
"2101728371"
],
"abstract": [
"Bayesian network (BN) models can be used to predict the level of fault injection in the different phases of the software development process. This paper describes the techniques used by Motorola to develop BN fault prediction models using product, usage and process information. It also includes a procedure created to validate and calibrate the BN models. Statistical techniques are applied that provide a measurement of the quality of the predictions made by the models, and directions on how the models could be improved. The validation method described in this paper considers two alternative ways to produce an improved network. The first is based mainly on modifying the ranges of the existing nodes, adding interdependencies between them, and varying the weight values associated with each of the nodes that are used as inputs to the intermediate nodes of the BN model. The second possibility uses linear regression and principal component analysis to build the intermediate and output nodes of the network. The paper closes with some encouraging results, and outlines a number of unresolved questions. Copyright © 2006 John Wiley & Sons, Ltd.",
"Recently, software development projects have been required to produce highly reliable systems within a short period and with low cost. In such situation, software quality prediction helps to confirm that the software product satisfies required quality expectations. In this paper, by using a Bayesian Belief Network (BBN), we try to construct a prediction model based on relationships elicited from the embedded software development process. On the one hand, according to a characteristic of embedded software development, we especially propose to classify test and debug activities into two distinct activities on software and hardware. Then we call the proposed model \"the BBN for an embedded software development process\". On the other hand, we define \"the BBN for a general software development process\" to be a model which does not consider this classification of activity, but rather, merges them into a single activity. Finally, we conducted experimental evaluations by applying these two BBNs to actual project data. As the results of our experiments show, the BBN for the embedded software development process is superior to the BBN for the general development process and is applicable effectively for effective practical use.",
"Many organizations want to predict the number of defects (faults) in software systems, before they are deployed, to gauge the likely delivered quality and maintenance effort. To help in this numerous software metrics and statistical models have been developed, with a correspondingly large literature. We provide a critical review of this literature and the state-of-the-art. Most of the wide range of prediction models use size and complexity metrics to predict defects. Others are based on testing data, the \"quality\" of the development process, or take a multivariate approach. The authors of the models have often made heroic contributions to a subject otherwise bereft of empirical studies. However, there are a number of serious theoretical and practical problems in many studies. The models are weak because of their inability to cope with the, as yet, unknown relationship between defects and failures. There are fundamental statistical and data quality problems that undermine model validity. More significantly many prediction models tend to model only part of the underlying problem and seriously misspecify it. To illustrate these points the Goldilock's Conjecture, that there is an optimum module size, is used to show the considerable problems inherent in current defect prediction approaches. Careful and considered analysis of past and new results shows that the conjecture lacks support and that some models are misleading. We recommend holistic models for software defect prediction, using Bayesian belief networks, as alternative approaches to the single-issue models used at present. We also argue for research into a theory of \"software decomposition\" in order to test hypotheses about defect introduction and help construct a better science of software engineering."
]
} |
1611.10187 | 2004973017 | Assessing and predicting the complex concept of software quality is still challenging in practice as well as research. Activity-based quality models break down this complex concept into more concrete definitions, more precisely facts about the system, process and environment and their impact on activities performed on and with the system. However, these models lack an operationalisation that allows to use them in assessment and prediction of quality. Bayesian Networks (BN) have been shown to be a viable means for assessment and prediction incorporating variables with uncertainty. This paper describes how activity-based quality models can be used to derive BN models for quality assessment and prediction. The proposed approach is demonstrated in a proof of concept using publicly available data. | Beaver, Schiavone and Berrios @cite_12 also used a Bayesian network to predict software quality, including diverse factors such as team skill and process maturity. In his thesis @cite_5, Beaver even compared the approach to neural networks and Least Squares regressions, both of which were outperformed by the Bayesian network. However, they did not rely on a structured quality model as in our approach. | {
"cite_N": [
"@cite_5",
"@cite_12"
],
"mid": [
"2466294644",
"2128589942"
],
"abstract": [
"Software practitioners lack a consistent approach to assessing and predicting quality within their products. This research proposes a software quality model that accounts for the influences of development team skill experience, process maturity, and problem complexity throughout the software engineering life cycle. The model is structured using Bayesian Belief Networks and, unlike previous efforts, uses widely-accepted software engineering standards and in-use industry techniques to quantify the indicators and measures of software quality. Data from 28 software engineering projects was acquired for this study, and was used for validation and comparison of the presented software quality models. Three Bayesian model structures are explored and the structure with the highest performance in terms of accuracy of fit and predictive validity is reported. In addition, the Bayesian Belief Networks are compared to both Least Squares Regression and Neural Networks in order to identify the technique is best suited to modeling software product quality. The results indicate that Bayesian Belief Networks outperform both Least Squares Regression and Neural Networks in terms of producing modeled software quality variables that fit the distribution of actual software quality values, and in accurately forecasting 25 different indicators of software quality. Between the Bayesian model structures, the simplest structure, which relates software quality variables to their correlated causal factors, was found to be the most effective in modeling software quality. In addition, the results reveal that the collective skill and experience of the development team, over process maturity or problem complexity, has the most significant impact on the quality of software products.",
"The ability to reliably predict the end quality of software under development presents a significant advantage for a development team. It provides an opportunity to address high risk components earlier in the development life cycle, when their impact is minimized. This research proposes a model that captures the evolution of the quality of a software product, and provides reliable forecasts of the end quality of the software being developed in terms of product suitability. Development team skill, software process maturity, and software problem complexity are hypothesized as driving factors of software product quality. The cause-effect relationships between these factors and the elements of software suitability are modeled using Bayesian belief networks, a machine learning method. This research presents a Bayesian network for software quality, and the techniques used to quantify the factors that influence and represent software quality. The developed model is found to be effective in predicting the end product quality of small-scale software development efforts."
]
} |
1611.10231 | 2559116024 | Mobile devices have become ubiquitous due to centralization of private user information, contacts, messages and multiple sensors. Google Android, an open-source mobile Operating System (OS), is currently the market leader. Android popularity has motivated the malware authors to employ set of cyber attacks leveraging code obfuscation techniques. Obfuscation is an action that modifies an application (app) code, preserving the original semantics and functionality to evade anti-malware. Code obfuscation is a contentious issue. Theoretical code analysis techniques indicate that, attaining a verifiable and secure obfuscation is impossible. However, obfuscation tools and techniques are popular both among malware developers (to evade anti-malware) and commercial software developers (protect intellectual rights). We conducted a survey to uncover answers to concrete and relevant questions concerning Android code obfuscation and protection techniques. The purpose of this paper is to review code obfuscation and code protection practices, and evaluate efficacy of existing code de-obfuscation tools. In particular, we discuss Android code obfuscation methods, custom app protection techniques, and various de-obfuscation methods. Furthermore, we review and analyse the obfuscation techniques used by malware authors to evade analysis efforts. We believe that, there is a need to investigate efficiency of the defense techniques used for code protection. This survey would be beneficial to the researchers and practitioners, to understand obfuscation and de-obfuscation techniques to propose novel solutions on Android. | @cite_5 discuss the evolution of mobile malware and their infection and distribution techniques, detailing them with different case studies. The authors survey greyware and malicious-app detection techniques proposed between 2010 and 2013. Furthermore, the authors discuss various open research problems, briefly considering the impact of malware detection.
In @cite_21, the authors discuss Android security issues, malware penetration, and various defense methods for app analysis and mobile platform security. Furthermore, the authors briefly review obfuscation techniques. However, they concentrate more on malicious repackaging, a common problem with Android apps. | {
"cite_N": [
"@cite_5",
"@cite_21"
],
"mid": [
"2095470000",
"2060537671"
],
"abstract": [
"Smart devices equipped with powerful sensing, computing and networking capabilities have proliferated lately, ranging from popular smartphones and tablets to Internet appliances, smart TVs, and others that will soon appear (e.g., watches, glasses, and clothes). One key feature of such devices is their ability to incorporate third-party apps from a variety of markets. This poses strong security and privacy issues to users and infrastructure operators, particularly through software of malicious (or dubious) nature that can easily get access to the services provided by the device and collect sensory data and personal information. Malware in current smart devices —mostly smartphones and tablets— have rocketed in the last few years, in some cases supported by sophisticated techniques purposely designed to overcome security architectures currently in use by such devices. Even though important advances have been made on malware detection in traditional personal computers during the last decades, adopting and adapting those techniques to smart devices is a challenging problem. For example, power consumption is one major constraint that makes unaffordable to run traditional detection engines on the device, while externalized (i.e., cloud-based) techniques rise many privacy concerns. This article examines the problem of malware in smart devices and recent progress made in detection techniques. We first present a detailed analysis on how malware has evolved over the last years for the most popular platforms. We identify exhibited behaviors, pursued goals, infection and distribution strategies, etc. and provide numerous examples through case studies of the most relevant specimens. We next survey, classify and discuss efforts made on detecting both malware and other suspicious software (grayware), concentrating on the 20 most relevant techniques proposed between 2010 and 2013. 
Based on the conclusions extracted from this study, we finally provide constructive discussion on open research problems and areas where we believe that more work is needed.",
"Smartphones have become pervasive due to the availability of office applications, Internet, games, vehicle guidance using location-based services apart from conventional services such as voice calls, SMSes, and multimedia services. Android devices have gained huge market share due to the open architecture of Android and the popularity of its application programming interface (APIs) in the developer community. Increased popularity of the Android devices and associated monetary benefits attracted the malware developers, resulting in big rise of the Android malware apps between 2010 and 2014. Academic researchers and commercial antimalware companies have realized that the conventional signature-based and static analysis methods are vulnerable. In particular, the prevalent stealth techniques, such as encryption, code transformation, and environment-aware approaches, are capable of generating variants of known malware. This has led to the use of behavior-, anomaly-, and dynamic-analysis-based methods. Since a single approach may be ineffective against the advanced techniques, multiple complementary approaches can be used in tandem for effective malware detection. The existing reviews extensively cover the smartphone OS security. However, we believe that the security of Android, with particular focus on malware growth, study of antianalysis techniques, and existing detection methodologies, needs an extensive coverage. In this survey, we discuss the Android security enforcement mechanisms, threats to the existing security enforcements and related issues, malware growth timeline between 2010 and 2014, and stealth techniques employed by the malware authors, in addition to the existing detection methods. This review gives an insight into the strengths and shortcomings of the known research methodologies and provides a platform, to the researchers and practitioners, toward proposing the next-generation Android security, analysis, and malware detection techniques."
]
} |
1611.10231 | 2559116024 | Mobile devices have become ubiquitous due to centralization of private user information, contacts, messages and multiple sensors. Google Android, an open-source mobile Operating System (OS), is currently the market leader. Android popularity has motivated the malware authors to employ set of cyber attacks leveraging code obfuscation techniques. Obfuscation is an action that modifies an application (app) code, preserving the original semantics and functionality to evade anti-malware. Code obfuscation is a contentious issue. Theoretical code analysis techniques indicate that, attaining a verifiable and secure obfuscation is impossible. However, obfuscation tools and techniques are popular both among malware developers (to evade anti-malware) and commercial software developers (protect intellectual rights). We conducted a survey to uncover answers to concrete and relevant questions concerning Android code obfuscation and protection techniques. The purpose of this paper is to review code obfuscation and code protection practices, and evaluate efficacy of existing code de-obfuscation tools. In particular, we discuss Android code obfuscation methods, custom app protection techniques, and various de-obfuscation methods. Furthermore, we review and analyse the obfuscation techniques used by malware authors to evade analysis efforts. We believe that, there is a need to investigate efficiency of the defense techniques used for code protection. This survey would be beneficial to the researchers and practitioners, to understand obfuscation and de-obfuscation techniques to propose novel solutions on Android. | @cite_87 evaluate smartphone code protection techniques. The authors analyze and evaluate software de-obfuscation techniques. The survey is more general, targeting software protection techniques and analysis methods, whereas our focus is on Android-specific obfuscation techniques. | {
"cite_N": [
"@cite_87"
],
"mid": [
"2314464932"
],
"abstract": [
"Software obfuscation has always been a controversially discussed research area. While theoretical results indicate that provably secure obfuscation in general is impossible, its widespread application in malware and commercial software shows that it is nevertheless popular in practice. Still, it remains largely unexplored to what extent today’s software obfuscations keep up with state-of-the-art code analysis and where we stand in the arms race between software developers and code analysts. The main goal of this survey is to analyze the effectiveness of different classes of software obfuscation against the continuously improving deobfuscation techniques and off-the-shelf code analysis tools. The answer very much depends on the goals of the analyst and the available resources. On the one hand, many forms of lightweight static analysis have difficulties with even basic obfuscation schemes, which explains the unbroken popularity of obfuscation among malware writers. On the other hand, more expensive analysis techniques, in particular when used interactively by a human analyst, can easily defeat many obfuscations. As a result, software obfuscation for the purpose of intellectual property protection remains highly challenging."
]
} |
1611.10231 | 2559116024 | Mobile devices have become ubiquitous due to centralization of private user information, contacts, messages and multiple sensors. Google Android, an open-source mobile Operating System (OS), is currently the market leader. Android popularity has motivated the malware authors to employ set of cyber attacks leveraging code obfuscation techniques. Obfuscation is an action that modifies an application (app) code, preserving the original semantics and functionality to evade anti-malware. Code obfuscation is a contentious issue. Theoretical code analysis techniques indicate that, attaining a verifiable and secure obfuscation is impossible. However, obfuscation tools and techniques are popular both among malware developers (to evade anti-malware) and commercial software developers (protect intellectual rights). We conducted a survey to uncover answers to concrete and relevant questions concerning Android code obfuscation and protection techniques. The purpose of this paper is to review code obfuscation and code protection practices, and evaluate efficacy of existing code de-obfuscation tools. In particular, we discuss Android code obfuscation methods, custom app protection techniques, and various de-obfuscation methods. Furthermore, we review and analyse the obfuscation techniques used by malware authors to evade analysis efforts. We believe that, there is a need to investigate efficiency of the defense techniques used for code protection. This survey would be beneficial to the researchers and practitioners, to understand obfuscation and de-obfuscation techniques to propose novel solutions on Android. | The proposed review is a comprehensive discussion on source code obfuscation, code protection, Android specific obfuscation, and code protection tools. To the best of our knowledge, we are the first to investigate code protection and malware obfuscation techniques for the Android platform. 
We discuss the Collberg taxonomy @cite_69 and expand the source code and bytecode obfuscation taxonomy. | {
"cite_N": [
"@cite_69"
],
"mid": [
"2146567535"
],
"abstract": [
"We identify three types of attack on the intellectual property contained in software and three corresponding technical defenses. A defense against reverse engineering is obfuscation, a process that renders software unintelligible but still functional. A defense against software piracy is watermarking, a process that makes it possible to determine the origin of software. A defense against tampering is tamper-proofing, so that unauthorized modifications to software (for example, to remove a watermark) will result in nonfunctional code. We briefly survey the available technology for each type of defense."
]
} |
1611.10010 | 2559242806 | We present a Deep Cuboid Detector which takes a consumer-quality RGB image of a cluttered scene and localizes all 3D cuboids (box-like objects). Contrary to classical approaches which fit a 3D model from low-level cues like corners, edges, and vanishing points, we propose an end-to-end deep learning system to detect cuboids across many semantic categories (e.g., ovens, shipping boxes, and furniture). We localize cuboids with a 2D bounding box, and simultaneously localize the cuboid's corners, effectively producing a 3D interpretation of box-like objects. We refine keypoints by pooling convolutional features iteratively, improving the baseline method significantly. Our deep learning cuboid detector is trained in an end-to-end fashion and is suitable for real-time applications in augmented reality (AR) and robotics. | Classical ideas on 3D scene and object recognition originate in Roberts' Blocks-World @cite_52 and Biederman's Recognition-by-Components @cite_56. These early works were overly reliant on bottom-up image processing and thus never worked satisfactorily on real images. Many modern approaches utilize a large training database of 3D models and some form of learning for 2D-to-3D alignment @cite_23 @cite_47 @cite_53 @cite_49. | {
"cite_N": [
"@cite_53",
"@cite_52",
"@cite_56",
"@cite_23",
"@cite_49",
"@cite_47"
],
"mid": [
"2345308174",
"2108729336",
"2156406284",
"2010625607",
"2342277278",
"2341204628"
],
"abstract": [
"Understanding 3D object structure from a single image is an important but difficult task in computer vision, mostly due to the lack of 3D object annotations in real images. Previous work tackles this problem by either solving an optimization task given 2D keypoint positions, or training on synthetic data with ground truth 3D information.",
"Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering, 1963.",
"The perceptual recognition of objects is conceptualized to be a process in which the image of the input is segmented at regions of deep concavity into an arrangement of simple geometric components, such as blocks, cylinders, wedges, and cones. The fundamental assumption of the proposed theory, recognition-by-components (RBC), is that a modest set of generalized-cone components, called geons (N £ 36), can be derived from contrasts of five readily detectable properties of edges in a two-dimensiona l image: curvature, collinearity, symmetry, parallelism, and cotermination. The detection of these properties is generally invariant over viewing position an$ image quality and consequently allows robust object perception when the image is projected from a novel viewpoint or is degraded. RBC thus provides a principled account of the heretofore undecided relation between the classic principles of perceptual organization and pattern recognition: The constraints toward regularization (Pragnanz) characterize not the complete object but the object's components. Representational power derives from an allowance of free combinations of the geons. A Principle of Componential Recovery can account for the major phenomena of object recognition: If an arrangement of two or three geons can be recovered from the input, objects can be quickly recognized even when they are occluded, novel, rotated in depth, or extensively degraded. The results from experiments on the perception of briefly presented pictures by human observers provide empirical support for the theory. Any single object can project an infinity of image configurations to the retina. The orientation of the object to the viewer can vary continuously, each giving rise to a different two-dimensional projection. The object can be occluded by other objects or texture fields, as when viewed behind foliage. The object need not be presented as a full-colored textured image but instead can be a simplified line drawing. 
Moreover, the object can even be missing some of its parts or be a novel exemplar of its particular category. But it is only with rare exceptions that an image fails to be rapidly and readily classified, either as an instance of a familiar object category or as an instance that cannot be so classified (itself a form of classification).",
"This paper poses object category detection in images as a type of 2D-to-3D alignment problem, utilizing the large quantities of 3D CAD models that have been made publicly available online. Using the \"chair\" class as a running example, we propose an exemplar-based 3D category representation, which can explicitly model chairs of different styles as well as the large variation in viewpoint. We develop an approach to establish part-based correspondences between 3D CAD models and real photographs. This is achieved by (i) representing each 3D model using a set of view-dependent mid-level visual elements learned from synthesized views in a discriminative fashion, (ii) carefully calibrating the individual element detectors on a common dataset of negative images, and (iii) matching visual elements to the test image allowing for small mutual deformations but preserving the viewpoint and style constraints. We demonstrate the ability of our system to align 3D models with 2D objects in the challenging PASCAL VOC images, which depict a wide variety of chairs in complex scenes.",
"Inspired by the recent success of methods that employ shape priors to achieve robust 3D reconstructions, we propose a novel recurrent neural network architecture that we call the 3D Recurrent Reconstruction Neural Network (3D-R2N2). The network learns a mapping from images of objects to their underlying 3D shapes from a large collection of synthetic data [13]. Our network takes in one or more images of an object instance from arbitrary viewpoints and outputs a reconstruction of the object in the form of a 3D occupancy grid. Unlike most of the previous works, our network does not require any image annotations or object class labels for training or testing. Our extensive experimental analysis shows that our reconstruction framework (i) outperforms the state-of-the-art methods for single view reconstruction, and (ii) enables the 3D reconstruction of objects in situations when traditional SFM SLAM methods fail (because of lack of texture and or wide baseline).",
"We introduce an approach that leverages surface normal predictions, along with appearance cues, to retrieve 3D models for objects depicted in 2D still images from a large CAD object library. Critical to the success of our approach is the ability to recover accurate surface normals for objects in the depicted scene. We introduce a skip-network model built on the pre-trained Oxford VGG convolutional neural network (CNN) for surface normal prediction. Our model achieves state-of-the-art accuracy on the NYUv2 RGB-D dataset for surface normal prediction, and recovers fine object detail compared to previous methods. Furthermore, we develop a two-stream network over the input image and predicted surface normals that jointly learns pose and style for CAD model retrieval. When using the predicted surface normals, our two-stream network matches prior work using surface normals computed from RGB-D images on the task of pose prediction, and achieves state of the art when using RGB-D input. Finally, our two-stream network allows us to retrieve CAD models that better match the style and pose of a depicted object compared with baseline approaches."
]
} |
1611.10010 | 2559242806 | We present a Deep Cuboid Detector which takes a consumer-quality RGB image of a cluttered scene and localizes all 3D cuboids (box-like objects). Contrary to classical approaches which fit a 3D model from low-level cues like corners, edges, and vanishing points, we propose an end-to-end deep learning system to detect cuboids across many semantic categories (e.g., ovens, shipping boxes, and furniture). We localize cuboids with a 2D bounding box, and simultaneously localize the cuboid's corners, effectively producing a 3D interpretation of box-like objects. We refine keypoints by pooling convolutional features iteratively, improving the baseline method significantly. Our deep learning cuboid detector is trained in an end-to-end fashion and is suitable for real-time applications in augmented reality (AR) and robotics. | Cuboid detection has also been approached with geometry-based methods @cite_9 @cite_17 @cite_10 @cite_57. Shortly after the success of the Deformable Parts Model, researchers extended HOG-based models to cuboids @cite_22 @cite_13. RGB-D-based approaches to cuboid detection are also common @cite_7. | {
"cite_N": [
"@cite_13",
"@cite_22",
"@cite_7",
"@cite_9",
"@cite_57",
"@cite_10",
"@cite_17"
],
"mid": [
"2111087635",
"2118824402",
"2034308880",
"",
"2032293070",
"2116851763",
"2018299767"
],
"abstract": [
"This paper addresses the problem of category-level 3D object detection. Given a monocular image, our aim is to localize the objects in 3D by enclosing them with tight oriented 3D bounding boxes. We propose a novel approach that extends the well-acclaimed deformable part-based model [1] to reason in 3D. Our model represents an object class as a deformable 3D cuboid composed of faces and parts, which are both allowed to deform with respect to their anchors on the 3D box. We model the appearance of each face in fronto-parallel coordinates, thus effectively factoring out the appearance variation induced by viewpoint. Our model reasons about face visibility patters called aspects. We train the cuboid model jointly and discriminatively and share weights across all aspects to attain efficiency. Inference then entails sliding and rotating the box in 3D and scoring object hypotheses. While for inference we discretize the search space, the variables are continuous in our model. We demonstrate the effectiveness of our approach in indoor and outdoor scenarios, and show that our approach significantly outperforms the state-of-the-art in both 2D [1] and 3D object detection [2].",
"In this paper we seek to detect rectangular cuboids and localize their corners in uncalibrated single-view images depicting everyday scenes. In contrast to recent approaches that rely on detecting vanishing points of the scene and grouping line segments to form cuboids, we build a discriminative parts-based detector that models the appearance of the cuboid corners and internal edges while enforcing consistency to a 3D cuboid model. Our model copes with different 3D viewpoints and aspect ratios and is able to detect cuboids across many different object categories. We introduce a database of images with cuboid annotations that spans a variety of indoor and outdoor scenes and show qualitative and quantitative results on our collected database. Our model out-performs baseline detectors that use 2D constraints alone on the task of localizing cuboid corners.",
"We propose a novel linear method to match cuboids in indoor scenes using RGBD images from Kinect. Beyond depth maps, these cuboids reveal important structures of a scene. Instead of directly fitting cuboids to 3D data, we first construct cuboid candidates using super pixel pairs on a RGBD image, and then we optimize the configuration of the cuboids to satisfy the global structure constraints. The optimal configuration has low local matching costs, small object intersection and occlusion, and the cuboids tend to project to a large region in the image, the number of cuboids is optimized simultaneously. We formulate the multiple cuboid matching problem as a mixed integer linear program and solve the optimization efficiently with a branch and bound method. The optimization guarantees the global optimal solution. Our experiments on the Kinect RGBD images of a variety of indoor scenes show that our proposed method is efficient, accurate and robust against object appearance variations, occlusions and strong clutter.",
"",
"We present a human-centric paradigm for scene understanding. Our approach goes beyond estimating 3D scene geometry and predicts the \"workspace\" of a human which is represented by a data-driven vocabulary of human interactions. Our method builds upon the recent work in indoor scene understanding and the availability of motion capture data to create a joint space of human poses and scene geometry by modeling the physical interactions between the two. This joint space can then be used to predict potential human poses and joint locations from a single image. In a way, this work revisits the principle of Gibsonian affor-dances, reinterpreting it for the modern, data-driven era.",
"We study the problem of generating plausible interpretations of a scene from a collection of line segments automatically extracted from a single indoor image. We show that we can recognize the three dimensional structure of the interior of a building, even in the presence of occluding objects. Several physically valid structure hypotheses are proposed by geometric reasoning and verified to find the best fitting model to line segments, which is then converted to a full 3D model. Our experiments demonstrate that our structure recovery from line segments is comparable with methods using full image appearance. Our approach shows how a set of rules describing geometric constraints between groups of segments can be used to prune scene interpretation hypotheses and to generate the most plausible interpretation.",
"In this paper we consider the problem of recovering the free space of an indoor scene from its single image. We show that exploiting the box like geometric structure of furniture and constraints provided by the scene, allows us to recover the extent of major furniture objects in 3D. Our “boxy” detector localizes box shaped objects oriented parallel to the scene across different scales and object types, and thus blocks out the occupied space in the scene. To localize the objects more accurately in 3D we introduce a set of specially designed features that capture the floor contact points of the objects. Image based metrics are not very indicative of performance in 3D. We make the first attempt to evaluate single view based occupancy estimates for 3D errors and propose several task driven performance measures towards it. On our dataset of 592 indoor images marked with full 3D geometry of the scene, we show that: (a) our detector works well using image based metrics; (b) our refinement method produces significant improvements in localization in 3D; and (c) if one evaluates using 3D metrics, our method offers major improvements over other single view based scene geometry estimation methods."
]
} |
1611.10010 | 2559242806 | We present a Deep Cuboid Detector which takes a consumer-quality RGB image of a cluttered scene and localizes all 3D cuboids (box-like objects). Contrary to classical approaches which fit a 3D model from low-level cues like corners, edges, and vanishing points, we propose an end-to-end deep learning system to detect cuboids across many semantic categories (e.g., ovens, shipping boxes, and furniture). We localize cuboids with a 2D bounding box, and simultaneously localize the cuboid's corners, effectively producing a 3D interpretation of box-like objects. We refine keypoints by pooling convolutional features iteratively, improving the baseline method significantly. Our deep learning cuboid detector is trained in an end-to-end fashion and is suitable for real-time applications in augmented reality (AR) and robotics. | Our work revisits the problem of cuboid detection that Xiao et al. introduced in @cite_22. They use a Deformable Parts-based Model that relies on HOG classifiers to detect cuboid vertices in different views. Their model has four components for scoring a final cuboid configuration: the score from the HOG classifier, 2D vertex displacement, an edge alignment score, and a 3D shape score that takes into account how close the predicted vertices are to a cuboid in 3D. Their approach jointly optimizes over visual evidence (corners and edges) found in the image while penalizing predictions that stray too far from an actual 3D cuboid. A major limitation of their approach is the computationally expensive test-time iterative optimization step. Not only is their HOG-based model inferior to its modern CNN-based counterpart (as we demonstrate in our experiments), but their approach also takes more than a minute to process a single image. | {
"cite_N": [
"@cite_22"
],
"mid": [
"2118824402"
],
"abstract": [
"In this paper we seek to detect rectangular cuboids and localize their corners in uncalibrated single-view images depicting everyday scenes. In contrast to recent approaches that rely on detecting vanishing points of the scene and grouping line segments to form cuboids, we build a discriminative parts-based detector that models the appearance of the cuboid corners and internal edges while enforcing consistency to a 3D cuboid model. Our model copes with different 3D viewpoints and aspect ratios and is able to detect cuboids across many different object categories. We introduce a database of images with cuboid annotations that spans a variety of indoor and outdoor scenes and show qualitative and quantitative results on our collected database. Our model out-performs baseline detectors that use 2D constraints alone on the task of localizing cuboid corners."
]
} |
1611.10010 | 2559242806 | We present a Deep Cuboid Detector which takes a consumer-quality RGB image of a cluttered scene and localizes all 3D cuboids (box-like objects). Contrary to classical approaches which fit a 3D model from low-level cues like corners, edges, and vanishing points, we propose an end-to-end deep learning system to detect cuboids across many semantic categories (e.g., ovens, shipping boxes, and furniture). We localize cuboids with a 2D bounding box, and simultaneously localize the cuboid's corners, effectively producing a 3D interpretation of box-like objects. We refine keypoints by pooling convolutional features iteratively, improving the baseline method significantly. Our deep learning cuboid detector is trained in an end-to-end fashion and is suitable for real-time applications in augmented reality (AR) and robotics. | 3D object localization using keypoints is commonly studied @cite_44 @cite_31 @cite_28 @cite_40 @cite_50 @cite_39 . 3D keypoint detection using deep networks is also gaining popularity @cite_53 @cite_20 . There has been a resurgence in work that aims to align 3D models of objects in single view images @cite_19 @cite_41 @cite_42 . | {
"cite_N": [
"@cite_28",
"@cite_41",
"@cite_53",
"@cite_42",
"@cite_39",
"@cite_44",
"@cite_19",
"@cite_40",
"@cite_50",
"@cite_31",
"@cite_20"
],
"mid": [
"2000433020",
"",
"2345308174",
"1946609740",
"1798868054",
"2123456673",
"1949568868",
"2058761328",
"",
"2006249718",
""
],
"abstract": [
"We present a framework that retains ambiguity in feature matching to increase the performance of 3D object recognition systems. Whereas previous systems removed ambiguous correspondences during matching, we show that ambiguity should be resolved during hypothesis testing and not at the matching phase. To preserve ambiguity during matching, we vector quantize and match model features in a hierarchical manner. This matching technique allows our system to be more robust to the distribution of model descriptors in feature space. We also show that we can address recognition under arbitrary viewpoint by using our framework to facilitate matching of additional features extracted from affine transformed model images. The evaluation of our algorithms in 3D object recognition is demonstrated on a difficult dataset of 620 images.",
"",
"Understanding 3D object structure from a single image is an important but difficult task in computer vision, mostly due to the lack of 3D object annotations in real images. Previous work tackles this problem by either solving an optimization task given 2D keypoint positions, or training on synthetic data with ground truth 3D information.",
"Despite the great progress achieved in recognizing objects as 2D bounding boxes in images, it is still very challenging to detect occluded objects and estimate the 3D properties of multiple objects from a single image. In this paper, we propose a novel object representation, 3D Voxel Pattern (3DVP), that jointly encodes the key properties of objects including appearance, 3D shape, viewpoint, occlusion and truncation. We discover 3DVPs in a data-driven way, and train a bank of specialized detectors for a dictionary of 3DVPs. The 3DVP detectors are capable of detecting objects with specific visibility patterns and transferring the meta-data from the 3DVPs to the detected objects, such as 2D segmentation mask, 3D pose as well as occlusion or truncation boundaries. The transferred meta-data allows us to infer the occlusion relationship among objects, which in turn provides improved object recognition results. Experiments are conducted on the KITTI detection benchmark [17] and the outdoor-scene dataset [41]. We improve state-of-the-art results on car detection and pose estimation with notable margins (6% in difficult data of KITTI). We also verify the ability of our method in accurately segmenting objects from the background and localizing them in 3D.",
"We present a method that estimates in real-time and under challenging conditions the 3D pose of a known object. Our method relies only on grayscale images since depth cameras fail on metallic objects; it can handle poorly textured objects, and cluttered, changing environments; the pose it predicts degrades gracefully in presence of large occlusions. As a result, by contrast with the state-of-the-art, our method is suitable for practical Augmented Reality applications even in industrial environments. To be robust to occlusions, we first learn to detect some parts of the target object. Our key idea is to then predict the 3D pose of each part in the form of the 2D projections of a few control points. The advantages of this representation are three-fold: we can predict the 3D pose of the object even when only one part is visible; when several parts are visible, we can combine them easily to compute a better pose of the object; the 3D pose we obtain is usually very accurate, even when only few parts are visible.",
"We propose a novel and robust model to represent and learn generic 3D object categories. We aim to solve the problem of true 3D object categorization for handling arbitrary rotations and scale changes. Our approach is to capture a compact model of an object category by linking together diagnostic parts of the objects from different viewing points. We emphasize on the fact that our \"parts\" are large and discriminative regions of the objects that are composed of many local invariant features. Instead of recovering a full 3D geometry, we connect these parts through their mutual homographic transformation. The resulting model is a compact summarization of both the appearance and geometry information of the object class. We propose a framework in which learning is done via minimal supervision compared to previous works. Our results on categorization show superior performances to state-of-the-art algorithms such as (, 2006). Furthermore, we have compiled a new 3D object dataset that consists of 10 different object categories. We have tested our algorithm on this dataset and have obtained highly promising results.",
"The goal of this work is to represent objects in an RGB-D scene with corresponding 3D models from a library. We approach this problem by first detecting and segmenting object instances in the scene and then using a convolutional neural network (CNN) to predict the pose of the object. This CNN is trained using pixel surface normals in images containing renderings of synthetic objects. When tested on real data, our method outperforms alternative algorithms trained on real data. We then use this coarse pose estimate along with the inferred pixel support to align a small number of prototypical models to the data, and place into the scene the model that fits best. We observe a 48% relative improvement in performance at the task of 3D detection over the current state-of-the-art [34], while being an order of magnitude faster.",
"We present MOPED, a framework for Multiple Object Pose Estimation and Detection that seamlessly integrates single-image and multi-image object recognition and pose estimation in one optimized, robust, and scalable framework. We address two main challenges in computer vision for robotics: robust performance in complex scenes, and low latency for real-time operation. We achieve robust performance with Iterative Clustering Estimation (ICE), a novel algorithm that iteratively combines feature clustering with robust pose estimation. Feature clustering quickly partitions the scene and produces object hypotheses. The hypotheses are used to further refine the feature clusters, and the two steps iterate until convergence. ICE is easy to parallelize, and easily integrates single- and multi-camera object recognition and pose estimation. We also introduce a novel object hypothesis scoring function based on M-estimator theory, and a novel pose clustering algorithm that robustly handles recognition outliers. We achieve scalability and low latency with an improved feature matching algorithm for large databases, a GPU/CPU hybrid architecture that exploits parallelism at all levels, and an optimized resource scheduler. We provide extensive experimental results demonstrating state-of-the-art performance in terms of recognition, scalability, and latency in real-world robotic applications.",
"",
"One of the grand challenges of artificial intelligence is to enable computers to interpret 3D scenes and objects from imagery. This book organizes and introduces major concepts in 3D scene and object representation and inference from still images, with a focus on recent efforts to fuse models of geometry and perspective with statistical machine learning. The book is organized into three sections: (1) Interpretation of Physical Space; (2) Recognition of 3D Objects; and (3) Integrated 3D Scene Interpretation. The first discusses representations of spatial layout and techniques to interpret physical scenes from images. The second section introduces representations for 3D object categories that account for the intrinsically 3D nature of objects and provide robustness to change in viewpoints. The third section discusses strategies to unite inference of scene geometry and object pose and identity into a coherent scene interpretation. Each section broadly surveys important ideas from cognitive science and artificial intelligence research, organizes and discusses key concepts and techniques from recent work in computer vision, and describes a few sample approaches in detail. Newcomers to computer vision will benefit from introductions to basic concepts, such as single-view geometry and image classification, while experts and novices alike may find inspiration from the book's organization and discussion of the most recent ideas in 3D scene understanding and 3D object recognition. Specific topics include: mathematics of perspective geometry; visual elements of the physical scene, structural 3D scene representations; techniques and features for image and region categorization; historical perspective, computational models, and datasets and machine learning techniques for 3D object recognition; inferences of geometrical attributes of objects, such as size and pose; and probabilistic and feature-passing approaches for contextual reasoning about 3D objects and scenes. 
Table of Contents: Background on 3D Scene Models; Single-view Geometry; Modeling the Physical Scene; Categorizing Images and Regions; Examples of 3D Scene Interpretation; Background on 3D Recognition; Modeling 3D Objects; Recognizing and Understanding 3D Objects; Examples of 2D 1/2 Layout Models; Reasoning about Objects and Scenes; Cascades of Classifiers; Conclusion and Future Directions",
""
]
} |
1611.10010 | 2559242806 | We present a Deep Cuboid Detector which takes a consumer-quality RGB image of a cluttered scene and localizes all 3D cuboids (box-like objects). Contrary to classical approaches which fit a 3D model from low-level cues like corners, edges, and vanishing points, we propose an end-to-end deep learning system to detect cuboids across many semantic categories (e.g., ovens, shipping boxes, and furniture). We localize cuboids with a 2D bounding box, and simultaneously localize the cuboid's corners, effectively producing a 3D interpretation of box-like objects. We refine keypoints by pooling convolutional features iteratively, improving the baseline method significantly. Our deep learning cuboid detector is trained in an end-to-end fashion and is suitable for real-time applications in augmented reality (AR) and robotics. | The iterative vertex refinement component of our approach is similar to the iterative error feedback approach of @cite_43 , the network cascades in @cite_46 , the iterative bounding box regression of Multi-region CNN @cite_3 and Inside-Outside Networks @cite_8 . Such iterative models have been reinterpreted as Recurrent Neural Networks in @cite_54 @cite_6 , and while most applications focus on human pose estimation, the ideas can easily be extended to cuboid detection. | {
"cite_N": [
"@cite_8",
"@cite_54",
"@cite_3",
"@cite_6",
"@cite_43",
"@cite_46"
],
"mid": [
"",
"2363162442",
"1932624639",
"2518965973",
"1537698211",
"2949295283"
],
"abstract": [
"",
"We propose a novel ConvNet model for predicting 2D human body poses in an image. The model regresses a heatmap representation for each body keypoint, and is able to learn and represent both the part appearances and the context of the part configuration. We make the following three contributions: (i) an architecture combining a feed forward module with a recurrent module, where the recurrent module can be run iteratively to improve the performance; (ii) the model can be trained end-to-end and from scratch, with auxiliary losses incorporated to improve performance; (iii) we investigate whether keypoint visibility can also be predicted. The model is evaluated on two benchmark datasets. The result is a simple architecture that achieves performance on par with the state of the art, but without the complexity of a graphical model stage (or layers).",
"We propose an object detection system that relies on a multi-region deep convolutional neural network (CNN) that also encodes semantic segmentation-aware features. The resulting CNN-based representation aims at capturing a diverse set of discriminative appearance factors and exhibits localization sensitivity that is essential for accurate object localization. We exploit the above properties of our recognition module by integrating it on an iterative localization mechanism that alternates between scoring a box proposal and refining its location with a deep CNN regression model. Thanks to the efficient use of our modules, we detect objects with very high localization accuracy. On the detection challenges of PASCAL VOC2007 and PASCAL VOC2012 we achieve mAP of 78.2% and 73.9% correspondingly, surpassing any other published work by a significant margin.",
"This paper is on human pose estimation using Convolutional Neural Networks. Our main contribution is a CNN cascaded architecture specifically designed for learning part relationships and spatial context, and robustly inferring pose even for the case of severe part occlusions. To this end, we propose a detection-followed-by-regression CNN cascade. The first part of our cascade outputs part detection heatmaps and the second part performs regression on these heatmaps. The benefits of the proposed architecture are multi-fold: It guides the network where to focus in the image and effectively encodes part constraints and context. More importantly, it can effectively cope with occlusions because part detection heatmaps for occluded parts provide low confidence scores which subsequently guide the regression part of our network to rely on contextual information in order to predict the location of these parts. Additionally, we show that the proposed cascade is flexible enough to readily allow the integration of various CNN architectures for both detection and regression, including recent ones based on residual learning. Finally, we illustrate that our cascade achieves top performance on the MPII and LSP data sets. Code can be downloaded from http: www.cs.nott.ac.uk psxab5 .",
"Hierarchical feature extractors such as Convolutional Networks (ConvNets) have achieved impressive performance on a variety of classification tasks using purely feedforward processing. Feedforward architectures can learn rich representations of the input space but do not explicitly model dependencies in the output spaces, that are quite structured for tasks such as articulated human pose estimation or object segmentation. Here we propose a framework that expands the expressive power of hierarchical feature extractors to encompass both input and output spaces, by introducing top-down feedback. Instead of directly predicting the outputs in one go, we use a self-correcting model that progressively changes an initial solution by feeding back error predictions, in a process we call Iterative Error Feedback (IEF). IEF shows excellent performance on the task of articulated pose estimation in the challenging MPII and LSP benchmarks, matching the state-of-the-art without requiring ground truth scale annotation.",
"Semantic segmentation research has recently witnessed rapid progress, but many leading methods are unable to identify object instances. In this paper, we present Multi-task Network Cascades for instance-aware semantic segmentation. Our model consists of three networks, respectively differentiating instances, estimating masks, and categorizing objects. These networks form a cascaded structure, and are designed to share their convolutional features. We develop an algorithm for the nontrivial end-to-end training of this causal, cascaded structure. Our solution is a clean, single-step training framework and can be generalized to cascades that have more stages. We demonstrate state-of-the-art instance-aware semantic segmentation accuracy on PASCAL VOC. Meanwhile, our method takes only 360ms testing an image using VGG-16, which is two orders of magnitude faster than previous systems for this challenging problem. As a by-product, our method also achieves compelling object detection results which surpass the competitive Fast/Faster R-CNN systems. The method described in this paper is the foundation of our submissions to the MS COCO 2015 segmentation competition, where we won the 1st place."
]
} |
1611.09960 | 2949499025 | Large-scale datasets have driven the rapid development of deep neural networks for visual recognition. However, annotating a massive dataset is expensive and time-consuming. Web images and their labels are, in comparison, much easier to obtain, but direct training on such automatically harvested images can lead to unsatisfactory performance, because the noisy labels of Web images adversely affect the learned recognition models. To address this drawback we propose an end-to-end weakly-supervised deep learning framework which is robust to the label noise in Web images. The proposed framework relies on two unified strategies -- random grouping and attention -- to effectively reduce the negative impact of noisy web image annotations. Specifically, random grouping stacks multiple images into a single training instance and thus increases the labeling accuracy at the instance level. Attention, on the other hand, suppresses the noisy signals from both incorrectly labeled images and less discriminative image regions. By conducting intensive experiments on two challenging datasets, including a newly collected fine-grained dataset with Web images of different car models, the superior performance of the proposed methods over competitive baselines is clearly demonstrated. | Related to our work, the attentive mechanisms have been applied to many computer vision tasks @cite_45 @cite_51 @cite_31 @cite_4 @cite_14 @cite_59 @cite_34 @cite_13 @cite_26 @cite_54 to help improve the performance. To guide the models' focus on the objects specified by the question or caption, attention models are designed to pay attention to local CNN features in the input image @cite_53 @cite_40 @cite_14 @cite_59 @cite_54 . The attentive mechanism has also been used to handle sequential problems in neural machine translation @cite_8 @cite_44 and manage memory access mechanisms for memory networks @cite_5 and neural turing machines @cite_55 . 
Different from the above methods, we are the first to apply the attention mechanism to cope with noisy labels. It can not only detect discriminative local feature regions, but also serves to filter out noisy signals from the mislabeled samples in the training instance. | {
"cite_N": [
"@cite_13",
"@cite_14",
"@cite_4",
"@cite_26",
"@cite_8",
"@cite_54",
"@cite_53",
"@cite_55",
"@cite_44",
"@cite_40",
"@cite_45",
"@cite_59",
"@cite_5",
"@cite_31",
"@cite_34",
"@cite_51"
],
"mid": [
"",
"",
"",
"",
"2133564696",
"",
"2950178297",
"2950527759",
"",
"2171810632",
"1484210532",
"",
"",
"",
"",
""
],
"abstract": [
"",
"",
"",
"",
"Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.",
"",
"Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.",
"We extend the capabilities of neural networks by coupling them to external memory resources, which they can interact with by attentional processes. The combined system is analogous to a Turing Machine or Von Neumann architecture but is differentiable end-to-end, allowing it to be efficiently trained with gradient descent. Preliminary results demonstrate that Neural Turing Machines can infer simple algorithms such as copying, sorting, and associative recall from input and output examples.",
"",
"This paper presents stacked attention networks (SANs) that learn to answer natural language questions from images. SANs use semantic representation of a question as query to search for the regions in an image that are related to the answer. We argue that image question answering (QA) often requires multiple steps of reasoning. Thus, we develop a multiple-layer SAN in which we query an image multiple times to infer the answer progressively. Experiments conducted on four image QA data sets demonstrate that the proposed SANs significantly outperform previous state-of-the-art approaches. The visualization of the attention layers illustrates the progress that the SAN locates the relevant visual clues that lead to the answer of the question layer-by-layer.",
"We present an attention-based model for recognizing multiple objects in images. The proposed model is a deep recurrent neural network trained with reinforcement learning to attend to the most relevant regions of the input image. We show that the model learns to both localize and recognize multiple objects despite being given only class labels during training. We evaluate the model on the challenging task of transcribing house number sequences from Google Street View images and show that it is both more accurate than the state-of-the-art convolutional networks and uses fewer parameters and less computation.",
"",
"",
"",
"",
""
]
} |
1611.09799 | 2952627838 | This paper proposes a simple test for compositionality (i.e., literal usage) of a word or phrase in a context-specific way. The test is computationally simple, relying on no external resources and only uses a set of trained word vectors. Experiments show that the proposed method is competitive with state of the art and displays high accuracy in context-specific compositionality detection of a variety of natural language phenomena (idiomaticity, sarcasm, metaphor) for different datasets in multiple languages. The key insight is to connect compositionality to a curious geometric property of word embeddings, which is of independent interest. | Average sentence approximation : Using the average of word embeddings to represent the sentence is a simple, yet robust, approach in several settings. For instance, such a representation is successfully used for sentential sentiment prediction @cite_34 and in @cite_5 to study text similarity. Average word embeddings are also used @cite_45 in conjunction with a neural network architecture to predict the surrounding sentences from the input sentence embeddings. Computational models of sentential semantics have also been shown to be robustly handled by average word embeddings @cite_46 @cite_26 @cite_27 @cite_41 . In the compositionality testing experiments of this paper, the average representation performs reasonably well, although the subspace representation is statistically significantly superior. | {
"cite_N": [
"@cite_26",
"@cite_41",
"@cite_27",
"@cite_45",
"@cite_5",
"@cite_46",
"@cite_34"
],
"mid": [
"2400549584",
"2175723921",
"2515741950",
"",
"2028742638",
"1591825359",
""
],
"abstract": [
"Computational models of semantics have emerged as powerful tools for natural language processing. Recent work has developed models to handle compositionality, but these models have typically been evaluated on large, uncontrolled corpora. In this paper, we constructed a controlled set of phrase pairs and collected phrase similarity judgments, revealing novel insights into human semantic representation. None of the computational models that we considered were able to capture the pattern of human judgments. The results of a second experiment, using the same stimuli with a transformational judgment task, support a transformational account of similarity, according to which the similarity between phrases is inversely related to the number of edits required to transform one mental model into another. Taken together, our results indicate that popular models of compositional semantics do not capture important facets of human semantic representation.",
"We consider the problem of learning general-purpose, paraphrastic sentence embeddings based on supervision from the Paraphrase Database (, 2013). We compare six compositional architectures, evaluating them on annotated textual similarity datasets drawn both from the same distribution as the training data and from a wide range of other domains. We find that the most complex architectures, such as long short-term memory (LSTM) recurrent neural networks, perform best on the in-domain data. However, in out-of-domain scenarios, simple architectures such as word averaging vastly outperform LSTMs. Our simplest averaging model is even competitive with systems tuned for the particular tasks while also being extremely efficient and easy to use. In order to better understand how these architectures compare, we conduct further experiments on three supervised NLP tasks: sentence similarity, entailment, and sentiment classification. We again find that the word averaging models perform well for sentence similarity and entailment, outperforming LSTMs. However, on sentiment classification, we find that the LSTM performs very strongly -- even recording new state-of-the-art performance on the Stanford Sentiment Treebank. We then demonstrate how to combine our pretrained sentence embeddings with these supervised tasks, using them both as a prior and as a black box feature extractor. This leads to performance rivaling the state of the art on the SICK similarity and entailment tasks. We release all of our resources to the research community with the hope that they can serve as the new baseline for further work on universal sentence embeddings.",
"There is a lot of research interest in encoding variable length sentences into fixed length vectors, in a way that preserves the sentence meanings. Two common methods include representations based on averaging word vectors, and representations based on the hidden states of recurrent neural networks such as LSTMs. The sentence vectors are used as features for subsequent machine learning tasks or for pre-training in the context of deep learning. However, not much is known about the properties that are encoded in these sentence representations and about the language information they capture. We propose a framework that facilitates better understanding of the encoded representations. We define prediction tasks around isolated aspects of sentence structure (namely sentence length, word content, and word order), and score representations by the ability to train a classifier to solve each prediction task when using the representation as input. We demonstrate the potential contribution of the approach by analyzing different sentence representation mechanisms. The analysis sheds light on the relative strengths of different sentence embedding methods with respect to these low level prediction tasks, and on the effect of the encoded vector's dimensionality on the resulting representations.",
"",
"Determining semantic similarity between texts is important in many tasks in information retrieval such as search, query suggestion, automatic summarization and image finding. Many approaches have been suggested, based on lexical matching, handcrafted patterns, syntactic parse trees, external sources of structured semantic knowledge and distributional semantics. However, lexical features, like string matching, do not capture semantic similarity beyond a trivial level. Furthermore, handcrafted patterns and external sources of structured semantic knowledge cannot be assumed to be available in all circumstances and for all domains. Lastly, approaches depending on parse trees are restricted to syntactically well-formed texts, typically of one sentence in length. We investigate whether determining short text similarity is possible using only semantic features---where by semantic we mean, pertaining to a representation of meaning---rather than relying on similarity in lexical or syntactic representations. We use word embeddings, vector representations of terms, computed from unlabelled data, that represent terms in a semantic space in which proximity of vectors can be interpreted as semantic similarity. We propose to go from word-level to text-level semantics by combining insights from methods based on external sources of semantic knowledge with word embeddings. A novel feature of our approach is that an arbitrary number of word embedding sets can be incorporated. We derive multiple types of meta-features from the comparison of the word vectors for short text pairs, and from the vector means of their respective word embeddings. The features representing labelled short text pairs are used to train a supervised learning algorithm. 
We use the trained model at testing time to predict the semantic similarity of new, unlabelled pairs of short texts. We show on a publicly available evaluation set commonly used for the task of semantic similarity that our method outperforms baseline methods that work under the same conditions.",
"Answer sentence selection is the task of identifying sentences that contain the answer to a given question. This is an important problem in its own right as well as in the larger context of open domain question answering. We propose a novel approach to solving this task via means of distributed representations, and learn to match questions with answers by considering their semantic encoding. This contrasts prior work on this task, which typically relies on classifiers with large numbers of hand-crafted syntactic and semantic features and various external resources. Our approach does not require any feature engineering nor does it involve specialist linguistic data, making this model easily applicable to a wide range of domains and languages. Experimental results on a standard benchmark dataset from TREC demonstrate that---despite its simplicity---our model matches state of the art performance on the answer sentence selection task.",
""
]
} |
1611.09799 | 2952627838 | This paper proposes a simple test for compositionality (i.e., literal usage) of a word or phrase in a context-specific way. The test is computationally simple, relying on no external resources and only uses a set of trained word vectors. Experiments show that the proposed method is competitive with state of the art and displays high accuracy in context-specific compositionality detection of a variety of natural language phenomena (idiomaticity, sarcasm, metaphor) for different datasets in multiple languages. The key insight is to connect compositionality to a curious geometric property of word embeddings, which is of independent interest. | In terms of distributed representation, methods include Latent Semantic Analysis @cite_6 and word embeddings, which have been extraordinarily successful representations of word semantics, e.g., word2vec and GloVe @cite_29 @cite_43 @cite_32 . @cite_2 is a recent work exploring compositionality in conjunction with word embeddings; however, an aspect not considered is that compositionality does not only depend on the phrase but also on its context -- this results in an inability to identify the context-based compositionality of polysemous phrases like . | {
"cite_N": [
"@cite_29",
"@cite_32",
"@cite_6",
"@cite_43",
"@cite_2"
],
"mid": [
"",
"2949364118",
"2074228526",
"2250539671",
"2293834552"
],
"abstract": [
"",
"There is rising interest in vector-space word embeddings and their use in NLP, especially given recent methods for their fast estimation at very large scale. Nearly all this work, however, assumes a single vector per word type ignoring polysemy and thus jeopardizing their usefulness for downstream tasks. We present an extension to the Skip-gram model that efficiently learns multiple embeddings per word type. It differs from recent related work by jointly performing word sense discrimination and embedding learning, by non-parametrically estimating the number of senses per word type, and by its efficiency and scalability. We present new state-of-the-art results in the word similarity in context task and demonstrate its scalability by training with one machine on a corpus of nearly 1 billion tokens in less than 6 hours.",
"Making use of latent semantic analysis, we explore the hypothesis that local linguistic context can serve to identify multi-word expressions that have non-compositional meanings. We propose that vector-similarity between distribution vectors associated with an MWE as a whole and those associated with its constituent parts can serve as a good measure of the degree to which the MWE is compositional. We present experiments that show that low (cosine) similarity does, in fact, correlate with non-compositionality.",
"Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global logbilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75% on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition.",
"This paper presents the first attempt to use word embeddings to predict the compositionality of multiword expressions. We consider both single- and multi-prototype word embeddings. Experimental results show that, in combination with a back-off method based on string similarity, word embeddings outperform a method using count-based distributional similarity. Our best results are competitive with, or superior to, state-of-the-art methods over three standard compositionality datasets, which include two types of multiword expressions and two languages."
]
} |
1611.09799 | 2952627838 | This paper proposes a simple test for compositionality (i.e., literal usage) of a word or phrase in a context-specific way. The test is computationally simple, relying on no external resources and only uses a set of trained word vectors. Experiments show that the proposed method is competitive with state of the art and displays high accuracy in context-specific compositionality detection of a variety of natural language phenomena (idiomaticity, sarcasm, metaphor) for different datasets in multiple languages. The key insight is to connect compositionality to a curious geometric property of word embeddings, which is of independent interest. | Sarcasm Detection Sarcasm is a figurative expression conveying a meaning that is opposite of its literal one, usually in an implicit way, and is a crucial component in sentiment analysis . Such connections are explored in @cite_4 via a rule-based method of identifying known sarcastic phrases. Semi-supervised sarcasm identification algorithms are identified in @cite_17 @cite_11 @cite_8 @cite_4 , each using different sets of features (e.g., word senses, uni-, bi- and trigrams) that are then fed into a classification system tuned on a large training dataset. Metaphor Detection Metaphors offer figurative interpretations and are a key feature of natural language @cite_36 . @cite_37 considers metaphor expression as a mapping from a source domain to a target domain, and develops a corpus-based system, CorMet, to discover such metaphorical equivalences based on WordNet. @cite_31 hypothesises that metaphorical usage is related to the degree of contextual abstractness, which they quantify relying on the MRC Psycholinguistic Database Machine Usable Dictionary (MRCPD) @cite_7 . | {
"cite_N": [
"@cite_37",
"@cite_4",
"@cite_7",
"@cite_8",
"@cite_36",
"@cite_17",
"@cite_31",
"@cite_11"
],
"mid": [
"2050797400",
"2251958472",
"1985449126",
"2114661483",
"1949881471",
"2165044314",
"",
"2250710744"
],
"abstract": [
"CorMet is a corpus-based system for discovering metaphorical mappings between concepts. It does this by finding systematic variations in domain-specific selectional preferences, which are inferred from large, dynamically mined Internet corpora.Metaphors transfer structure from a source domain to a target domain, making some concepts in the target domain metaphorically equivalent to concepts in the source domain. The verbs that select for a concept in the source domain tend to select for its metaphorical equivalent in the target domain. This regularity, detectable with a shallow linguistic analysis, is used to find the metaphorical interconcept mappings, which can then be used to infer the existence of higher-level conventional metaphors.Most other computational metaphor systems use small, hand-coded semantic knowledge bases and work on a few examples. Although CorMet's only knowledge base is WordNet (Fellbaum 1998) it can find the mappings constituting many conventional metaphors and in some cases recognize sentences instantiating those mappings. CorMet is tested on its ability to find a subset of the Master Metaphor List (Lakoff, Espenson, and Schwartz 1991).",
"Sarcasm is a common phenomenon in social media, and is inherently difficult to analyse, not just automatically but often for humans too. It has an important effect on sentiment, but is usually ignored in social media analysis, because it is considered too tricky to handle. While there exist a few systems which can detect sarcasm, almost no work has been carried out on studying the effect that sarcasm has on sentiment in tweets, and on incorporating this into automatic tools for sentiment analysis. We perform an analysis of the effect of sarcasm scope on the polarity of tweets, and have compiled a number of rules which enable us to improve the accuracy of sentiment analysis when sarcasm is known to be present. We consider in particular the effect of sentiment and sarcasm contained in hashtags, and have developed a hashtag tokeniser for GATE, so that sentiment and sarcasm found within hashtags can be detected more easily. According to our experiments, the hashtag tokenisation achieves 98% Precision, while the sarcasm detection achieved 91% Precision and polarity detection 80%.",
"This paper describes a computerised database of psycholinguistic information. Semantic, syntactic, phonological and orthographic information about some or all of the 98,538 words in the database is accessible, by using a specially-written and very simple programming language. Word-association data are also included in the database. Some examples are given of the use of the database for selection of stimuli to be used in psycholinguistic experimentation or linguistic research.",
"To avoid a sarcastic message being understood in its unintended literal meaning, in microtexts such as messages on Twitter.com sarcasm is often explicitly marked with the hashtag ‘#sarcasm’. We collected a training corpus of about 78 thousand Dutch tweets with this hashtag. Assuming that the human labeling is correct (annotation of a sample indicates that about 85% of these tweets are indeed sarcastic), we train a machine learning classifier on the harvested examples, and apply it to a test set of a day’s stream of 3.3 million Dutch tweets. Of the 135 explicitly marked tweets on this day, we detect 101 (75%) when we remove the hashtag. We annotate the top of the ranked list of tweets most likely to be sarcastic that do not have the explicit hashtag. 30% of the top-250 ranked tweets are indeed sarcastic. Analysis shows that sarcasm is often signalled by hyperbole, using intensifiers and exclamations; in contrast, non-hyperbolic sarcastic messages often receive an explicit marker. We hypothesize that explicit markers such as hashtags are the digital extralinguistic equivalent of nonverbal expressions that people employ in live interaction when conveying sarcasm.",
"",
"Sarcasm is a form of speech act in which the speakers convey their message in an implicit way. The inherently ambiguous nature of sarcasm sometimes makes it hard even for humans to decide whether an utterance is sarcastic or not. Recognition of sarcasm can benefit many sentiment analysis NLP applications, such as review summarization, dialogue systems and review ranking systems. In this paper we experiment with semi-supervised sarcasm identification on two very different data sets: a collection of 5.9 million tweets collected from Twitter, and a collection of 66000 product reviews from Amazon. Using the Mechanical Turk we created a gold standard sample in which each sentence was tagged by 3 annotators, obtaining F-scores of 0.78 on the product reviews dataset and 0.83 on the Twitter dataset. We discuss the differences between the datasets and how the algorithm uses them (e.g., for the Amazon dataset the algorithm makes use of structured information). We also discuss the utility of Twitter #sarcasm hashtags for the task.",
"",
"A common form of sarcasm on Twitter consists of a positive sentiment contrasted with a negative situation. For example, many sarcastic tweets include a positive sentiment, such as “love” or “enjoy”, followed by an expression that describes an undesirable activity or state (e.g., “taking exams” or “being ignored”). We have developed a sarcasm recognizer to identify this type of sarcasm in tweets. We present a novel bootstrapping algorithm that automatically learns lists of positive sentiment phrases and negative situation phrases from sarcastic tweets. We show that identifying contrasting contexts using the phrases learned through bootstrapping yields improved recall for sarcasm recognition."
]
} |
1611.09799 | 2952627838 | This paper proposes a simple test for compositionality (i.e., literal usage) of a word or phrase in a context-specific way. The test is computationally simple, relying on no external resources and only uses a set of trained word vectors. Experiments show that the proposed method is competitive with state of the art and displays high accuracy in context-specific compositionality detection of a variety of natural language phenomena (idiomaticity, sarcasm, metaphor) for different datasets in multiple languages. The key insight is to connect compositionality to a curious geometric property of word embeddings, which is of independent interest. | @cite_25 proposes a detection method according to lexical imaginability, topic chaining and semantic clustering. Their method is also based on the linguistic resource of MRCPD. @cite_16 focuses on Subject-Verb-Object and Adjective-Noun structures, and uses word abstractness and imaginability as well as supersenses as features for metaphor detection. Besides MRCPD, they also have recourse to WordNet for word supersenses. | {
"cite_N": [
"@cite_16",
"@cite_25"
],
"mid": [
"2126530744",
"163945276"
],
"abstract": [
"We show that it is possible to reliably discriminate whether a syntactic construction is meant literally or metaphorically using lexical semantic features of the words that participate in the construction. Our model is constructed using English resources, and we obtain state-of-the-art performance relative to previous work in this language. Using a model transfer approach by pivoting through a bilingual dictionary, we show our model can identify metaphoric expressions in other languages. We provide results on three new test sets in Spanish, Farsi, and Russian. The results support the hypothesis that metaphors are conceptual, rather than lexical, in nature.",
"The reliable automated identification of metaphors still remains a challenge in metaphor research due to ambiguity between semantic and contextual interpretation of individual lexical items. In this article, we describe a novel approach to metaphor identification which is based on three intersecting methods: imageability, topic chaining, and semantic clustering. Our hypothesis is that metaphors are likely to use highly imageable words that do not generally have a topical or semantic association with the surrounding context. Our method is thus the following: (1) identify the highly imageable portions of a paragraph, using psycholinguistic measures of imageability, (2) exclude imageability peaks that are part of a topic chain, and (3) exclude imageability peaks that show a semantic relationship to the main topics. We are currently working towards fully automating this method for a number of languages."
]
} |
1611.10022 | 2955101264 | Most requirements engineering (RE) process improvement approaches are solution-driven and activity-based. They focus on the assessment of the RE of a company against an external norm of best practices. A consequence is that practitioners often have to rely on an improvement approach that skips a profound problem analysis and that results in an RE approach that might be alien to the organisational needs. In recent years, we have developed an RE improvement approach (called ) that guides a holistic RE improvement against individual goals of a company putting primary attention to the quality of the artefacts. In this paper, we aim at exploring ArtREPI's benefits and limitations. We contribute an industrial evaluation of ArtREPI by relying on a case study research. Our results suggest that ArtREPI is well-suited for the establishment of an RE that reflects a specific organisational culture but to some extent at the cost of efficiency resulting from intensive discussions on a terminology that suits all involved stakeholders. Our results reveal first benefits and limitations, but we can also conclude the need of longitudinal and independent investigations for which we herewith lay the foundation. | In the literature, there exist mostly solution-driven contributions @cite_3 . R-CMM, proposed by @cite_10 , is a prominent representative of these approaches. It is based on CMMI and an empirical investigation in twelve companies @cite_11 . The investigation revealed patterns and best practices based on problems experienced by practitioners. Therefore, it aimed at a generalised, external notion of RE quality. A technical validation using an expert panel @cite_17 further illustrates selected success criteria, such as understandability. 
Approaches of this category focus on a solution-driven benchmarking of the maturity of RE according to a specific norm of best practices and may thus lead to the problems described in the introduction (see also @cite_1 @cite_6 for richer investigations). | {
"cite_N": [
"@cite_11",
"@cite_1",
"@cite_3",
"@cite_6",
"@cite_10",
"@cite_17"
],
"mid": [
"2099942748",
"2019967993",
"2149518434",
"2963686848",
"2072576780",
"1996220078"
],
"abstract": [
"In this paper we discuss our study of the problems 12 software companies experienced in software development. In total we present qualitative data collected from 45 focus groups that involved over 200 software staff. We look at how different practitioner groups respond to software process improvement problems. We show our classification and analysis of this data using correspondence analysis. Correspondence analysis is a graphical data representation method new to software development research. The aim of the work we present is to develop a more holistic understanding of the problems practitioners are experiencing in their attempts to improve their software processes. Our main finding is that there is an association between a company’s capability maturity and patterns of reported problems. Organizational problems are more associated with high maturity companies than with low maturity companies. Low maturity companies are closely linked to problems relating directly to projects such as documentation, timescales, tools and technology. Our findings also confirm differences in practitioner group problems. Senior managers cite problems with goals, culture and politics. Project managers are concerned with timescales, change management, budgets and estimates. Developers are experiencing problems with requirements, testing, documentation, communication, tools and technology. These associations are displayed graphically through correspondence analysis maps.",
"This paper explores why organizations do not adopt CMMI (Capability Maturity Model Integration), by analysing two months of sales data collected by an Australian company selling CMMI appraisal and improvement services. The most frequent reasons given by organizations were: the organization was small; the services were too costly, the organization had no time, and the organization was using another SPI approach. Overall, we found small organizations not adopting CMMI tend to say that adopting it would be infeasible, but do not say it would be unbeneficial. We comment on the significance of our findings and research method for SPI research.",
"Requirements engineering (RE) is a key discipline in software development and several methods are available to help assess and improve RE processes. However, these methods rely on prescriptive models of RE; they do not, like other disciplines within software engineering, draw directly on stakeholder perceptions and subjective judgments. Given this backdrop, we present an empirical study in RE process assessment. Our aim was to investigate how stakeholder perceptions and process prescriptions can be combined during assessments to effectively inform RE process improvement. We first describe existing methods for RE process assessment and the role played by stakeholder perceptions and subjective judgments in the software engineering and management literature. We then present a method that combines perceptions and prescriptions in RE assessments together with an industrial case study in which the method was applied and evaluated over a three-year period at TelSoft. The data suggest that the combined method led to a comprehensive and rich assessment and it helped TelSoft consider RE as an important and integral part of the broader engineering context. This, in turn, led to improvements that combined plan-driven and adaptive principles for RE. Overall, the combined method helped TelSoft move from Level 1 to Level 2 in RE maturity, and the employees perceived the resulting engineering practices to be improved. Based on these results, we suggest that software managers and researchers combine stakeholder perceptions and process prescriptions as one way to effectively balance the specificity, comparability, and accuracy of software process assessments.",
"Abstract Context For many years, we have observed industry struggling in defining a high quality requirements engineering (RE) and researchers trying to understand industrial expectations and problems. Although we are investigating the discipline with a plethora of empirical studies, they still do not allow for empirical generalisations. Objective To lay an empirical and externally valid foundation about the state of the practice in RE, we aim at a series of open and reproducible surveys that allow us to steer future research in a problem-driven manner. Method We designed a globally distributed family of surveys in joint collaborations with different researchers and completed the first run in Germany. The instrument is based on a theory in the form of a set of hypotheses inferred from our experiences and available studies. We test each hypothesis in our theory and identify further candidates to extend the theory by correlation and Grounded Theory analysis. Results In this article, we report on the design of the family of surveys, its underlying theory, and the full results obtained from Germany with participants from 58 companies. The results reveal, for example, a tendency to improve RE via internally defined qualitative methods rather than relying on normative approaches like CMMI. We also discovered various RE problems that are statistically significant in practice. For instance, we could corroborate communication flaws or moving targets as problems in practice. Our results are not yet fully representative but already give first insights into current practices and problems in RE, and they allow us to draw lessons learnt for future replications. Conclusion Our results obtained from this first run in Germany make us confident that the survey design and instrument are well-suited to be replicated and, thereby, to create a generalisable empirical basis of RE in practice.",
"Both software organisations and the academic community are aware that the requirements phase of software development is in need of further support. We address this problem by creating a specialised Requirements Capability Maturity Model (R-CMM1). The model focuses on the requirements engineering process as defined within the established Software Engineering Institute's (SEI's) software process improvement framework. Our empirical work with software practitioners is a primary motivation for creating this requirements engineering process improvement model. Although all organisations in our study were involved in software process improvement (SPI), they all showed a lack of control over many requirement engineering activities. This paper describes how the requirements engineering (RE) process is decomposed and prioritised in accordance with maturity goals set by the SEI's Software Capability Maturity Model (SW CMM). Our R-CMM builds on the SEI's framework by identifying and defining recommended RE sub-processes that meet maturity goals. This new focus will help practitioners to define their RE process with a view to setting realistic goals for improvement.",
"In this paper we present components of a newly developed software process improvement model that aims to represent key practices in requirements engineering (RE). Our model is developed in response to practitioner needs highlighted in our empirical work with UK software development companies. We have now reached the stage in model development where we need some independent feedback as to how well our model meets our objectives. We perform this validation through involving a group of software process improvement and RE experts in examining our RE model components and completing a detailed questionnaire. A major part of this paper is devoted to explaining our validation methodology. There is very little in the literature that directly relates to how process models have been validated, therefore providing this transparency will benefit both the research community and practitioners. The validation methodology and the model itself contribute towards a better understanding of modelling RE processes."
]
} |
1611.10022 | 2955101264 | Most requirements engineering (RE) process improvement approaches are solution-driven and activity-based. They focus on the assessment of the RE of a company against an external norm of best practices. A consequence is that practitioners often have to rely on an improvement approach that skips a profound problem analysis and that results in an RE approach that might be alien to the organisational needs. In recent years, we have developed an RE improvement approach (called ) that guides a holistic RE improvement against individual goals of a company putting primary attention to the quality of the artefacts. In this paper, we aim at exploring ArtREPI's benefits and limitations. We contribute an industrial evaluation of ArtREPI by relying on a case study research. Our results suggest that ArtREPI is well-suited for the establishment of an RE that reflects a specific organisational culture but to some extent at the cost of efficiency resulting from intensive discussions on a terminology that suits all involved stakeholders. Our results reveal first benefits and limitations, but we can also conclude the need of longitudinal and independent investigations for which we herewith lay the foundation. | In response to their shortcoming, contributed an approach to problem-driven RE improvement @cite_4 called the iFLAP approach. Same as in ArtREPI, they make use of qualitative methods for the problem analysis and postulate the importance of strong stakeholder involvement. Although their concepts are promising to conduct a problem-driven REPI, the consequential next steps, i.e. the actual improvement realisation by crafting a new RE reference model, was not in scope of their contribution. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2094488960"
],
"abstract": [
"Software process improvement (SPI) is challenging, particularly for small and medium sized enterprises. Most existing SPI frameworks are either too expensive to deploy, or do not take an organizations' specific needs into consideration. There is a need for light weight SPI frameworks that enable practitioners to base improvement efforts on the issues that are the most critical for the specific organization. This paper presents a step-by-step guide to process assessment and improvement planning using improvement framework utilizing light weight assessment and improvement planning (iFLAP), aimed at practitioners undertaking SPI initiatives. In addition to the guide itself the industrial application of iFLAP is shown through two industrial cases. iFLAP is a packaged improvement framework, containing both assessment and improvement planning capabilities, explicitly developed to be light weight in nature. Assessment is performed by eliciting improvements issues based on the organization's experience and knowledge. The findings are validated through triangulation utilizing multiple data sources. iFLAP actively involves practitioners in prioritizing improvement issues and identifying dependencies between them in order to package improvements, and thus establish a, for the organization, realistic improvement plan. The two cases of iFLAP application in industry are presented together with lessons learned in order to exemplify actual use of the framework as well as challenges encountered."
]
} |
1611.10022 | 2955101264 | Most requirements engineering (RE) process improvement approaches are solution-driven and activity-based. They focus on the assessment of the RE of a company against an external norm of best practices. A consequence is that practitioners often have to rely on an improvement approach that skips a profound problem analysis and that results in an RE approach that might be alien to the organisational needs. In recent years, we have developed an RE improvement approach (called ) that guides a holistic RE improvement against individual goals of a company putting primary attention to the quality of the artefacts. In this paper, we aim at exploring ArtREPI's benefits and limitations. We contribute an industrial evaluation of ArtREPI by relying on a case study research. Our results suggest that ArtREPI is well-suited for the establishment of an RE that reflects a specific organisational culture but to some extent at the cost of efficiency resulting from intensive discussions on a terminology that suits all involved stakeholders. Our results reveal first benefits and limitations, but we can also conclude the need of longitudinal and independent investigations for which we herewith lay the foundation. | In @cite_16 , we first introduced the basic concepts of ArtREPI and its design science principles. Since then, we realised our approach using the EPF Composer as a means of technical validation and made all material (models, process documentation, document templates, and evaluation instruments) publicly available @cite_18 to support the dissemination. In a previous short paper @cite_9 , we then briefly reported on initial experiences from an ongoing case study. In the paper at hand, we report on the now completed case study in detail, including the case study design, the results covering a second case, and the implications the results have. | {
"cite_N": [
"@cite_9",
"@cite_18",
"@cite_16"
],
"mid": [
"177853461",
"",
"1496688783"
],
"abstract": [
"Requirements engineering process improvement (REPI) has gained much attention in research and practice. Most REPI approaches are of solution-driven and activity-based nature. They focus on the assessment of company-specific RE reference models against an external norm of best practices, and they propagate an improvement by forecasting the adaptation of the processes and methods in the RE reference model towards that norm. In recent years, we could develop a first problem-driven RE improvement approach that supports an improvement against individual goals and problems of a company putting primary attention to the quality of the RE artefacts (named ArtREPI). In this short paper, we briefly illustrate our resulting approach and report on our initial experiences from ongoing empirical evaluations in practice. We conclude with a summary of planned next steps.",
"",
"The importance of continuously improving requirements engineering (RE) has been recognised for many years. Similar to available software process improvement approaches, most RE improvement approaches focus on a normative and solution-driven assessment of companies rather than on a problem-driven RE improvement. The approaches dictate the implementation of a one-size-fits-all reference model without doing a proper problem investigation first, whereas the notion of quality factually depends on whether RE achieves company-specific goals. The approaches furthermore propagate process areas and methods, without proper awareness of the quality in the created artefacts on which the quality of many development phases rely. Little knowledge exists about how to conduct a problem-driven RE improvement that gives attention to the improvement of the artefacts. A promising solution is to start an improvement with an empirical investigation of the RE stakeholders, goals, and artefacts in the company to identify problems while abstracting from inherently complex processes. The RE improvement is then defined and implemented in joint action research workshops with the stakeholders to validate potential solutions while again concentrating on the artefacts. In this paper, we contribute an artefact-based, problem-driven RE improvement approach that emerged from a series of completed RE improvements. We discuss lessons learnt and present first results from an ongoing empirical evaluation at a German company. Our results suggest that our approach supports process engineers in a problem-driven RE improvement, but we need deeper examination of the resulting RE company standard, which is in scope of the final evaluation."
]
} |
1611.10024 | 1983179643 | The various influences in the processes and application domains make requirements engineering (RE) inherently complex and difficult to implement. In general, we have two options for establishing an RE approach: We can either establish an activity-based RE approach, or we can establish an artefact-based one where project participants concentrate on the RE artefacts rather than on the way of creating them. While a number of activity-based RE approaches have been proposed in recent years, we have gained much empirical evidence and experiences about the advantages of the artefact-based paradigm for RE. However, artefact orientation is still a young paradigm with various interpretations and practical manifestations whereby we need a clear understanding of its basic concepts and a consolidated and evaluated view on the paradigm. In this article, we contribute an artefact-based approach to RE [artefact model for domain-independent RE (AMDiRE)] that emerges from 6 years of experiences in fundamental and evidence-based research. To this end, we first discuss the basic notion of artefact orientation and its evolution in recent years. We briefly introduce a set of artefact-based RE models we developed in industrial research cooperations for different application domains and show their empirical evaluations and their dissemination into academia and practice, eventually leading to the AMDiRE approach. We conclude with a discussion of experiences we made during the development and different industrial evaluations, and lessons learnt. | Activity orientation is based on the idea of providing an RE reference model as an ordered set of activities and methods, each defining procedures and techniques for a particular purpose @cite_50 , from which project participants can select the appropriate one to design their project-specific RE process. Each activity, e.g. how to apply use cases @cite_6 , is performed by a particular role that creates the corresponding artefact type, e.g.
the requirements specification. Each of those techniques is then placed into a particular sequence of application and used to specify the RE results @cite_41 . | {
"cite_N": [
"@cite_41",
"@cite_6",
"@cite_50"
],
"mid": [
"2033161635",
"",
"2124405605"
],
"abstract": [
"Dominated by the behavioral science approach for a long time, information systems research increasingly acknowledges design science as a complementary approach. While primarily information systems instantiations, but also constructs and models have been discussed quite comprehensively, the design of methods is addressed rarely. But methods appear to be of utmost importance particularly for organizational engineering. This paper justifies method construction as a core approach to organizational engineering. Based on a discussion of fundamental scientific positions in general and approaches to information systems research in particular, appropriate conceptualizations of 'method' and 'method construction' are presented. These conceptualizations are then discussed regarding their capability of supporting organizational engineering. Our analysis is located on a meta level: Method construction is conceptualized and integrated from a large number of references. Method instantiations or method engineering approaches however are only referenced and not described in detail.",
"",
"This paper presents an overview of the field of software systems requirements engineering (RE). It describes the main areas of RE practice, and highlights some key open research issues for the future."
]
} |
1611.10024 | 1983179643 | The various influences in the processes and application domains make requirements engineering (RE) inherently complex and difficult to implement. In general, we have two options for establishing an RE approach: We can either establish an activity-based RE approach, or we can establish an artefact-based one where project participants concentrate on the RE artefacts rather than on the way of creating them. While a number of activity-based RE approaches have been proposed in recent years, we have gained much empirical evidence and experiences about the advantages of the artefact-based paradigm for RE. However, artefact orientation is still a young paradigm with various interpretations and practical manifestations whereby we need a clear understanding of its basic concepts and a consolidated and evaluated view on the paradigm. In this article, we contribute an artefact-based approach to RE [artefact model for domain-independent RE (AMDiRE)] that emerges from 6 years of experiences in fundamental and evidence-based research. To this end, we first discuss the basic notion of artefact orientation and its evolution in recent years. We briefly introduce a set of artefact-based RE models we developed in industrial research cooperations for different application domains and show their empirical evaluations and their dissemination into academia and practice, eventually leading to the AMDiRE approach. We conclude with a discussion of experiences we made during the development and different industrial evaluations, and lessons learnt. | Although the importance of a well-defined artefact model is recognised in the area of activity orientation @cite_51 , the definition of artefacts, their contents, and especially their dependencies is not in scope of available approaches. 
@cite_41 discovered that only 50 Considering the absence of strong empirical work in the area of activity orientation @cite_42 and, thus, following a purely argumentative line of reasoning, activity-oriented approaches still have difficulty overcoming the problem of providing a means to support a flexible RE process that guides the creation of consistent RE artefacts. In contrast, when following the principles of artefact orientation, we are supposed to define an RE reference model by defining the artefacts, their contents, and their dependencies rather than dictating the way of creating the artefacts, thus supporting flexibility in the process and the creation of detailed, consistent RE artefacts. First evidence for the benefits of artefact orientation is provided by industrial case studies that evaluate both paradigms in a comparative manner, e.g. @cite_14 (see also Sect. ). | {
"cite_N": [
"@cite_41",
"@cite_14",
"@cite_42",
"@cite_51"
],
"mid": [
"2033161635",
"1973931357",
"2030636553",
"1599980315"
],
"abstract": [
"Dominated by the behavioral science approach for a long time, information systems research increasingly acknowledges design science as a complementary approach. While primarily information systems instantiations, but also constructs and models have been discussed quite comprehensively, the design of methods is addressed rarely. But methods appear to be of utmost importance particularly for organizational engineering. This paper justifies method construction as a core approach to organizational engineering. Based on a discussion of fundamental scientific positions in general and approaches to information systems research in particular, appropriate conceptualizations of 'method' and 'method construction' are presented. These conceptualizations are then discussed regarding their capability of supporting organizational engineering. Our analysis is located on a meta level: Method construction is conceptualized and integrated from a large number of references. Method instantiations or method engineering approaches however are only referenced and not described in detail.",
"[Background:] Nowadays, industries are facing the problem that the Requirements Engineering (RE) process is highly volatile, since it depends on project influences from the customer's domain or from process models used. Artefact-based approaches promise to provide guidance in the creation of consistent artefacts in volatile project environments, because these approaches concentrate on the artefacts and their dependencies, instead of prescribing processes. Yet missing, however, is empirical evidence on the advantages of applying artefact-based RE approaches in real projects. [Aim:] We developed a customisable artefact-based RE approach for the domain of business information systems. Our goal is to investigate the advantages and limitations of applying this customisable approach in an industrial context. [Method:] We conduct a case study with our artefact-based RE approach and its customisation procedure. For this, we apply it at a software development project at Siemens following the steps of the customisation procedure. We assess our approach in direct comparison with the previously used RE approach considering possible improvements in the process and in the quality of the produced artefacts. [Results:] We show that our approach is flexible enough to respond to the individual needs in the analysed project environment. Although the approach is not rated to be more productive, we find an improvement in the syntactic and the semantic quality of the created artefacts. [Conclusions:] We close a gap in the RE literature by giving empirical evidence on the advantages of artefact orientation in RE in an industrial setting.",
"Although software process proposals appear continuously, it is difficult to fit any of them into a given company as they are. Thus, some kind of adaptation or tailoring is always necessary. The goal of software process tailoring is to adapt an \"off-the-shelf\" software process to meet the needs of a specific organization or project. Although process tailoring is a mandatory activity in most software process proposals, it is usually carried out by following an ad-hoc approach, and the amount of research done on this topic to date can be considered small. This paper presents a systematic review of software process tailoring, analyzing the existing approaches towards this activity, discussing the main issues related to the problem, and providing an up-to-date and complete framework in which to position new research activities.",
"This article presents a model for projects that have to adhere to Enterprise Architecture (EA) in order for their results to be aligned with the broader organization. The model features project artifacts (i.e. deliverables such as Software Architecture Documents), their mutual relationships, their relationship with EA, and the processes in which they are created and tested on conformance. We start with applying Activity Theory to show the crucial mediating role that artifacts have in projects and to identify and justify the new EA-related artifacts we introduce. We subsequently incorporate these findings and existing best practices in a standard systems development approach in order to create a practical model that projects can apply for EA conformance. This model features both new, dedicated EA artifacts, and well-known existing artifacts of which we describe the way they should conform to EA. Finally, two action research studies are used to empirically support the model."
]
} |
1611.10024 | 1983179643 | The various influences in the processes and application domains make requirements engineering (RE) inherently complex and difficult to implement. In general, we have two options for establishing an RE approach: We can either establish an activity-based RE approach, or we can establish an artefact-based one where project participants concentrate on the RE artefacts rather than on the way of creating them. While a number of activity-based RE approaches have been proposed in recent years, we have gained much empirical evidence and experiences about the advantages of the artefact-based paradigm for RE. However, artefact orientation is still a young paradigm with various interpretations and practical manifestations whereby we need a clear understanding of its basic concepts and a consolidated and evaluated view on the paradigm. In this article, we contribute an artefact-based approach to RE [artefact model for domain-independent RE (AMDiRE)] that emerges from 6 years of experiences in fundamental and evidence-based research. To this end, we first discuss the basic notion of artefact orientation and its evolution in recent years. We briefly introduce a set of artefact-based RE models we developed in industrial research cooperations for different application domains and show their empirical evaluations and their dissemination into academia and practice, eventually leading to the AMDiRE approach. We conclude with a discussion of experiences we made during the development and different industrial evaluations, and lessons learnt. | First content-related dependencies resulting from refinement and decomposition in the modelling concepts are provided by [BPKR09, chp. 2]. These cover the basic concepts previously developed in a research co-operation between Siemens Corporate Research and Technische Universität München (TUM) @cite_12 (see also Sect. ).
They provide an RE artefact model and name the key components for measurable RE artefacts, include a first process guideline, and suggest practices for their elaboration. | {
"cite_N": [
"@cite_12"
],
"mid": [
"2401907287"
],
"abstract": [
"The following technical report of the Technische Universität München (TUM) describes the Requirements Engineering Reference Model (REM). The purposes of REM are: (1) to define a reference model for RE that provides the core set of RE artifacts (work products) and their interdependencies, and (2) to guide the establishment and maintenance of product- and project-specific RE processes. REM is the result of long-term research and industrial cooperations in the fields of systems and software engineering, model-based requirements engineering and process definition of the chair of Software and Systems Engineering at the TUM."
]
} |
1611.10024 | 1983179643 | The various influences in the processes and application domains make requirements engineering (RE) inherently complex and difficult to implement. In general, we have two options for establishing an RE approach: We can either establish an activity-based RE approach, or we can establish an artefact-based one where project participants concentrate on the RE artefacts rather than on the way of creating them. While a number of activity-based RE approaches have been proposed in recent years, we have gained much empirical evidence and experiences about the advantages of the artefact-based paradigm for RE. However, artefact orientation is still a young paradigm with various interpretations and practical manifestations whereby we need a clear understanding of its basic concepts and a consolidated and evaluated view on the paradigm. In this article, we contribute an artefact-based approach to RE [artefact model for domain-independent RE (AMDiRE)] that emerges from 6 years of experiences in fundamental and evidence-based research. To this end, we first discuss the basic notion of artefact orientation and its evolution in recent years. We briefly introduce a set of artefact-based RE models we developed in industrial research cooperations for different application domains and show their empirical evaluations and their dissemination into academia and practice, eventually leading to the AMDiRE approach. We conclude with a discussion of experiences we made during the development and different industrial evaluations, and lessons learnt. | A meta model for our proposed paradigm is provided in @cite_0 . Over the years, we have instantiated this meta model for different domains of applications where the resulting artefact models have been evaluated and disseminated to practice. A discussion of those models is provided in Sect. . 
The models all had different contents, but they relied on the same notion of artefact orientation, which we introduce in the following. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2121843002"
],
"abstract": [
"Requirements Engineering (RE) processes are highly volatile due to dependencies on customers' capabilities or used process models, both complicating a standardised RE process. A promising solution is given by artefact orientation that emphasises the results rather than dictating a strict development process. At such a basis one is able to incorporate domain-specific methods for producing artefacts without having to take into account the variability of process definitions. Although artefacts are known to support customisable development processes, there still is no common agreement about the structure and semantics of artefact-based methodologies. In this paper we discuss different interpretations of the term artefact considering aspects like process integration capabilities and necessities within individual project environments. We contribute a meta model for artefact-orientation that is inferred from two RE models elaborated within industrial cooperation projects of our research group. We conclude with a discussion of performed case studies and ongoing work."
]
} |
1611.09769 | 2949966227 | Oral lesions are important findings on computed tomography (CT) images. In this study, a fully automatic method to detect oral lesions in the mandibular region from dental CT images is proposed. Two methods were developed to recognize two types of lesions, namely (1) Close border (CB) lesions and (2) Open border (OB) lesions, which cover most of the lesion types that can be found on CT images. For the detection of CB lesions, fifteen features were extracted from each initial lesion candidate and a multi-layer perceptron (MLP) neural network was used to classify suspicious regions. Moreover, OB lesions were detected using a rule-based image processing method, where no feature extraction or classification algorithms were used. The results were validated using a CT dataset of 52 patients, where 22 patients had abnormalities and 30 patients were normal. Using a non-training dataset, the CB detection algorithm yielded 71% sensitivity with 0.31 false positives per patient. Furthermore, the OB detection algorithm achieved 100% sensitivity with 0.13 false positives per patient. Results suggest that the proposed framework, which consists of two methods, has the potential to be used in a clinical context and assist radiologists for better diagnosis. | (2012) @cite_10 proposed a CAD system that measures the cortical width of the mandible continuously to identify women with low bone mineral density (BMD) from dental panoramic images. The algorithm was developed using a support vector machine classifier, where images of 60 women were used for system training and 40 were used in testing. Results showed that the system is promising for identifying low skeletal BMD. (2013) @cite_9 also proposed a similar work for measuring mandibular cortical width with a 2.8 mm threshold. The algorithm showed 90% (2011) @cite_2 developed a CAD method to differentiate various metastatic lesions present in the human jawbones from Dental CT images.
They developed a method to find the most discriminative texture features from a region of interest, and compared support vector machine (SVM) and neural network classifiers for classification among different bone groups. They achieved an overall classification accuracy of 95% | {
"cite_N": [
"@cite_9",
"@cite_10",
"@cite_2"
],
"mid": [
"2030582211",
"2020940363",
""
],
"abstract": [
"Purpose Mandibular cortical width (MCW) measured on dental panoramic radiographs (DPRs) was significantly correlated with bone mineral density. We developed a computer-aided diagnosis scheme that automatically measures MCW to assist dentists in describing a possible osteoporotic risk and suggesting further examinations.",
"Background Early diagnosis of osteoporosis can potentially decrease the risk of fractures and improve the quality of life. Detection of thin inferior cortices of the mandible on dental panoramic radiographs could be useful for identifying postmenopausal women with low bone mineral density (BMD) or osteoporosis. The aim of our study was to assess the diagnostic efficacy of using kernel-based support vector machine (SVM) learning regarding the cortical width of the mandible on dental panoramic radiographs to identify postmenopausal women with low BMD.",
""
]
} |
1611.09464 | 2558623550 | This paper presents a method to predict the future movements (location and gaze direction) of basketball players as a whole from their first person videos. The predicted behaviors reflect an individual physical space that affords to take the next actions while conforming to social behaviors by engaging in joint attention. Our key innovation is to use the 3D reconstruction of multiple first person cameras to automatically annotate each other's visual semantics of social configurations. We leverage two learning signals uniquely embedded in first person videos. Individually, a first person video records the visual semantics of a spatial and social layout around a person that allows associating with similar past situations. Collectively, first person videos follow joint attention that can link the individuals to a group. We learn the egocentric visual semantics of group movements using a Siamese neural network to retrieve future trajectories. We consolidate the retrieved trajectories from all players by maximizing a measure of social compatibility---the gaze alignment towards joint attention predicted by their social formation, where the dynamics of joint attention is learned by a long-term recurrent convolutional network. This allows us to characterize which social configuration is more plausible and predict future group trajectories. | A group as a whole naturally creates a distinctive geometry of social formation that accommodates its social activity, e.g., a street busker's performance surrounded by a crowd with a half-circular formation. Therefore, the formation can be a key indicator to classify the type of social configurations that influence individual behaviors with respect to the group.
For instance, Kendon's F-formation theory @cite_2 characterizes the spatial arrangements of a social group, which can be used to identify social interactions in an image @cite_7 , and its validity is empirically proven using a large social interaction dataset @cite_11 . In dynamic social scenes, the formation enables re-identifying a group of people in a crowd from non-overlapping camera views @cite_12 , and the progression of formation change can be learned via inverse reinforcement learning @cite_45 and discriminative analysis (LSTM) @cite_29 . | {
"cite_N": [
"@cite_7",
"@cite_29",
"@cite_45",
"@cite_2",
"@cite_12",
"@cite_11"
],
"mid": [
"2010236945",
"2424778531",
"2336387144",
"2067589702",
"",
"1912797782"
],
"abstract": [
"We present a novel approach for detecting social interactions in a crowded scene by employing solely visual cues. The detection of social interactions in unconstrained scenarios is a valuable and important task, especially for surveillance purposes. Our proposal is inspired by the social signaling literature, and in particular it considers the sociological notion of F-formation. An F-formation is a set of possible configurations in space that people may assume while participating in a social interaction. Our system takes as input the positions of the people in a scene and their (head) orientations; then, employing a voting strategy based on the Hough transform, it recognizes F-formations and the individuals associated with them. Experiments on simulations and real data promote our idea.",
"Pedestrians follow different trajectories to avoid obstacles and accommodate fellow pedestrians. Any autonomous vehicle navigating such a scene should be able to foresee the future positions of pedestrians and accordingly adjust its path to avoid collisions. This problem of trajectory prediction can be viewed as a sequence generation task, where we are interested in predicting the future trajectory of people based on their past positions. Following the recent success of Recurrent Neural Network (RNN) models for sequence prediction tasks, we propose an LSTM model which can learn general human movement and predict their future trajectories. This is in contrast to traditional approaches which use hand-crafted functions such as Social forces. We demonstrate the performance of our method on several public datasets. Our model outperforms state-of-the-art methods on some of these datasets. We also analyze the trajectories predicted by our model to demonstrate the motion behaviour learned by our model.",
"We develop predictive models of pedestrian dynamics by encoding the coupled nature of multi-pedestrian interaction using game theory, and deep learning-based visual analysis to estimate person-specific behavior parameters. Building predictive models for multi-pedestrian interactions, however, is very challenging for two reasons: (1) the dynamics of interaction are complex interdependent processes, where the predicted behavior of one pedestrian can affect the actions taken by others and (2) dynamics are variable depending on an individual's physical characteristics (e.g., an older person may walk slowly while a younger person may walk faster). To address these challenges, we (1) utilize concepts from game theory to model the interdependent decision making process of multiple pedestrians and (2) use visual classifiers to learn a mapping from pedestrian appearance to behavior parameters. We evaluate our proposed model on several public multiple pedestrian interaction video datasets. Results show that our strategic planning model explains human interactions 25% better when compared to state-of-the-art methods.",
"Preface 1. Introduction 2. Some context for Context Analysis: a view of the origins of structural studies of face-to-face interaction 3. Some functions of gaze direction in two-person conversation 4. Movement co-ordination in social interaction 5. Some functions of the face in a kissing round 6. A description of some human greetings 7. Spatial organisation in social encounters: the F-formation system 8. Behavioural foundations for the process of frame-attunement in face-to-face interaction List of films cited References Index.",
"",
"This paper presents a method to predict social saliency, the likelihood of joint attention, given an input image or video by leveraging the social interaction data captured by first person cameras. Inspired by electric dipole moments, we introduce a social formation feature that encodes the geometric relationship between joint attention and its social formation. We learn this feature from the first person social interaction data where we can precisely measure the locations of joint attention and its associated members in 3D. An ensemble classifier is trained to learn the geometric relationship. Using the trained classifier, we predict social saliency in real-world scenes with multiple social groups including scenes from team sports captured in a third person view. Our representation does not require directional measurements such as gaze directions. A geometric analysis of social interactions in terms of the F-formation theory is also presented."
]
} |
1611.09485 | 2953908388 | We consider a problem of dispersing points on disjoint intervals on a line. Given n pairwise disjoint intervals sorted on a line, we want to find a point in each interval such that the minimum pairwise distance of these points is maximized. Based on a greedy strategy, we present a linear time algorithm for the problem. Further, we also solve in linear time the cycle version of the problem where the intervals are given on a cycle. | To the best of our knowledge, we have not found any previous work on the two problems studied in this paper. Our problems essentially belong to a family of geometric dispersion problems, which are NP-hard in general in two and higher dimensional space. For example, Baur and Fekete @cite_9 studied the problems of distributing a number of points within a polygonal region such that the points are dispersed far away from each other, and they showed that the problems cannot be approximated arbitrarily well in polynomial time, unless P=NP. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2174353807"
],
"abstract": [
"We consider problems of distributing a number of points within a polygonal region P, such that the points are far away from each other. Problems of this type have been considered before for the case where the possible locations form a discrete set. Dispersion problems are closely related to packing problems. While Hochbaum and Maass [20] have given a polynomial-time approximation scheme for packing, we show that geometric dispersion problems cannot be approximated arbitrarily well in polynomial time, unless P = NP. A special case of this observation solves an open problem by [31]. We give a 2/3 approximation algorithm for one version of the geometric dispersion problem. This algorithm is strongly polynomial in the size of the input, i.e., its running time does not depend on the area of P. We also discuss extensions and open problems."
]
} |
1611.09485 | 2953908388 | We consider a problem of dispersing points on disjoint intervals on a line. Given n pairwise disjoint intervals sorted on a line, we want to find a point in each interval such that the minimum pairwise distance of these points is maximized. Based on a greedy strategy, we present a linear time algorithm for the problem. Further, we also solve in linear time the cycle version of the problem where the intervals are given on a cycle. | Wang and Kuo @cite_14 considered the following two problems. Given a set @math of points and a value @math , find a largest subset of @math in which the distance of any two points is at least @math . Given a set @math of points and an integer @math , find a subset of @math points of @math to maximize the minimum distance of all pairs of points in the subset. It was shown in @cite_14 that both problems in 2D are NP-hard but can be solved efficiently in 1D. Refer to @cite_11 @cite_15 @cite_16 @cite_0 @cite_6 for other geometric dispersion problems. Dispersion problems in various non-geometric settings were also considered @cite_12 @cite_13 @cite_7 @cite_3 @cite_2 . These problems are in general NP-hard; approximation and heuristic algorithms were proposed for them. | {
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_6",
"@cite_3",
"@cite_0",
"@cite_2",
"@cite_15",
"@cite_16",
"@cite_13",
"@cite_12",
"@cite_11"
],
"mid": [
"1975512711",
"2154629615",
"2064943293",
"1562565457",
"2077035981",
"1965734368",
"1986998000",
"2013451793",
"1488563335",
"2026626919",
"2104589295"
],
"abstract": [
"",
"Most optimization problems focus on efficiency-based objectives. Given the increasing awareness of system inequity resulting from solely pursuing efficiency, we conceptualize a number of new element-based equity-oriented measures in the dispersion context. We propose the equitable dispersion problem that maximizes the equity among elements based on the introduced measures in a system defined by inter-element distances. Given the proposed optimization framework, we develop corresponding mathematical programming formulations as well as their mixed-integer linear reformulations. We also discuss computational complexity issues, related graph-theoretic interpretations and provide some preliminary computational results.",
"Abstract The problem of finding the maximum diameter of n equal mutually disjoint circles inside a unit square is addressed in this paper. Exact solutions exist for only n = 1, …, 9,10,16,25,36 while for other n only conjectural solutions have been reported. In this work a max-min optimization approach is introduced which matches the best reported solutions in the literature for all n ⩽ 30, yields a better configuration for n = 15, and provides new results for n = 28 and 29.",
"Facility dispersion problem deals with the location of facilities on a network so as to maximize some function of the distances between facilities. We consider the problem under two different optimality criteria, namely maximizing the minimum distance (MAX-MIN) between any pair of facilities and maximizing the average distance (MAX-AVG) between any pair of facilities. Under either criterion, the problem is known to be NP-hard, even when the distances satisfy the triangle inequality. We consider the question of obtaining near-optimal solutions. For the MAX-MIN criterion, we show that if the distances do not satisfy the triangle inequality, there is no polynomial time relative approximation algorithm unless P=NP. When the distances do satisfy the triangle inequality, we present an efficient heuristic which provides a performance guarantee of 2, thus improving the performance guarantee of 3 proven in [Wh91]. We also prove that obtaining a performance guarantee of less than 2 is NP-hard. For the MAX-AVG criterion, we present a heuristic which provides a performance guarantee of 4, provided that the distances satisfy the triangle inequality. For the 1-dimensional dispersion problem, we provide polynomial time algorithms for obtaining optimal solutions under both MAX-MIN and MAX-AVG criteria. Using the latter algorithm, we obtain a heuristic which provides a performance guarantee of 4( ( 2 - 1 )) ≈ 1.657 for the 2-dimensional dispersion problem under the MAX-AVG criterion.",
"What is the densest packing of points in an infinite strip of widthw, where any two of the points must be separated by distance at least I? This question was raised by Fejes-Toth a number of years ago. The answer is trivial for MathType!MTEF!2!1!+- feaafiart1ev1aaatCvAUfeBSjuyZL2yd9gzLbvyNv2CaerbuLwBLn hiov2DGi1BTfMBaeXatLxBI9gBaerbd9wDYLwzYbItLDharqqtubsr 4rNCHbGeaGqiVu0Je9sqqrpepC0xbbL8F4rqqrFfpeea0xe9Lq-Jc9 vqaqpepm0xbba9pwe9Q8fs0-yqaqpepae9pg0FirpepeKkFr0xfr-x fr-xb9adbaqaaeGaciGaaiaabeqaamaabaabaaGcbaacbiGaa83Dai abgsMiJoaakaaabaGaaG4maaWcbeaakiaac+cacaaIYaaaaa!3AF6! @math and, surprisingly, it is not difficult to prove [M2] for MathType!MTEF!2!1!+- feaafiart1ev1aaatCvAUfeBSjuyZL2yd9gzLbvyNv2CaerbuLwBLn hiov2DGi1BTfMBaeXatLxBI9gBaerbd9wDYLwzYbItLDharqqtubsr 4rNCHbGeaGqiVu0Je9sqqrpepC0xbbL8F4rqqrFfpeea0xe9Lq-Jc9 vqaqpepm0xbba9pwe9Q8fs0-yqaqpepae9pg0FirpepeKkFr0xfr-x fr-xb9adbaqaaeGaciGaaiaabeqaamaabaabaaGcbaacbiGaa83Dai aa-1dacaWGUbWaaOaaaeaacaaIZaaaleqaaOGaai4laiaaikdaaaa!3AF2! @math , wheren is a positive integer, that the regular triangular lattice gives the optimal packing. Kertesz [K] solved the case MathType!MTEF!2!1!+- feaafiart1ev1aaatCvAUfeBSjuyZL2yd9gzLbvyNv2CaerbuLwBLn hiov2DGi1BTfMBaeXatLxBI9gBaerbd9wDYLwzYbItLDharqqtubsr 4rNCHbGeaGqiVu0Je9sqqrpepC0xbbL8F4rqqrFfpeea0xe9Lq-Jc9 vqaqpepm0xbba9pwe9Q8fs0-yqaqpepae9pg0FirpepeKkFr0xfr-x fr-xb9adbaqaaeGaciGaaiaabeqaamaabaabaaGcbaacbiGaa83DaG qaaiaa+bcacaGF8aWaaOaaaeaacaaIYaaaleqaaaaa!392C! @math . Here we fill the first gap, i.e., the maximal density is determined for MathType!MTEF!2!1!+- feaafiart1ev1aaatCvAUfeBSjuyZL2yd9gzLbvyNv2CaerbuLwBLn hiov2DGi1BTfMBaeXatLxBI9gBaerbd9wDYLwzYbItLDharqqtubsr 4rNCHbGeaGqiVu0Je9sqqrpepC0xbbL8F4rqqrFfpeea0xe9Lq-Jc9 vqaqpepm0xbba9pwe9Q8fs0-yqaqpepae9pg0FirpepeKkFr0xfr-x fr-xb9adbaqaaeGaciGaaiaabeqaamaabaabaaGcbaWaaOaaaeaaie aacaWFZaaaleqaaOGaai4laiaaikdacqGH8aapcaWG3bGaeyizIm6a aOaaaeaacaaIZaaaleqaaaaa!3CCB! @math .",
"The dispersion problem arises in selecting facilities to maximize some function of the distances between the facilities. The problem also arises in selecting nondominated solutions for multiobjective decision making. It is known to be NP-hard under two objectives: maximizing the minimum distance (MAX-MIN) between any pair of facilities and maximizing the average distance (MAX-AVG). We consider the question of obtaining near-optimal solutions. for MAX-MIN, we show that if the distances do not satisfy the triangle inequality, there is no polynomial-time relative approximation algorithm unless P = NP. When the distances satisfy the triangle inequality, we analyze an efficient heuristic and show that it provides a performance guarantee of two. We also prove that obtaining a performance guarantee of less than two is NP-hard. for MAX-AVG, we analyze an efficient heuristic and show that it provides a performance guarantee of four when the distances satisfy the triangle inequality. We also present a polynomial-ti...",
"Considered is the problem of selecting p out of n given points in some space, such that the minimum distance between pairs of selected points is maximized. This objective may be appropriate if the selected points correspond to facility sites and the objective is to have as ‘dispersed’ a set as possible. This problem is NP-complete. Related graph theoretical problems are discussed, integer programming models are proposed, and an outline is given on a line search procedure to solve this problem optimally. Using a branch-and-bound procedure, problems with n = 40 and p = 16 can be solved on a microcomputer. The heuristic, developed to get an initial lower bound, finds an optimal solution for most of our random test problems. Also described is an extension to the basic problem that allows for preselected points, which may correspond to existing facility locations. This more general version can be solved by slight modifications of the algorithms.",
"",
"The Generalized Maximum Dispersion problem asks for a partition of a given graph into pvertex-disjoint sets, each of them having at most kvertices. The goal is to maximize the total edge-weight of the induced subgraphs. We present the first LP-based approximation algorithm.",
"In the maximum dispersion problem, a given set of objects has to be partitioned into a number of groups. Each object has a non-negative weight and each group has a target weight, which may be different for each group. In addition to meeting the target weight of each group, all objects assigned to the same group should be as dispersed as possible with respect to some distance measure between pairs of objects. Potential applications for this problem come from such diverse fields as the problem of creating study groups or the design of waste collection systems. We develop and compare two different (mixed-) integer linear programming formulations for the problem. We also study a specific relaxation that enables us to derive tight bounds that improve the effectiveness of the formulations. Thereby, we obtain an upper bound by finding in an auxiliary graph subsets of given size with minimal diameter. A lower bound is derived based on the relation of the optimal solution of the relaxation to the chromatic number of a series of auxiliary graphs. Finally, we propose an exact solution scheme for the maximum dispersion problem and present extensive computational experiments to assess its efficiency.",
"We consider the following packing problem. Let α be a fixed real in (0, 1]. We are given a bounding rectangle ρ and a set of n possibly intersecting unit disks whose centers lie in ρ. The task is to pack a set of m disjoint disks of radius α into ρ such that no disk in B intersects a disk in , where m is the maximum number of unit disks that can be packed. In this paper we present a polynomial-time algorithm for α = 2 3. So far only the case of packing squares has been considered. For that case, Baur and Fekete have given a polynomial-time algorithm for α = 2 3 and have shown that the problem cannot be solved in polynomial time for any α > 13 14 unless ."
]
} |
1611.09485 | 2953908388 | We consider a problem of dispersing points on disjoint intervals on a line. Given n pairwise disjoint intervals sorted on a line, we want to find a point in each interval such that the minimum pairwise distance of these points is maximized. Based on a greedy strategy, we present a linear time algorithm for the problem. Further, we also solve in linear time the cycle version of the problem where the intervals are given on a cycle. | On the other hand, problems on intervals usually have applications in other areas. For example, some problems on intervals are related to scheduling because the time period between the release time and the deadline of a job or task in scheduling problems can be considered as an interval on the line. From the interval point of view, @cite_17 studied the following problem on intervals: Given @math intervals on a line, determine whether it is possible to find a unit-length sub-interval in each input interval, such that these sub-intervals do not intersect. An @math time algorithm was given in @cite_17 for this problem. The optimization version of the above problem was also studied @cite_18 @cite_10 , where the goal is to find a maximum number of intervals that contain non-intersecting unit-length sub-intervals. @cite_18 gave an @math time algorithm for the problem, and later Vakhania @cite_10 improved the algorithm to @math time. The online version of the problem was also considered @cite_4 . Other optimization problems on intervals have also been considered, e.g., see @cite_17 @cite_1 @cite_5 @cite_8 . | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_8",
"@cite_1",
"@cite_5",
"@cite_10",
"@cite_17"
],
"mid": [
"2047204419",
"1923579438",
"2070961414",
"2025231649",
"2161635161",
"2079322303",
"2055849951"
],
"abstract": [
"We consider the following scheduling problem. The input is a set of jobs with equal processing times, where each job is specified by its release time and deadline. The goal is to determine a single-processor nonpreemptive schedule that maximizes the number of completed jobs. In the online version, each job arrives at its release time. We give two online algorithms with competitive ratios below @math and show several lower bounds on the competitive ratios. First, we give a barely random @math -competitive algorithm that uses only one random bit. We also show a lower bound of @math on the competitive ratio of barely random algorithms that randomly choose one of two deterministic algorithms. If the two algorithms are selected with equal probability, we can further improve the bound to @math . Second, we give a deterministic @math -competitive algorithm in the model that allows restarts, and we show that in this model the ratio @math is optimal. For randomized algorithms with restarts we show a lower bound of @math .",
"The main aim of this note is to show that a polynomial-time algorithm for the scheduling problem 1|r j ; p j = p| ∑ Uj given by Carlier in (1981) is incorrect. In this problem we are given n jobs with release times and deadlines. All jobs have the same processing time p. The objective is to find a non-preemptive schedule that maximizes the number of jobs completed by their deadlines. The feasibility version of this problem, where we ask whether all jobs can meet their deadlines, has been studied thoroughly. Polynomial-time algorithms for this version were first found, independently, by Simons (1978) and Carlier (1981). A faster algorithm, with running time O(n log n), was subsequently given by (1981). The elegant feasibility algorithm of Carlier (1981) is based on dynamic programming and it processes jobs from left to right on the time-axis. For each time t, it constructs a partial schedule with jobs that complete at or before time t. Carlier also attempted to apply the same technique to design a polynomial-time algorithm for the maximization version, 1|r j ; p j = p| ∑ Uj , and claimed an O(n3 log n)-time algorithm. His result is now widely cited in the literature. We show, however, that this algorithm is not correct, by giving an instance on which it produces a sub-optimal schedule. Our counter-example can be, in fact, extended to support a broader claim, namely that even the general approach from Carlier (1981) cannot yield a polynomial-time algorithm. By this general approach we mean a class of algorithms that processes the input from left to right and make decisions based on the deadline ordering, and not their exact values. The question remains as to how efficiently can we solve the scheduling problem 1|r j ; p j = p| ∑ Uj . Baptiste (1999) gave an O(n7)-time algorithm for the more general version of this problem where jobs have weights. We show how to modify his algorithm to obtain a faster, O(n5)-time algorithm for the non-weighted case. 
These last two results are discussed only briefly in this note. The complete proofs can be found in the full version of this paper, see (2004).",
"We consider the problem of scheduling jobs with given release times and due dates on a single machine to minimize the maximum job lateness. It is NP-hard and remains such if the maximal job processing time is unrestricted and there is no constant bound on the difference between any job release times. We give a polynomial-time solution of the version in which the maximal job processing time and the differences between the job release times are bounded by a constant, which are restrictions that naturally arise in practice. Our algorithm reveals the inherent structure of the problem and also gives conditions when it is able to find an optimal solution unconditionally.",
"",
"",
"We study inherent structural properties of a strongly NP-hard problem of scheduling @math jobs with release times and due dates on a single machine to minimize the number of late jobs. Our study leads to two polynomial-time algorithms. The first algorithm with the time complexity @math solves the problem if during its execution no job with some special property occurs. The second algorithm solves the version of the problem when all jobs have the same length. The time complexity of the latter algorithm is @math , which is an improvement over the earlier known algorithm with the time complexity @math .",
"The basic problem considered is that of scheduling n unit-time tasks, with arbitrary release times and deadlines, so as to minimize the maximum task completion time. Previous work has shown that this problem can be solved rather easily when all release times are integers. We are concerned with the general case in which noninteger release times are allowed, a generalization that considerably increases the difficulty of the problem even for only a single processor. Our results are for the one-processor case, where we provide an @math algorithm based on the concept of “forbidden regions”."
]
} |
1611.09723 | 2557284390 | We establish mean-field limits for large-scale random-access networks with buffer dynamics and arbitrary interference graphs. While saturated-buffer scenarios have been widely investigated and yield useful throughput estimates for persistent sessions, they fail to capture the fluctuations in buffer contents over time, and provide no insight in the delay performance of flows with intermittent packet arrivals. Motivated by that issue, we explore in the present paper random-access networks with buffer dynamics, where flows with empty buffers refrain from competition for the medium. The occurrence of empty buffers thus results in a complex dynamic interaction between activity states and buffer contents, which severely complicates the performance analysis. Hence we focus on a many-sources regime where the total number of nodes grows large, which not only offers mathematical tractability but is also highly relevant with the densification of wireless networks as the Internet of Things emerges. We exploit time scale separation properties to prove that the properly scaled buffer occupancy process converges to the solution of a deterministic initial-value problem, and establish the existence and uniqueness of the associated fixed point. This approach simplifies the performance analysis of networks with huge numbers of nodes to a low-dimensional fixed-point calculation. For the case of a complete interference graph, we demonstrate asymptotic stability, provide a simple closed-form expression for the fixed point, and prove interchange of the mean-field and steady-state limits. This yields asymptotically exact approximations for key performance metrics, in particular the stationary buffer content and packet delay distributions. The methodological framework that we develop easily extends to various model refinements as will be illustrated by several examples. 
| In the performance analysis of CSMA networks, a common assumption is the existence of an underlying graph that represents interference between the various nodes in the network. An edge between two nodes means that destructive interference is caused by simultaneous transmission. Both empirical and theoretical support for the notion of an interference graph is provided in @cite_28 @cite_36 . | {
"cite_N": [
"@cite_28",
"@cite_36"
],
"mid": [
"1998893700",
"2109542432"
],
"abstract": [
"Efficient use of a wireless network requires that transmissions be grouped into feasible sets, where feasibility means that each transmission can be successfully decoded in spite of the interference caused by simultaneous transmissions. Feasibility is most closely modeled by a signal-to-interference-plus-noise (SINR) formula, which unfortunately is conceptually complicated, being an asymmetric, cumulative, many-to-one relationship. We re-examine how well graphs can capture wireless receptions as encoded in SINR relationships, placing them in a framework in order to understand the limits of such modelling. We seek for each wireless instance a pair of graphs that provide upper and lower bounds on the feasibility relation, while aiming to minimize the gap between the two graphs. The cost of a graph formulation is the worst gap over all instances, and the price of (graph) abstraction is the smallest cost of a graph formulation. We propose a family of conflict graphs that is parameterized by a non-decreasing sub-linear function, and show that with a judicious choice of functions, the graphs can capture feasibility with a cost of O(log* Δ), where Δ is the ratio between the longest and the shortest link length. This holds on the plane and more generally in doubling metrics. We use this to give greatly improved O(log* Δ)-approximation for fundamental link scheduling problems with arbitrary power control. We also explore the limits of graph representations and find that our upper bound is tight: the price of graph abstraction is Ω(log* Δ). In addition, we give strong impossibility results for general metrics, and for approximations in terms of the number of links.",
"Most spectrum distribution proposals today develop their allocation algorithms that use conflict graphs to capture interference relationships. The use of conflict graphs, however, is often questioned by the wireless community because of two issues. First, building conflict graphs requires significant overhead and hence generally does not scale to outdoor networks, and second, the resulting conflict graphs do not capture accumulative interference. In this paper, we use large-scale measurement data as ground truth to understand just how severe these issues are in practice, and whether they can be overcome. We build \"practical\" conflict graphs using measurement-calibrated propagation models, which remove the need for exhaustive signal measurements by interpolating signal strengths using calibrated models. These propagation models are imperfect, and we study the impact of their errors by tracing the impact on multiple steps in the process, from calibrating propagation models to predicting signal strength and building conflict graphs. At each step, we analyze the introduction, propagation and final impact of errors, by comparing each intermediate result to its ground truth counterpart generated from measurements. Our work produces several findings. Calibrated propagation models generate location-dependent prediction errors, ultimately producing conservative conflict graphs. While these \"estimated conflict graphs\" lose some spectrum utilization, their conservative nature improves reliability by reducing the impact of accumulative interference. Finally, we propose a graph augmentation technique that addresses any remaining accumulative interference, the last missing piece in a practical spectrum distribution system using measurement-calibrated conflict graphs."
]
} |
1611.09723 | 2557284390 | We establish mean-field limits for large-scale random-access networks with buffer dynamics and arbitrary interference graphs. While saturated-buffer scenarios have been widely investigated and yield useful throughput estimates for persistent sessions, they fail to capture the fluctuations in buffer contents over time, and provide no insight in the delay performance of flows with intermittent packet arrivals. Motivated by that issue, we explore in the present paper random-access networks with buffer dynamics, where flows with empty buffers refrain from competition for the medium. The occurrence of empty buffers thus results in a complex dynamic interaction between activity states and buffer contents, which severely complicates the performance analysis. Hence we focus on a many-sources regime where the total number of nodes grows large, which not only offers mathematical tractability but is also highly relevant with the densification of wireless networks as the Internet of Things emerges. We exploit time scale separation properties to prove that the properly scaled buffer occupancy process converges to the solution of a deterministic initial-value problem, and establish the existence and uniqueness of the associated fixed point. This approach simplifies the performance analysis of networks with huge numbers of nodes to a low-dimensional fixed-point calculation. For the case of a complete interference graph, we demonstrate asymptotic stability, provide a simple closed-form expression for the fixed point, and prove interchange of the mean-field and steady-state limits. This yields asymptotically exact approximations for key performance metrics, in particular the stationary buffer content and packet delay distributions. The methodological framework that we develop easily extends to various model refinements as will be illustrated by several examples. 
| When the nodes always have packets to transmit, the network is said to be saturated and the macroscopic activity behavior is amenable to analysis under the assumption of an interference graph. In particular, the activity process has an elegant product-form stationary distribution @cite_17 @cite_38 @cite_11 . The computation of the stationary distribution of the activity process reduces to the identification of all the subsets of nodes which may transmit simultaneously, namely the independent sets of the interference graph. | {
"cite_N": [
"@cite_38",
"@cite_11",
"@cite_17"
],
"mid": [
"2096448929",
"2125890347",
"2029650413"
],
"abstract": [
"This work started out with our discovery of a pattern of throughput distributions among links in IEEE 802.11 networks from experimental results. This pattern gives rise to an easy computation method, which we term back-of-the-envelop (BoE) computation. For many network configurations, very accurate results can be obtained by BoE within minutes, if not seconds, by simple hand computation. This allows us to make shortcuts in performance evaluation, bypassing complicated stochastic analysis. To explain BoE, we construct a theory based on the model of an “ideal CSMA network” (ICN). The BoE computation method emerges from ICN when we take the limit c → 0, where c is the ratio of the mean backoff countdown time to the mean transmission time in the CSMA protocol. Importantly, we derive a new mathematical result: the link throughputs of ICN are insensitive to the distributions of the backoff countdown time and transmission time (packet duration) given the ratio of their means c. This insensitivity result explains why BoE works so well for practical 802.11 networks, in which the backoff countdown process is one that has memory, and in which the packet size can be arbitrarily distributed. Our results indicate that BoE is a good approximation technique for modest-size networks such as those typically seen in 802.11 deployments. Beyond explaining BoE, the theoretical framework of ICN is also a foundation for fundamental understanding of very-large-scale CSMA networks. In particular, ICN is similar to the Ising model in statistical physics used to explain phenomena arising out of the interactions of a large number of entities. Many new research directions arise out of the ICN model.",
"Random-access algorithms such as the Carrier-Sense Multiple-Access (CSMA) protocol provide a popular mechanism for distributed medium access control in large-scale wireless networks. In recent years, fairly tractable models have been shown to yield remarkably accurate throughput estimates for CSMA networks. These models typically assume that both the transmission durations and the back-off periods are exponentially distributed. We show that the stationary distribution of the system is in fact insensitive with respect to the transmission durations and the back-off times. These models primarily pertain to a saturated scenario where nodes always have packets to transmit. In reality however, the buffers may occasionally be empty as packets are randomly generated and transmitted over time. The resulting interplay between the activity states and the buffer contents gives rise to quite complicated queueing dynamics, and even establishing the stability criteria is usually a serious challenge. We explicitly identify the stability conditions in a few relevant scenarios, and illustrate the difficulties arising in other cases.",
"In this paper, we use a Markov model to develop a product form solution to efficiently analyze the throughput of arbitrary topology multihop packet radio networks that employ a carrier sensing multiple access (CSMA) protocol with perfect capture. We consider both exponential and nonexponential packet length distributions. Our method preserves the dependence between nodes, characteristic of CSMA, and determines the joint probability that nodes are transmitting. The product form analysis provides the basis for an automated algorithm that determines the maximum throughput in networks of size up to 100 radio nodes. Numerical examples for several networks are presented. This model has led to many theoretical and practical extensions. These include determination of conditions for product form analysis to hold, extension to other access protocols, and consideration of acknowledgments."
]
} |
1611.09723 | 2557284390 | We establish mean-field limits for large-scale random-access networks with buffer dynamics and arbitrary interference graphs. While saturated-buffer scenarios have been widely investigated and yield useful throughput estimates for persistent sessions, they fail to capture the fluctuations in buffer contents over time, and provide no insight in the delay performance of flows with intermittent packet arrivals. Motivated by that issue, we explore in the present paper random-access networks with buffer dynamics, where flows with empty buffers refrain from competition for the medium. The occurrence of empty buffers thus results in a complex dynamic interaction between activity states and buffer contents, which severely complicates the performance analysis. Hence we focus on a many-sources regime where the total number of nodes grows large, which not only offers mathematical tractability but is also highly relevant with the densification of wireless networks as the Internet of Things emerges. We exploit time scale separation properties to prove that the properly scaled buffer occupancy process converges to the solution of a deterministic initial-value problem, and establish the existence and uniqueness of the associated fixed point. This approach simplifies the performance analysis of networks with huge numbers of nodes to a low-dimensional fixed-point calculation. For the case of a complete interference graph, we demonstrate asymptotic stability, provide a simple closed-form expression for the fixed point, and prove interchange of the mean-field and steady-state limits. This yields asymptotically exact approximations for key performance metrics, in particular the stationary buffer content and packet delay distributions. The methodological framework that we develop easily extends to various model refinements as will be illustrated by several examples. | Real-life scenarios however involve unsaturated networks. 
Packets arrive at the various nodes according to exogenous processes, and buffers may drain from time to time as packets are transmitted. In particular, in IoT applications, sources are likely to generate packets only sporadically, with fairly tight delay constraints, and often have empty buffers. Since empty nodes temporarily refrain from the medium competition, the activity process is strictly intertwined with the buffer content process. In this situation, the product-form solution no longer holds @cite_1 @cite_11 and an exact stationary analysis does not seem tractable. Furthermore, not even stability conditions for the queueing dynamics in the nodes' buffers are known, let alone results for the stationary distribution of the buffer content process. | {
"cite_N": [
"@cite_1",
"@cite_11"
],
"mid": [
"2118762973",
"2125890347"
],
"abstract": [
"Abstract Random-access algorithms such as the Carrier-Sense Multiple-Access (CSMA) protocol provide a popular mechanism for distributed medium access control in large-scale wireless networks. In recent years fairly tractable models have been shown to yield remarkably accurate throughput estimates in scenarios with saturated buffers. In contrast, in non-saturated scenarios, where nodes refrain from competition for the medium when their buffers are empty, a complex two-way interaction arises between the activity states and the buffer contents of the various nodes. As a result, the throughput characteristics in such scenarios have largely remained elusive so far. In the present paper we provide a generic structural characterization of the throughput performance and corresponding stability region in terms of the individual saturation throughputs of the various nodes. While the saturation throughputs are difficult to explicitly determine in general, we identify certain cases where these values can be expressed in closed form. In addition, we demonstrate that various lower-dimensional facets of the stability region can be explicitly calculated as well, depending on the neighborhood structure of the interference graph. Illustrative examples and numerical results are presented to illuminate the main analytical findings.",
"Random-access algorithms such as the Carrier-Sense Multiple-Access (CSMA) protocol provide a popular mechanism for distributed medium access control in large-scale wireless networks. In recent years, fairly tractable models have been shown to yield remarkably accurate throughput estimates for CSMA networks. These models typically assume that both the transmission durations and the back-off periods are exponentially distributed. We show that the stationary distribution of the system is in fact insensitive with respect to the transmission durations and the back-off times. These models primarily pertain to a saturated scenario where nodes always have packets to transmit. In reality however, the buffers may occasionally be empty as packets are randomly generated and transmitted over time. The resulting interplay between the activity states and the buffer contents gives rise to quite complicated queueing dynamics, and even establishing the stability criteria is usually a serious challenge. We explicitly identify the stability conditions in a few relevant scenarios, and illustrate the difficulties arising in other cases."
]
} |
1611.09723 | 2557284390 | We establish mean-field limits for large-scale random-access networks with buffer dynamics and arbitrary interference graphs. While saturated-buffer scenarios have been widely investigated and yield useful throughput estimates for persistent sessions, they fail to capture the fluctuations in buffer contents over time, and provide no insight in the delay performance of flows with intermittent packet arrivals. Motivated by that issue, we explore in the present paper random-access networks with buffer dynamics, where flows with empty buffers refrain from competition for the medium. The occurrence of empty buffers thus results in a complex dynamic interaction between activity states and buffer contents, which severely complicates the performance analysis. Hence we focus on a many-sources regime where the total number of nodes grows large, which not only offers mathematical tractability but is also highly relevant with the densification of wireless networks as the Internet of Things emerges. We exploit time scale separation properties to prove that the properly scaled buffer occupancy process converges to the solution of a deterministic initial-value problem, and establish the existence and uniqueness of the associated fixed point. This approach simplifies the performance analysis of networks with huge numbers of nodes to a low-dimensional fixed-point calculation. For the case of a complete interference graph, we demonstrate asymptotic stability, provide a simple closed-form expression for the fixed point, and prove interchange of the mean-field and steady-state limits. This yields asymptotically exact approximations for key performance metrics, in particular the stationary buffer content and packet delay distributions. The methodological framework that we develop easily extends to various model refinements as will be illustrated by several examples. 
| A thorough survey of mean-field analysis of random-access protocols is presented in @cite_24 . The work of Bianchi @cite_7 is a landmark paper which assumed a propagation of chaos property to hold so as to derive tractable formulae for the key performance measures of the system. The papers surveyed in @cite_24 mostly use mean-field theory to provide either evidence for or objections against the propagation of chaos assumption of Bianchi. Among these papers, it is worth mentioning @cite_26 , where the authors investigated the existence of a global attractor for the mean-field system and provided sufficient conditions for its existence, thereby deducing the validity of Bianchi's assumption. Other noteworthy papers are @cite_25 @cite_9 , where the authors exploited mean-field theory to obtain approximations for key performance measures of large systems. In particular, @cite_25 focuses on the characterization of the stability region, while @cite_9 examines the throughput performance of the system. None of the above-mentioned papers considered scenarios with unsaturated buffers, with the exception of @cite_25 , which, however, dealt with systems evolving in discrete time and did not consider performance metrics like packet delays. | {
"cite_N": [
"@cite_26",
"@cite_7",
"@cite_9",
"@cite_24",
"@cite_25"
],
"mid": [
"1968517161",
"2162598825",
"2159222170",
"1411539474",
"2142022499"
],
"abstract": [
"Performance evaluation of the 802.11 MAC protocol is classically based on the decoupling assumption, which hypothesizes that the backoff processes at different nodes are independent. This decoupling assumption results from mean field convergence and is generally true in transient regime in the asymptotic sense (when the number of wireless nodes tends to infinity), but, contrary to widespread belief, may not necessarily hold in stationary regime. The issue is often related with the existence and uniqueness of a solution to a fixed point equation; however, it was also recently shown that this condition is not sufficient; in contrast, a sufficient condition is a global stability property of the associated ordinary differential equation. In this paper, we give a simple condition that establishes the asymptotic validity of the decoupling assumption for the homogeneous case (all nodes have the same parameters). We also discuss the heterogeneous and the differentiated service cases and formulate a new ordinary differential equation. We show that the uniqueness of a solution to the associated fixed point equation is not sufficient; we exhibit one case where the fixed point equation has a unique solution but the decoupling assumption is not valid in the asymptotic sense in stationary regime.",
"The IEEE has standardized the 802.11 protocol for wireless local area networks. The primary medium access control (MAC) technique of 802.11 is called the distributed coordination function (DCF). The DCF is a carrier sense multiple access with collision avoidance (CSMA CA) scheme with binary slotted exponential backoff. This paper provides a simple, but nevertheless extremely accurate, analytical model to compute the 802.11 DCF throughput, in the assumption of finite number of terminals and ideal channel conditions. The proposed analysis applies to both the packet transmission schemes employed by DCF, namely, the basic access and the RTS CTS access mechanisms. In addition, it also applies to a combination of the two schemes, in which packets longer than a given threshold are transmitted according to the RTS CTS mechanism. By means of the proposed model, we provide an extensive throughput performance evaluation of both access mechanisms of the 802.11 protocol.",
"This paper studies the performance of contention based medium access control (MAC) protocols. In particular, a simple and accurate technique for estimating the throughput of the IEEE 802.11 DCF protocol is developed. The technique is based on a rigorous analysis of the Markov chain that corresponds to the time evolution of the back-off processes at the contending nodes. An extension of the technique is presented to handle the case where service differentiation is provided with the use of heterogeneous protocol parameters, as, for example, in IEEE 802.11e EDCA protocol. Our results provide new insights into the operation of such protocols. The techniques developed in the paper are applicable to a wide variety of contention based MAC protocols.",
"In 1998, Giuseppe Bianchi introduced a mean field Markov model of the fundamental medium access control protocol used in Wireless Local Area Networks (WLANs). Due to the model’s intuitive appeal and the accuracy of its predictions, since then there has been a vast body of material published that extends and analyzes models of a similar character. As the majority of this development has taken place within the culture and nomenclature of the telecommunications community, the aim of the present article is to review this work in a way that makes it accessible to probabilists. In doing so, we hope to illustrate why this modeling approach has proved so popular, to explain what is known rigorously, and to draw attention to outstanding questions of a mathematical nature whose solution would be of interest to the telecommunications community. For non-saturated WLANs, these questions include rigorous support for its fundamental decoupling approximation, determination of the properties of the self-consistent equations and the identification of the queueing stability region.",
"Random Medium-Access-Control (MAC) algorithms have played an increasingly important role in the development of wired and wireless Local Area Networks (LANs) and yet the performance of even the simplest of these algorithms, such as slotted-Aloha, are still not clearly understood. In this paper we provide a general and accurate method to analyze networks where interfering users share a resource using random MAC algorithms. We show that this method is asymptotically exact when the number of users grows large, and explain why it also provides extremely accurate performance estimates even for small systems. We apply this analysis to solve two open problems: (a) We address the stability region of non-adaptive Aloha-like systems. Specifically, we consider a fixed number of buffered users receiving packets from independent exogenous processes and accessing the resource using Aloha-like algorithms. We provide an explicit expression to approximate the stability region of this system, and prove its accuracy. (b) We outline how to apply the analysis to predict the performance of adaptive MAC algorithms, such as the exponential back-off algorithm, in a system where saturated users interact through interference. In general, our analysis may be used to quantify how far from optimality the simple MAC algorithms used in LANs today are, and to determine if more complicated (e.g. queue-based) algorithms proposed in the literature could provide significant improvement in performance."
]
} |
1611.09405 | 2558258029 | We propose a single neural network architecture for two tasks: on-line keyword spotting and voice activity detection. We develop novel inference algorithms for an end-to-end Recurrent Neural Network trained with the Connectionist Temporal Classification loss function which allow our model to achieve high accuracy on both keyword spotting and voice activity detection without retraining. In contrast to prior voice activity detection models, our architecture does not require aligned training data and uses the same parameters as the keyword spotting model. This allows us to deploy a high quality voice activity detector with no additional memory or maintenance requirements. | The model is based on work in end-to-end speech recognition which uses the Connectionist Temporal Classification loss function coupled with deep Recurrent Neural Networks @cite_0 @cite_2 . In this work we develop the model and inference procedure for the KWS and VAD tasks. A thorough treatment of the benefits of this model for LVCSR is given in @cite_7 . A character-level CTC architecture was also recently adopted for keyword spotting @cite_4 , where it outperformed a DNN-HMM baseline, while a word-level CTC architecture was used for keyword spotting in @cite_9 . | {
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_9",
"@cite_0",
"@cite_2"
],
"mid": [
"2212465773",
"2949640717",
"1553469512",
"2102113734",
""
],
"abstract": [
"In this paper, we propose a context-aware keyword spotting model employing a character-level recurrent neural network (RNN) for spoken term detection in continuous speech. The RNN is end-to-end trained with connectionist temporal classification (CTC) to generate the probabilities of character and word-boundary labels. There is no need for the phonetic transcription, senone modeling, or system dictionary in training and testing. Also, keywords can easily be added and modified by editing the text based keyword list without retraining the RNN. Moreover, the unidirectional RNN processes an infinitely long input audio streams without pre-segmentation and keywords are detected with low-latency before the utterance is finished. Experimental results show that the proposed keyword spotter significantly outperforms the deep neural network (DNN) and hidden Markov model (HMM) based keyword-filler model even with less computations.",
"We show that an end-to-end deep learning approach can be used to recognize either English or Mandarin Chinese speech--two vastly different languages. Because it replaces entire pipelines of hand-engineered components with neural networks, end-to-end learning allows us to handle a diverse variety of speech including noisy environments, accents and different languages. Key to our approach is our application of HPC techniques, resulting in a 7x speedup over our previous system. Because of this efficiency, experiments that previously took weeks now run in days. This enables us to iterate more quickly to identify superior architectures and algorithms. As a result, in several cases, our system is competitive with the transcription of human workers when benchmarked on standard datasets. Finally, using a technique called Batch Dispatch with GPUs in the data center, we show that our system can be inexpensively deployed in an online setting, delivering low latency when serving users at scale.",
"The goal of keyword spotting is to detect the presence of specific spoken words in unconstrained speech. The majority of keyword spotting systems are based on generative hidden Markov models and lack discriminative capabilities. However, discriminative keyword spotting systems are currently based on frame-level posterior probabilities of sub-word units. This paper presents a discriminative keyword spotting system based on recurrent neural networks only, that uses information from long time spans to estimate word-level posterior probabilities. In a keyword spotting task on a large database of unconstrained speech the system achieved a keyword spotting accuracy of 84.5",
"This paper presents a speech recognition system that directly transcribes audio data with text, without requiring an intermediate phonetic representation. The system is based on a combination of the deep bidirectional LSTM recurrent neural network architecture and the Connectionist Temporal Classification objective function. A modification to the objective function is introduced that trains the network to minimise the expectation of an arbitrary transcription loss function. This allows a direct optimisation of the word error rate, even in the absence of a lexicon or language model. The system achieves a word error rate of 27.3 on the Wall Street Journal corpus with no prior linguistic information, 21.9 with only a lexicon of allowed words, and 8.2 with a trigram language model. Combining the network with a baseline system further reduces the error rate to 6.7 .",
""
]
} |
1611.09394 | 2559332917 | Recognition of materials has proven to be a challenging problem due to the wide variation in appearance within and between categories. Global image context, such as where the material is or what object it makes up, can be crucial to recognizing the material. Existing methods, however, operate on an implicit fusion of materials and context by using large receptive fields as input (i.e., large image patches). Many recent material recognition methods treat materials as yet another set of labels like objects. Materials are, however, fundamentally different from objects as they have no inherent shape or defined spatial extent. Approaches that ignore this can only take advantage of limited implicit context as it appears during training. We instead show that recognizing materials purely from their local appearance and integrating separately recognized global contextual cues including objects and places leads to superior dense, per-pixel, material recognition. We achieve this by training a fully-convolutional material recognition network end-to-end with only material category supervision. We integrate object and place estimates to this network from independent CNNs. This approach avoids the necessity of preparing an impractically-large amount of training data to cover the product space of materials, objects, and scenes, while fully leveraging contextual cues for dense material recognition. Furthermore, we perform a detailed analysis of the effects of context granularity, spatial resolution, and the network level at which we introduce context. On a recently introduced comprehensive and diverse material database Schwartz2016 , we confirm that our method achieves state-of-the-art accuracy with significantly less training data compared to past methods. | The use of context as a means to reduce ambiguity, whether in materials or other cases, appears promising. 
Hu et al. @cite_6 showed that a simple addition of object category predictions as features could potentially improve material recognition. On an unrelated topic, Iizuka et al. @cite_15 use scene place category predictions to improve the accuracy of greyscale image colorization. Our work, in contrast to these previous methods, takes advantage of multiple sources of context and investigates the ideal granularity of context categories. Within the framework of a Convolutional Neural Network (CNN), we evaluate how the hierarchical level at which we introduce context influences the accuracy of the corresponding material predictions. | {
"cite_N": [
"@cite_15",
"@cite_6"
],
"mid": [
"2461158874",
"2075019799"
],
"abstract": [
"We present a novel technique to automatically colorize grayscale images that combines both global priors and local image features. Based on Convolutional Neural Networks, our deep network features a fusion layer that allows us to elegantly merge local information dependent on small image patches with global priors computed using the entire image. The entire framework, including the global and local priors as well as the colorization model, is trained in an end-to-end fashion. Furthermore, our architecture can process images of any resolution, unlike most existing approaches based on CNN. We leverage an existing large-scale scene classification database to train our model, exploiting the class labels of the dataset to more efficiently and discriminatively learn the global priors. We validate our approach with a user study and compare against the state of the art, where we show significant improvements. Furthermore, we demonstrate our method extensively on many different types of images, including black-and-white photography from over a hundred years ago, and show realistic colorizations.",
"Material recognition is a fundamental problem in perception that is receiving increasing attention. Following the recent work using Flickr [16, 23], we empirically study material recognition of real-world objects using a rich set of local features. We use the Kernel Descriptor framework [5] and extend the set of descriptors to include materialmotivated attributes using variances of gradient orientation and magnitude. Large-Margin Nearest Neighbor learning is used for a 30-fold dimension reduction. We improve the state-of-the-art accuracy on the Flickr dataset [16] from 45 to 54 . We also introduce two new datasets using ImageNet and macro photos, extensively evaluating our set of features and showing promising connections between material and object recognition."
]
} |
1611.09498 | 2559387027 | Structure from motion algorithms have an inherent limitation that the reconstruction can only be determined up to the unknown scale factor. Modern mobile devices are equipped with an inertial measurement unit (IMU), which can be used for estimating the scale of the reconstruction. We propose a method that recovers the metric scale given inertial measurements and camera poses. In the process, we also perform a temporal and spatial alignment of the camera and the IMU. Therefore, our solution can be easily combined with any existing visual reconstruction software. The method can cope with noisy camera pose estimates, typically caused by motion blur or rolling shutter artifacts, via utilizing a Rauch-Tung-Striebel (RTS) smoother. Furthermore, the scale estimation is performed in the frequency domain, which provides more robustness to inaccurate sensor time stamps and noisy IMU samples than the previously used time domain representation. In contrast to previous methods, our approach has no parameters that need to be tuned for achieving a good performance. In the experiments, we show that the algorithm outperforms the state-of-the-art in both accuracy and convergence speed of the scale estimate. The accuracy of the scale is around @math from the ground truth depending on the recording. We also demonstrate that our method can improve the scale accuracy of the Project Tango's built-in motion tracking. | The fusion of visual and inertial measurements has been a popular research topic in the robotics community. Most previous systems have focused on real-time tracking and navigation, e.g. @cite_20 @cite_28 @cite_12 @cite_30 @cite_24 @cite_1 @cite_25 @cite_9 @cite_6 @cite_10 . These approaches require tightly integrated sensor fusion, which places requirements on the hardware.
For example, the synchronization of individual video frames and IMU sensor timestamps must be relatively accurate, and the IMUs used are often of notably better quality than standard smartphone IMUs, which are not designed for inertial navigation. In fact, many of the previous approaches utilize special hardware setups. For instance, both @cite_34 and @cite_17 use a similar synchronized IMU and stereo camera hardware prototype. Furthermore, @cite_15 @cite_13 @cite_18 @cite_11 also use custom-made camera-IMU hardware. Finally, perhaps the most well-known example of a specialized hardware platform for visual-inertial odometry is the Google Tango tablet device, which utilizes a fish-eye lens camera @cite_38 . Regarding the Google Tango device, it should be noted that the implementation is proprietary and not openly documented, and hence it is difficult to analyze whether similar performance could be realized with more conventional smartphone hardware. | {
"cite_N": [
"@cite_30",
"@cite_13",
"@cite_18",
"@cite_11",
"@cite_38",
"@cite_28",
"@cite_9",
"@cite_1",
"@cite_6",
"@cite_34",
"@cite_24",
"@cite_15",
"@cite_20",
"@cite_10",
"@cite_25",
"@cite_12",
"@cite_17"
],
"mid": [
"",
"2056298239",
"2411649036",
"",
"",
"",
"",
"",
"",
"2091790851",
"",
"2051349034",
"2620813924",
"",
"",
"",
"2411412724"
],
"abstract": [
"",
"This work investigates the relationship between system observability properties and estimator inconsistency for a Vision-aided Inertial Navigation System (VINS). In particular, first we introduce a new methodology for determining the unobservable directions of nonlinear systems by factorizing the observability matrix according to the observable and unobservable modes. Subsequently, we apply this method to the VINS nonlinear model and determine its unobservable directions analytically. We leverage our analysis to improve the accuracy and consistency of linearized estimators applied to VINS. Our key findings are evaluated through extensive simulations and experimental validation on real-world data, demonstrating the superior accuracy and consistency of the proposed VINS framework compared to standard approaches.",
"The so-called direct visual SLAM methods have shown a great potential in estimating a semidense or fully dense reconstruction of the scene, in contrast to the sparse reconstructions of the traditional feature-based algorithms. In this paper, we propose for the first time a direct, tightly-coupled formulation for the combination of visual and inertial data. Our algorithm runs in real-time on a standard CPU. The processing is split in three threads. The first thread runs at frame rate and estimates the camera motion by a joint non-linear optimization from visual and inertial data given a semidense map. The second one creates a semidense map of high-gradient areas only for camera tracking purposes. Finally, the third thread estimates a fully dense reconstruction of the scene at a lower frame rate. We have evaluated our algorithm in several real sequences with ground truth trajectory data, showing a state-of-the-art performance.",
"",
"",
"",
"",
"",
"",
"Combining visual and inertial measurements has become popular in mobile robotics, since the two sensing modalities offer complementary characteristics that make them the ideal choice for accurate visual-inertial odometry or simultaneous localization and mapping SLAM. While historically the problem has been addressed with filtering, advancements in visual estimation suggest that nonlinear optimization offers superior accuracy, while still tractable in complexity thanks to the sparsity of the underlying problem. Taking inspiration from these findings, we formulate a rigorously probabilistic cost function that combines reprojection errors of landmarks and inertial terms. The problem is kept tractable and thus ensuring real-time operation by limiting the optimization to a bounded window of keyframes through marginalization. Keyframes may be spaced in time by arbitrary intervals, while still related by linearized inertial terms. We present evaluation results on complementary datasets recorded with our custom-built stereo visual-inertial hardware that accurately synchronizes accelerometer and gyroscope measurements with imagery. A comparison of both a stereo and monocular version of our algorithm with and without online extrinsics estimation is shown with respect to ground truth. Furthermore, we compare the performance to an implementation of a state-of-the-art stochastic cloning sliding-window filter. This competitive reference implementation performs tightly coupled filtering-based visual-inertial odometry. While our approach declaredly demands more computation, we show its superior performance in terms of accuracy.",
"",
"In this paper, we study estimator inconsistency in vision-aided inertial navigation systems (VINS) from the standpoint of system's observability. We postulate that a leading cause of inconsistency is the gain of spurious information along unobservable directions, which results in smaller uncertainties, larger estimation errors, and divergence. We develop an observability constrained VINS (OC-VINS), which explicitly enforces the unobservable directions of the system, hence preventing spurious information gain and reducing inconsistency. This framework is applicable to several variants of the VINS problem such as visual simultaneous localization and mapping (V-SLAM), as well as visual-inertial odometry using the multi-state constraint Kalman filter (MSC-KF). Our analysis, along with the proposed method to reduce inconsistency, are extensively validated with simulation trials and real-world experimentation.",
"Obtaining reliable state estimates at high altitude but GPS-denied environments, such as between high-rise buildings or in the middle of deep canyons, is known to be challenging, due to the lack of direct distance measurements. Monocular visual-inertial systems provide a possible way to recover the metric distance through proper integration of visual and inertial measurements. However, the nonlinear optimization problem for state estimation suffers from poor numerical conditioning or even degeneration, due to difficulties in obtaining observations of visual features with sufficient parallax, and the excessive period of inertial measurement integration. In this paper, we propose a spline-based high altitude estimator initialization method for monocular visual-inertial navigation system (VINS) with special attention to the numerical issues. Our formulation takes only inertial measurements that contain sufficient excitation, and drops uninformative measurements such as those obtained during hovering. In addition, our method explicitly reduces the number of parameters to be estimated in order to achieve earlier convergence. Based on the initialization results, a complete closed-loop system is constructed for high altitude navigation. Extensive experiments are conducted to validate our approach.",
"",
"",
"",
"We propose a novel direct visual-inertial odometry method for stereo cameras. Camera pose, velocity and IMU biases are simultaneously estimated by minimizing a combined photometric and inertial energy functional. This allows us to exploit the complementary nature of vision and inertial data. At the same time, and in contrast to all existing visual-inertial methods, our approach is fully direct: geometry is estimated in the form of semi-dense depth maps instead of manually designed sparse keypoints. Depth information is obtained both from static stereo - relating the fixed-baseline images of the stereo camera - and temporal stereo - relating images from the same camera, taken at different points in time. We show that our method outperforms not only vision-only or loosely coupled approaches, but also can achieve more accurate results than state-of-the-art keypoint-based methods on different datasets, including rapid motion and significant illumination changes. In addition, our method provides high-fidelity semi-dense, metric reconstructions of the environment, and runs in real-time on a CPU."
]
} |
1611.09498 | 2559387027 | Structure from motion algorithms have an inherent limitation that the reconstruction can only be determined up to the unknown scale factor. Modern mobile devices are equipped with an inertial measurement unit (IMU), which can be used for estimating the scale of the reconstruction. We propose a method that recovers the metric scale given inertial measurements and camera poses. In the process, we also perform a temporal and spatial alignment of the camera and the IMU. Therefore, our solution can be easily combined with any existing visual reconstruction software. The method can cope with noisy camera pose estimates, typically caused by motion blur or rolling shutter artifacts, via utilizing a Rauch-Tung-Striebel (RTS) smoother. Furthermore, the scale estimation is performed in the frequency domain, which provides more robustness to inaccurate sensor time stamps and noisy IMU samples than the previously used time domain representation. In contrast to previous methods, our approach has no parameters that need to be tuned for achieving a good performance. In the experiments, we show that the algorithm outperforms the state-of-the-art in both accuracy and convergence speed of the scale estimate. The accuracy of the scale is around @math from the ground truth depending on the recording. We also demonstrate that our method can improve the scale accuracy of the Project Tango's build-in motion tracking. | Nevertheless, there are some approaches which utilize standard smartphone sensors for motion tracking and metric reconstruction @cite_37 @cite_32 . In @cite_32 the authors report that the recovered scale was estimated to have an error of up to 10-15 Besides placing specific requirements for the hardware, tightly integrated fusion of visual and inertial measurements is a challenging task and leads to relatively complex designs as one needs to solve two difficult problems, visual odometry and inertial navigation, simultaneously. 
We believe that this complexity partially explains why many of the aforementioned state-of-the-art visual-inertial odometry methods are not available as open-source implementations. | {
"cite_N": [
"@cite_37",
"@cite_32"
],
"mid": [
"1987441863",
"2025572378"
],
"abstract": [
"All existing methods for vision-aided inertial navigation assume a camera with a global shutter, in which all the pixels in an image are captured simultaneously. However, the vast majority of consumer-grade cameras use rolling-shutter sensors, which capture each row of pixels at a slightly different time instant. The effects of the rolling shutter distortion when a camera is in motion can be very significant, and are not modelled by existing visual-inertial motion-tracking methods. In this paper we describe the first, to the best of our knowledge, method for vision-aided inertial navigation using rolling-shutter cameras. Specifically, we present an extended Kalman filter (EKF)-based method for visual-inertial odometry, which fuses the IMU measurements with observations of visual feature tracks provided by the camera. The key contribution of this work is a computationally tractable approach for taking into account the rolling-shutter effect, incurring only minimal approximations. The experimental results from the application of the method show that it is able to track, in real time, the position of a mobile phone moving in an unknown environment with an error accumulation of approximately 0.8 of the distance travelled, over hundreds of meters.",
"In this paper, we propose a complete on-device 3D reconstruction pipeline for mobile monocular hand-held devices, which generates dense 3D models with absolute scale on-site while simultaneously supplying the user with real-time interactive feedback. The method fills a gap in current cloud-based mobile reconstruction services as it ensures at capture time that the acquired image set fulfills desired quality and completeness criteria. In contrast to existing systems, the developed framework offers multiple innovative solutions. In particular, we investigate the usability of the available on-device inertial sensors to make the tracking and mapping process more resilient to rapid motions and to estimate the metric scale of the captured scene. Moreover, we propose an efficient and accurate scheme for dense stereo matching which allows to reduce the processing time to interactive speed. We demonstrate the performance of the reconstruction pipeline on multiple challenging indoor and outdoor scenes of different size and depth variability."
]
} |
1611.09498 | 2559387027 | Structure from motion algorithms have an inherent limitation that the reconstruction can only be determined up to an unknown scale factor. Modern mobile devices are equipped with an inertial measurement unit (IMU), which can be used for estimating the scale of the reconstruction. We propose a method that recovers the metric scale given inertial measurements and camera poses. In the process, we also perform a temporal and spatial alignment of the camera and the IMU. Therefore, our solution can be easily combined with any existing visual reconstruction software. The method can cope with noisy camera pose estimates, typically caused by motion blur or rolling shutter artifacts, via utilizing a Rauch-Tung-Striebel (RTS) smoother. Furthermore, the scale estimation is performed in the frequency domain, which provides more robustness to inaccurate sensor time stamps and noisy IMU samples than the previously used time domain representation. In contrast to previous methods, our approach has no parameters that need to be tuned to achieve good performance. In the experiments, we show that the algorithm outperforms the state-of-the-art in both accuracy and convergence speed of the scale estimate. The accuracy of the scale is around @math from the ground truth depending on the recording. We also demonstrate that our method can improve the scale accuracy of Project Tango's built-in motion tracking. | The closest previous works to ours are the papers by @cite_14 @cite_26 , which address the same problem in a similar context. That is, they also apply off-the-shelf visual tracking software to recover the camera poses up to scale and thereafter fix the metric scale based on inertial measurements. However, in contrast to Ham's approach, we do not require the relative orientation of the camera and IMU to be known a priori. 
Further, we propose estimating the scale by matching accelerations from the visual and inertial sensors in the frequency domain instead of the time domain. We compare our frequency-domain approach both to Ham's original implementation and to our own implementation of Ham's time-domain approach. The results show that our approach achieves better accuracy and faster convergence of the scale estimate. | {
"cite_N": [
"@cite_14",
"@cite_26"
],
"mid": [
"207344062",
"2298348247"
],
"abstract": [
"This paper presents a novel solution to the metric reconstruction of objects using any smart device equipped with a camera and an inertial measurement unit (IMU). We propose a batch, vision centric approach which only uses the IMU to estimate the metric scale of a scene reconstructed by any algorithm with Structure from Motion like (SfM) output. IMUs have a rich history of being combined with monocular vision for robotic navigation and odometry applications. These IMUs require sophisticated and quite expensive hardware rigs to perform well. IMUs in smart devices, however, are chosen for enhancing interactivity - a task which is more forgiving to noise in the measurements. We anticipate, however, that the ubiquity of these “noisy” IMUs makes them increasingly useful in modern computer vision algorithms. Indeed, we show in this work how an IMU from a smart device can help a face tracker to measure pupil distance, and an SfM algorithm to measure the metric size of objects. We also identify motions that produce better results, and develop a heuristic for estimating, in real-time, when enough data has been collected for an accurate scale estimation.",
"This paper presents a novel solution to the metric, scaled reconstruction of objects using any smart device equipped with a camera and an inertial measurement unit (IMU). We propose a batch, vision centric approach which only uses the IMU to estimate the metric scale of a scene reconstructed by any algorithm with Structure from Motion like (SfM) output. IMUs have a rich history of being combined with monocular vision for robotic navigation and odometry applications. These IMUs require sophisticated and quite expensive hardware rigs to perform well. IMUs in smart devices, however, are chosen for enhancing interactivity—a task which is more forgiving to noise in the measurements. We anticipate, however, that the ubiquity of these “noisy” IMUs makes them increasingly useful in modern computer vision algorithms. Indeed, we show in this work how an IMU from a smart device can help a face tracker to measure pupil distance, and an SfM algorithm to measure the metric size of objects. We also identify motions that produce better results and, using a high frame rate camera, gain insight to how the performance of our method is affected by the quality of the tracking output."
]
} |
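The frequency-domain scale matching described in the row above can be illustrated with a minimal sketch. All names and the synthetic signals here are assumptions for illustration only; the actual method additionally handles time-stamp alignment, gravity removal, and RTS smoothing. The key idea is that comparing magnitude spectra of the up-to-scale visual acceleration and the metric IMU acceleration reduces scale recovery to a one-parameter least-squares fit:

```python
import numpy as np

def estimate_scale(visual_acc, imu_acc):
    """Estimate the metric scale s such that s * visual_acc ~ imu_acc,
    by matching acceleration magnitudes in the frequency domain.
    Simplified sketch: real pipelines also align time stamps and
    remove gravity from the IMU signal."""
    # Magnitude spectra are insensitive to small time offsets between sensors.
    V = np.abs(np.fft.rfft(visual_acc))
    A = np.abs(np.fft.rfft(imu_acc))
    # Closed-form least-squares solution of min_s ||s*V - A||^2.
    return float(np.dot(V, A) / np.dot(V, V))

# Synthetic check: the IMU acceleration is a scaled copy of the visual one.
t = np.linspace(0.0, 10.0, 1000)
visual = np.sin(2.0 * np.pi * 1.5 * t)        # up-to-scale acceleration
imu = 2.5 * visual                            # metric acceleration
print(round(estimate_scale(visual, imu), 3))  # → 2.5
```

Working with magnitude spectra rather than raw samples is what gives the robustness to inaccurate time stamps mentioned in the abstract: a small temporal offset changes the phase but not the magnitude of each frequency component.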
1611.09180 | 2559100081 | Real estate appraisal, which is the process of estimating the price for real estate properties, is crucial for both buyers and sellers as the basis for negotiation and transaction. Traditionally, the repeat sales model has been widely adopted to estimate real estate price. However, it depends on the design and calculation of a complex economy-related index, which is challenging to estimate accurately. Today, real estate brokers provide easy access to detailed online information on real estate properties to their clients. We are interested in estimating the real estate price from these large amounts of easily accessed data. In particular, we analyze the prediction power of online house pictures, which is one of the key factors for online users to make a potential visiting decision. The development of robust computer vision algorithms makes the analysis of visual content possible. In this work, we employ a Recurrent Neural Network (RNN) to predict real estate price using the state-of-the-art visual features. The experimental results indicate that our model outperforms several other state-of-the-art baseline algorithms in terms of both mean absolute error (MAE) and mean absolute percentage error (MAPE). | Real estate appraisal has been studied by both real estate industry professionals and academic researchers. Earlier work focused on building price indexes for real properties. The seminal work in @cite_28 built a price index from the repeat sale prices of the same property at different times. They employed regression analysis to build the price index, which showed good performance. Another widely used regression model, Hedonic regression, was developed on the assumption that the characteristics of a house can predict its price @cite_14 @cite_21 . However, it is argued that the Hedonic regression model requires more assumptions in terms of explaining its target @cite_20 . 
They also noted that the main problem with the repeat sales model is a lack of data, which may cause the model to fail. Recent work in @cite_39 employed locations and sale price series to build an autoregressive component. Their model can use both single-sale and repeat-sale homes, which offers a more robust sale price index. | {
"cite_N": [
"@cite_14",
"@cite_28",
"@cite_21",
"@cite_39",
"@cite_20"
],
"mid": [
"1978712773",
"2109109711",
"",
"2120515048",
"2088153536"
],
"abstract": [
"Parametric specifications for hedonic price equations are estimated using a data set from Alameda and San Francisco Counties and are compared to estimates using a nonparametric technique called locally weighted regression, LWR. LWR permits flexible estimation of the hedonic's curvature at median attributes and is less sensitive than standard regression techniques to the influence of unusual observations. The technique also avoids imposing a single functional form across time and municipalities. The LWR estimates of municipality-specific hedonics are then used to obtain implicit prices for housing attributes and to derive municipality-specific price indices. The results of extensive diagnostic checks of our technique are also reported. Copyright American Real Estate and Urban Economics Association.",
"Abstract Quality differences make estimation of price indexes for real properties difficult, but these can be largely avoided by basing an index on sales prices of the same property at different times. The problem of combining price relatives of repeat sales of properties to obtain a price index can be converted into a regression problem, and standard techniques of regression analysis can be used to estimate the index. This method of estimation is more efficient than others for combining price relatives in that it utilizes information about the price index for earlier periods contained in sales prices in later periods. Standard errors of the estimated index numbers can be readily computed using the regression method, and it permits certain effects on the value of real properties to be eliminated from the index.",
"",
"A statistical model for predicting individual house prices is proposed utilizing only information regarding sale price, time of sale, and location (ZIP code). This model is composed of a fixed time effect and a random ZIP (postal) code effect combined with an autoregressive component. The latter piece is applied only to homes sold repeatedly while the former two components are applied to all of the data. In addition, the autoregressive component incorporates heteroscedasticity in the errors. To evaluate the proposed model, single-family home sales for twenty U.S. metropolitan areas from July 1985 through September 2004 are analyzed. The model is shown to have better predictive abilities than the benchmark S&P Case-Shiller model, which is a repeat sales model, and a conventional mixed effects model. It is also shown that the time effect in the proposed model can be converted into a house price index. Finally, the special case of Los Angeles, CA is discussed as an example of history repeating itself in regards to the current housing market meltdown.",
"Abstract Since the seminal work of M. Bailey, R. Muth, and H. Nourse (1963, J. Amer. Statist. Assoc. 58, 933–942), numerous articles have been written about repeat sales and other methods for constructing house price indices. Our justification for producing yet another paper on this subject is to reemphasize fundamentals. We focus on the basic building blocks—asking questions about what the underlying target is, how repeat sales goes about estimating the target, and when a particular index might be used in practice—rather than on more complex, higher level concerns such as statistical or modeling accuracy. We find that much of the debate over index methodology can be distilled to implicit and largely unrecognized disagreement over the desired target or the intended application. Consequently, we contend that paying greater heed to fundamental questions offers significant rewards to both researchers and practitioners."
]
} |
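The repeat sales regression of Bailey, Muth and Nourse (the seminal work cited as @cite_28 above) can be sketched as an ordinary least-squares problem: each pair of sales of the same property at periods t1 < t2 contributes one equation log(p2/p1) = b_{t2} - b_{t1}, where b_t is the log price index for period t. The function name and the toy data below are assumptions for illustration; production indexes (e.g. Case-Shiller) add weighting and heteroscedasticity corrections:

```python
import numpy as np

def repeat_sales_index(pairs, n_periods):
    """Estimate a price index from repeat sales.
    pairs: list of (t1, p1, t2, p2) with t1 < t2 (period indices, sale prices).
    Period 0 is the base period with index value 1.0."""
    X = np.zeros((len(pairs), n_periods))
    y = np.zeros(len(pairs))
    for i, (t1, p1, t2, p2) in enumerate(pairs):
        X[i, t2] = 1.0   # sold at t2 ...
        X[i, t1] = -1.0  # ... after being bought at t1
        y[i] = np.log(p2 / p1)
    X = X[:, 1:]  # drop the base period to fix the scale of the index
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.exp(np.concatenate(([0.0], b)))  # index levels per period

# Prices that double between periods 0 and 2, observed through repeat sales.
sales = [(0, 100.0, 1, 140.0), (1, 140.0, 2, 200.0), (0, 120.0, 2, 240.0)]
print([round(float(v), 2) for v in repeat_sales_index(sales, 3)])  # → [1.0, 1.4, 2.0]
```

Because only price relatives of the same property enter the regression, quality differences between houses drop out, which is exactly the appeal of the repeat sales approach; its weakness, noted in the row above, is that only repeatedly sold homes contribute data.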
1611.09180 | 2559100081 | Real estate appraisal, which is the process of estimating the price for real estate properties, is crucial for both buyers and sellers as the basis for negotiation and transaction. Traditionally, the repeat sales model has been widely adopted to estimate real estate price. However, it depends on the design and calculation of a complex economy-related index, which is challenging to estimate accurately. Today, real estate brokers provide easy access to detailed online information on real estate properties to their clients. We are interested in estimating the real estate price from these large amounts of easily accessed data. In particular, we analyze the prediction power of online house pictures, which is one of the key factors for online users to make a potential visiting decision. The development of robust computer vision algorithms makes the analysis of visual content possible. In this work, we employ a Recurrent Neural Network (RNN) to predict real estate price using the state-of-the-art visual features. The experimental results indicate that our model outperforms several other state-of-the-art baseline algorithms in terms of both mean absolute error (MAE) and mean absolute percentage error (MAPE). | Several studies have employed feed-forward neural networks for real estate appraisal @cite_26 @cite_9 @cite_2 @cite_29 . However, their results suggest that neural network models are unstable, yielding inconsistent results across different runs of the same package @cite_26 . The performance of neural networks is closely related to the features and data size @cite_29 . Recently, Kontrimas and Verikas @cite_42 empirically studied several different models on selected @math -dimensional features (house type, size, and construction year). Their results show that linear regression outperforms neural networks on their selected @math houses. | {
"cite_N": [
"@cite_26",
"@cite_29",
"@cite_9",
"@cite_42",
"@cite_2"
],
"mid": [
"1754495824",
"1537400860",
"2037056888",
"2070300253",
""
],
"abstract": [
"This research applies neural network (NN) technology to real estate appraisal and compares the performance of two NN models in estimating the sales price of residential properties with a traditional multiple regression model. The study is based on 288 sales of homes in Fort Collins, Colorado. Results do not support previous findings that NNs are a superior tool for appraisal analysis. Furthermore, significant problems were encountered with the NN models: inconsistent results between packages, inconsistent results between runs of the same package, and long run times. Any appraiser who plans on using this new technology would do so with caution.",
"This paper compares the predictive performance of artificial neural networks (ANN) and multiple regression analysis (MRA) for single family housing sales. Multiple comparisons are made between the two data models in which the data sample size is varied, the functional specification is varied, and the temporal prediction is varied. We conclude that ANN performs better than MRA when a moderate to large data sample size is used. For our application, this \"moderate to large data sample size\" varied from 13% to 39% of the total data sample (506 to 1506 observations out of 3906 total observations). Our results give a plausible explanation why previous papers have obtained varied results when comparing MRA and ANN predictive performance for housing values.",
"We use hedonic analysis of home transaction data from the Minneapolis–St. Paul metropolitan area to estimate the effects of proximity to open space on sales price. We allow the effects of proximity to vary with demographic and location-specific characteristics and include fixed effects to control for observed and unobserved neighborhood characteristics. We find that the value of proximity to open space is higher in neighborhoods that are dense, near the central business district, high-income, high-crime, or home to many children. Using the metropolitan area's average value may substantially overestimate or underestimate the value of open space in particular neighborhoods.",
"Mass appraisal is the systematic appraisal of groups of properties as of a given date using standardized procedures and statistical testing. Mass appraisal is commonly used to compute real estate tax. There are three traditional real estate valuation methods: the sales comparison approach, income approach, and the cost approach. Mass appraisal models are commonly based on the sales comparison approach. The ordinary least squares (OLS) linear regression is the classical method used to build models in this approach. The method is compared with computational intelligence approaches - support vector machine (SVM) regression, multilayer perceptron (MLP), and a committee of predictors in this paper. All the three predictors are used to build a weighted data-dependent committee. A self-organizing map (SOM) generating clusters of value zones is used to obtain the data-dependent aggregation weights. The experimental investigations performed using data cordially provided by the Register center of Lithuania have shown very promising results. The performance of the computational intelligence-based techniques was considerably higher than that obtained using the official real estate models of the Register center. The performance of the committee using the weights based on zones obtained from the SOM was also higher than that of the committee exploiting the real estate value zones provided by the Register center.",
""
]
} |
1611.09180 | 2559100081 | Real estate appraisal, which is the process of estimating the price for real estate properties, is crucial for both buyers and sellers as the basis for negotiation and transaction. Traditionally, the repeat sales model has been widely adopted to estimate real estate price. However, it depends on the design and calculation of a complex economy-related index, which is challenging to estimate accurately. Today, real estate brokers provide easy access to detailed online information on real estate properties to their clients. We are interested in estimating the real estate price from these large amounts of easily accessed data. In particular, we analyze the prediction power of online house pictures, which is one of the key factors for online users to make a potential visiting decision. The development of robust computer vision algorithms makes the analysis of visual content possible. In this work, we employ a Recurrent Neural Network (RNN) to predict real estate price using the state-of-the-art visual features. The experimental results indicate that our model outperforms several other state-of-the-art baseline algorithms in terms of both mean absolute error (MAE) and mean absolute percentage error (MAPE). | More recent studies in @cite_24 propose a ranking objective that takes geographic individual, peer, and zone dependencies into consideration. Their method can use various estate-related data, which helps improve their ranking results based on properties' investment values. Furthermore, the work in @cite_38 studied online users' reviews and mobile users' moving behaviors for the problem of real estate ranking. Their proposed sparsity-regularized learning model demonstrated competitive performance. | {
"cite_N": [
"@cite_24",
"@cite_38"
],
"mid": [
"2139846410",
"1988134474"
],
"abstract": [
"It is traditionally a challenge for home buyers to understand, compare and contrast the investment values of real estates. While a number of estate appraisal methods have been developed to value real property, the performances of these methods have been limited by the traditional data sources for estate appraisal. However, with the development of new ways of collecting estate-related mobile data, there is a potential to leverage geographic dependencies of estates for enhancing estate appraisal. Indeed, the geographic dependencies of the value of an estate can be from the characteristics of its own neighborhood (individual), the values of its nearby estates (peer), and the prosperity of the affiliated latent business area (zone). To this end, in this paper, we propose a geographic method, named ClusRanking, for estate appraisal by leveraging the mutual enforcement of ranking and clustering power. ClusRanking is able to exploit geographic individual, peer, and zone dependencies in a probabilistic ranking model. Specifically, we first extract the geographic utility of estates from geography data, estimate the neighborhood popularity of estates by mining taxicab trajectory data, and model the influence of latent business areas via ClusRanking. Also, we use a linear model to fuse these three influential factors and predict estate investment values. Moreover, we simultaneously consider individual, peer and zone dependencies, and derive an estate-specific ranking likelihood as the objective function. Finally, we conduct a comprehensive evaluation with real-world estate related data, and the experimental results demonstrate the effectiveness of our method.",
"Ranking residential real estates based on investment values can provide decision making support for home buyers and thus plays an important role in the estate marketplace. In this paper, we aim to develop methods for ranking estates based on investment values by mining users' opinions about estates from online user reviews and offline moving behaviors (e.g., taxi traces, smart card transactions, check-ins). While a variety of features could be extracted from these data, these features are intercorrelated and redundant. Thus, selecting good features and integrating the feature selection into the fitting of a ranking model are essential. To this end, in this paper, we first strategically mine the fine-grained discriminative features from user reviews and moving behaviors, and then propose a probabilistic sparse pairwise ranking method for estates. Specifically, we first extract the explicit features from online user reviews which express users' opinions about points of interest (POIs) near an estate. We also mine the implicit features from offline moving behaviors from multiple perspectives (e.g., direction, volume, velocity, heterogeneity, topic, popularity, etc.). Then we learn an estate ranking predictor by combining a pairwise ranking objective and a sparsity regularization in a unified probabilistic framework. And we develop an effective solution for the optimization problem. Finally, we conduct a comprehensive performance evaluation with real world estate related data, and the experimental results demonstrate the competitive performance of both features and the proposed model."
]
} |
1611.09194 | 2960980339 | In the light of regularized dynamic time warping kernels, this paper re-considers the concept of time elastic centroid for a set of time series. We derive a new algorithm based on a probabilistic interpretation of kernel alignment matrices. This algorithm expresses the averaging process in terms of a stochastic alignment automaton. It uses an iterative agglomerative heuristic method for averaging the aligned samples, while also averaging the times of occurrence of the aligned samples. By comparing classification accuracies for 45 heterogeneous time series datasets obtained by first nearest centroid/medoid classifiers, we show that: i) centroid-based approaches significantly outperform medoid-based approaches, ii) for the considered datasets, our algorithm, which combines averaging in the sample space and along the time axes, emerges as the most significantly robust model for time-elastic averaging with a promising noise reduction capability. We also demonstrate its benefit in an isolated gesture recognition experiment and its ability to significantly reduce the size of training instance sets. Finally, we highlight its denoising capability using demonstrative synthetic data: we show that it is possible to retrieve, from few noisy instances, a signal whose components are scattered in a wide spectral band. | Time series averaging in the context of (multiple) time elastic distance alignments has been mainly addressed in the scope of the Dynamic Time Warping (DTW) measure @cite_27 , @cite_21 . Although other time elastic distance measures such as the Edit Distance With Real Penalty (ERP) @cite_11 or the Time Warp Edit Distance (TWED) @cite_31 could be considered instead, without loss of generality, we remain focused throughout this paper on DTW and its kernelization. | {
"cite_N": [
"@cite_27",
"@cite_21",
"@cite_31",
"@cite_11"
],
"mid": [
"1986092967",
"54230203",
"2143325592",
"1597504361"
],
"abstract": [
"Experiments on the automatic recognition of 203 Russian words are described. The experimental vocabulary includes terms of the language, ALGOL -60 together with others. The logarithmic characteristics of acoustic signal in five bands are extracted as features. The measure of similarity between the words of standard and control sequences is calculated by the words maximizing a definite functional using dynamic programming. The average reliability of recognition for one speaker obtained for experiments using 5000 words is 0·95. The computational time for recognition is 2-4 sec.",
"",
"In a way similar to the string-to-string correction problem, we address discrete time series similarity in light of a time-series-to-time-series-correction problem for which the similarity between two time series is measured as the minimum cost sequence of edit operations needed to transform one time series into another. To define the edit operations, we use the paradigm of a graphical editing process and end up with a dynamic programming algorithm that we call time warp edit distance (TWED). TWED is slightly different in form from dynamic time warping (DTW), longest common subsequence (LCSS), or edit distance with real penalty (ERP) algorithms. In particular, it highlights a parameter that controls a kind of stiffness of the elastic measure along the time axis. We show that the similarity provided by TWED is a potentially useful metric in time series retrieval applications since it could benefit from the triangular inequality property to speed up the retrieval process while tuning the parameters of the elastic measure. In that context, a lower bound is derived to link the matching of time series into down sampled representation spaces to the matching into the original space. The empiric quality of the TWED distance is evaluated on a simple classification task. Compared to edit distance, DTW, LCSS, and ERP, TWED has proved to be quite effective on the considered experimental task.",
"Existing studies on time series are based on two categories of distance functions. The first category consists of the Lp-norms. They are metric distance functions but cannot support local time shifting. The second category consists of distance functions which are capable of handling local time shifting but are nonmetric. The first contribution of this paper is the proposal of a new distance function, which we call ERP (\"Edit distance with Real Penalty\"). Representing a marriage of L1- norm and the edit distance, ERP can support local time shifting, and is a metric. The second contribution of the paper is the development of pruning strategies for large time series databases. Given that ERP is a metric, one way to prune is to apply the triangle inequality. Another way to prune is to develop a lower bound on the ERP distance. We propose such a lower bound, which has the nice computational property that it can be efficiently indexed with a standard B+- tree. Moreover, we show that these two ways of pruning can be used simultaneously for ERP distances. Specifically, the false positives obtained from the B+-tree can be further minimized by applying the triangle inequality. Based on extensive experimentation with existing benchmarks and techniques, we show that this combination delivers superb pruning power and search time performance, and dominates all existing strategies."
]
} |
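For reference, the DTW measure at the center of the row above is computed by a simple quadratic-time dynamic program over all admissible alignments. This is a textbook sketch, not the paper's own method; the regularized DTW kernels discussed there replace the hard min with a soft aggregation over all alignment paths:

```python
import numpy as np

def dtw(x, y):
    """Classical Dynamic Time Warping distance with a squared local cost."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (x[i - 1] - y[j - 1]) ** 2
            # Extend the cheapest of the three admissible predecessor paths.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

a = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
b = np.array([0.0, 0.0, 1.0, 2.0, 1.0, 0.0])  # same shape, time-shifted
print(dtw(a, b))  # → 0.0 (the warp absorbs the shift)
```

The elasticity illustrated here (a time-shifted copy is at distance zero) is exactly what makes averaging non-trivial: samples must first be aligned before they can be averaged.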
1611.09194 | 2960980339 | In the light of regularized dynamic time warping kernels, this paper re-considers the concept of time elastic centroid for a set of time series. We derive a new algorithm based on a probabilistic interpretation of kernel alignment matrices. This algorithm expresses the averaging process in terms of a stochastic alignment automaton. It uses an iterative agglomerative heuristic method for averaging the aligned samples, while also averaging the times of occurrence of the aligned samples. By comparing classification accuracies for 45 heterogeneous time series datasets obtained by first nearest centroid/medoid classifiers, we show that: i) centroid-based approaches significantly outperform medoid-based approaches, ii) for the considered datasets, our algorithm, which combines averaging in the sample space and along the time axes, emerges as the most significantly robust model for time-elastic averaging with a promising noise reduction capability. We also demonstrate its benefit in an isolated gesture recognition experiment and its ability to significantly reduce the size of training instance sets. Finally, we highlight its denoising capability using demonstrative synthetic data: we show that it is possible to retrieve, from few noisy instances, a signal whose components are scattered in a wide spectral band. | A single alignment path is required to calculate the time elastic centroid of a pair of time series (Def. ). However, multiple path alignments need to be considered to evaluate the centroid of a larger set of time series. Multiple alignments have been widely studied in bioinformatics @cite_16 , and it has been shown that determining the optimal alignment of a set of sequences under the sum of all pairs (SP) score scheme is an NP-complete problem @cite_6 @cite_29 . 
When dynamic programming is used to search for an optimal solution, the time and space complexity of this problem is @math , where @math is the number of sequences in the set and @math is the length of the sequences @cite_1 . This latter result applies to the estimation of the time elastic centroid of a set of @math time series with respect to the DTW measure. Since the search for an optimal solution quickly becomes intractable as @math increases, sub-optimal heuristic solutions have subsequently been proposed, most of them falling into one of the following three categories. | {
"cite_N": [
"@cite_29",
"@cite_16",
"@cite_1",
"@cite_6"
],
"mid": [
"2053663417",
"2908362408",
"1974326986",
"2002638840"
],
"abstract": [
"It is shown that the multiple alignment problem with SP-score is NP-hard for each scoring matrix in a broad class M that includes most scoring matrices actually used in biological applications. The problem remains NP-hard even if sequences can only be shifted relative to each other and no internal gaps are allowed. It is also shown that there is a scoring matrix M0 such that the multiple alignment problem for M0 is MAX-SNP-hard, regardless of whether or not internal gaps are allowed.",
"",
"The study and comparison of sequences of characters from a finite alphabet is relevant to various areas of science, notably molecular biology. The measurement of sequence similarity involves the consideration of the different possible sequence alignments in order to find an optimal one for which the “distance” between sequences is minimum. By associating a path in a lattice to each alignment, a geometric insight can be brought into the problem of finding an optimal alignment. This problem can then be solved by applying a dynamic programming algorithm. However, the computational effort grows rapidly with the number N of sequences to be compared @math , where l is the mean length of the sequences to be compared).It is proved here that knowledge of the measure of an arbitrarily chosen alignment can be used in combination with information from the pairwise alignments to considerably restrict the size of the region of the lattice in consideration. This reduction implies fewer computations and less memory ...",
"ABSTRACT We study the computational complexity of two popular problems in multiple sequence alignment: multiple alignment with SP-score and multiple tree alignment. It is shown that the first problem is NP-complete and the second is MAX SNP-hard. The complexity of tree alignment with a given phylogeny is also considered."
]
} |
1611.09030 | 2557818466 | Computing the frustration index of a signed graph is a key to solving problems in different fields of research including social networks, physics, material science, and biology. In social networks the frustration index determines network distance from a state of structural balance. Although the definition of frustration index goes back to 1960, an exact algorithmic computation method has not yet been proposed. The main reason seems to be the complexity of computing the frustration index which is closely related to well-known NP-hard problems such as MAXCUT. New quadratic and linear binary programming models are developed to compute the frustration index exactly. We introduce several speed-up techniques involving prioritised branching, local search heuristics, and valid inequalities inferred from graph structural properties. The computational improvements achieved by implementing the speed-up techniques allow us to calculate the exact values of the frustration index by running the optimisation models in Gurobi solver. The speed-up techniques make our models capable of processing graphs with thousands of nodes and edges in seconds on inexpensive hardware. The solve time and solution quality comparison against the literature shows the superiority of our models in both random and real signed networks. | For every fixed @math , there is a polynomial-time approximation scheme for the correlation clustering problem @cite_12 . For arbitrary @math , exact @cite_46 @cite_45 and heuristic methods @cite_13 @cite_44 @cite_14 are developed based on a mixed integer programming model @cite_31 . Denoting the order of a graph by @math , exact algorithms fail for @math @cite_46 and @math @cite_5 , while greedy algorithms @cite_13 and local search heuristics @cite_44 are capable of providing good solutions for @math and @math respectively. | {
"cite_N": [
"@cite_31",
"@cite_14",
"@cite_44",
"@cite_45",
"@cite_5",
"@cite_46",
"@cite_13",
"@cite_12"
],
"mid": [
"1985875030",
"",
"2044884583",
"",
"2043532570",
"2126040678",
"132148631",
"2132641109"
],
"abstract": [
"We consider the following general correlation-clustering problem [N. Bansal, A. Blum, S. Chawla, Correlation clustering, in: Proc. 43rd Annu. IEEE Symp. on Foundations of Computer Science, Vancouver, Canada, November 2002, pp. 238-250]: given a graph with real nonnegative edge weights and a 〈+〉 〈-〉 edge labelling, partition the vertices into clusters to minimize the total weight of cut 〈+〉 edges and uncut 〈-〉 edges. Thus, 〈+〉 edges with large weights (representing strong correlations between endpoints) encourage those endpoints to belong to a common cluster while 〈-〉 edges with large weights encourage the endpoints to belong to different clusters. In contrast to most clustering problems, correlation clustering specifies neither the desired number of clusters nor a distance threshold for clustering; both of these parameters are effectively chosen to be best possible by the problem definition. Correlation clustering was introduced by [Correlation clustering, in: Proc. 43rd Annu. IEEE Symp. on Foundations of Computer Science, Vancouver, Canada, November 2002, pp. 238-250], motivated by both document clustering and agnostic learning. They proved NP-hardness and gave constant-factor approximation algorithms for the special case in which the graph is complete (full information) and every edge has the same weight. We give an O(log n)-approximation algorithm for the general case based on a linear-programming rounding and the \"region-growing\" technique. We also prove that this linear program has a gap of Ω(log n), and therefore our approximation is tight under this approach. We also give an O(r^3)-approximation algorithm for K_{r,r}-minor-free graphs. On the other hand, we show that the problem is equivalent to minimum multicut, and therefore APX-hard and difficult to approximate better than Θ(log n).",
"",
"Evaluating balance in a social network has been a challenge for social network researchers. The degree of balance in a social group can be used as a tool to study whether and how this group evolves to a possible balanced state. In particular, the solution of the Correlation Clustering (CC) problem can be used as a criterion to measure the amount of balance in signed social networks, where positive (friendly) and negative (antagonistic) interactions take place. In this work, we provide an efficient solution of the CC problem by the use of the ILS metaheuristic. The proposed algorithm outperforms other solution strategies from literature in execution time, with the same solution quality.",
"",
"Abstract In this work, we study graph clustering problems associated with structural balance. One of these problems is known in computer science literature as the correlation-clustering (CC) problem and another (RCC) can be viewed as its relaxed version. The solution of CC and RCC problems has been previously used in the literature as tools for the evaluation of structural balance in a social network. Our aim is to solve these problems to optimality. We describe integer linear programming formulations for these problems which includes the first mathematical formulation for the RCC problem. We also discuss alternative models for the relaxed structural balance and the solution of clustering problems associated with these new models. Numerical experiments are carried out with each formulation on a set of benchmark instances available in the literature.",
"",
"One challenge for social network researchers is to evaluate balance in a social network. The degree of balance in a social group can be used as a tool to study whether and how this group evolves to a possible balanced state. The solution of clustering problems defined on signed graphs can be used as a criterion to measure the degree of balance in social networks. By considering the original definition of the structural balance, the optimal solution of the Correlation Clustering (CC) Problem arises as one possible measure. In this work, we contribute to the efficient solution of the CC problem by developing sequential and parallel GRASP metaheuristics. Then, by using our GRASP algorithms, we solve the problem of measuring the structural balance of large social networks.",
"We continue the investigation of problems concerning correlation clustering or clustering with qualitative information, which is a clustering formulation that has been studied recently [5, 7, 8, 3]. The basic setup here is that we are given as input a complete graph on n nodes (which correspond to nodes to be clustered) whose edges are labeled + (for similar pairs of items) and - (for dissimilar pairs of items). Thus we have only as input qualitative information on similarity and no quantitative distance measure between items. The quality of a clustering is measured in terms of its number of agreements, which is simply the number of edges it correctly classifies, that is the sum of number of - edges whose endpoints it places in different clusters plus the number of + edges both of whose endpoints it places within the same cluster.In this paper, we study the problem of finding clusterings that maximize the number of agreements, and the complementary minimization version where we seek clusterings that minimize the number of disagreements. We focus on the situation when the number of clusters is stipulated to be a small constant k. Our main result is that for every k, there is a polynomial time approximation scheme for both maximizing agreements and minimizing disagreements. (The problems are NP-hard for every k ≥ 2.) The main technical work is for the minimization version, as the PTAS for maximizing agreements follows along the lines of the property tester for Max k-CUT from [13].In contrast, when the number of clusters is not specified, the problem of minimizing disagreements was shown to be APX-hard [7], even though the maximization version admits a PTAS."
]
} |
1611.09026 | 2953007522 | In this work, we explore the problem of generating fantastic special-effects for the typography. It is quite challenging due to the model diversities to illustrate varied text effects for different characters. To address this issue, our key idea is to exploit the analytics on the high regularity of the spatial distribution for text effects to guide the synthesis process. Specifically, we characterize the stylized patches by their normalized positions and the optimal scales to depict their style elements. Our method first estimates these two features and derives their correlation statistically. They are then converted into soft constraints for texture transfer to accomplish adaptive multi-scale texture synthesis and to make style element distribution uniform. It allows our algorithm to produce artistic typography that fits for both local texture patterns and the global spatial distribution in the example. Experimental results demonstrate the superiority of our method for various text effects over conventional style transfer methods. In addition, we validate the effectiveness of our algorithm with extensive artistic typography library generation. | Pioneering color transfer methods @cite_33 @cite_3 transfer color between images by matching their global color distributions. Subsequently, local color transfer is achieved based on segmentation @cite_25 @cite_18 or user interaction @cite_14 , and it is further improved using fine-grained patch @cite_7 or pixel @cite_37 @cite_23 correspondences. Recently, color transfer @cite_9 and colorization @cite_8 @cite_12 using deep neural networks have drawn people's attention.
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_37",
"@cite_33",
"@cite_7",
"@cite_8",
"@cite_9",
"@cite_3",
"@cite_23",
"@cite_25",
"@cite_12"
],
"mid": [
"2125683338",
"2240798854",
"2106395586",
"2129112648",
"2019969451",
"2950064337",
"1920280450",
"2006957355",
"",
"2160530465",
""
],
"abstract": [
"We propose an automatic approach to soft color segmentation, which produces soft color segments with an appropriate amount of overlapping and transparency essential to synthesizing natural images for a wide range of image-based applications. Although many state-of-the-art and complex techniques are excellent at partitioning an input image to facilitate deriving a semantic description of the scene, to achieve seamless image synthesis, we advocate a segmentation approach designed to maintain spatial and color coherence among soft segments while preserving discontinuities by assigning to each pixel a set of soft labels corresponding to their respective color distributions. We optimize a global objective function, which simultaneously exploits the reliability given by global color statistics and flexibility of local image compositing, leading to an image model where the global color statistics of an image is represented by a Gaussian mixture model (GMM), whereas the color of a pixel is explained by a local color mixture model where the weights are defined by the soft labels to the elements of the converged GMM. Transparency is naturally introduced in our probabilistic framework, which infers an optimal mixture of colors at an image pixel. To adequately consider global and local information in the same framework, an alternating optimization scheme is proposed to iteratively solve for the global and local model parameters. Our method is fully automatic and is shown to converge to a good optimal solution. We perform extensive evaluation and comparison and demonstrate that our method achieves good image synthesis results for image-based applications such as image matting, color transfer, image deblurring, and image colorization.",
"We introduce a general technique for \"colorizing\" greyscale images by transferring color between a source, color image and a destination, greyscale image. Although the general problem of adding chromatic values to a greyscale image has no exact, objective solution, the current approach attempts to provide a method to help minimize the amount of human labor required for this task. Rather than choosing RGB colors from a palette to color individual components, we transfer the entire color \"mood\" of the source to the target image by matching luminance and texture information between the images. We choose to transfer only chromatic information and retain the original luminance values of the target image. Further, the procedure is enhanced by allowing the user to match areas of the two images with rectangular swatches. We show that this simple technique can be successfully applied to a variety of images and video, provided that texture and luminance are sufficiently distinct. The images generated demonstrate the potential and utility of our technique in a diverse set of application domains.",
"Headshot portraits are a popular subject in photography but to achieve a compelling visual style requires advanced skills that a casual photographer will not have. Further, algorithms that automate or assist the stylization of generic photographs do not perform well on headshots due to the feature-specific, local retouching that a professional photographer typically applies to generate such portraits. We introduce a technique to transfer the style of an example headshot photo onto a new one. This can allow one to easily reproduce the look of renowned artists. At the core of our approach is a new multiscale technique to robustly transfer the local statistics of an example portrait onto a new one. This technique matches properties such as the local contrast and the overall lighting direction while being tolerant to the unavoidable differences between the faces of two different people. Additionally, because artists sometimes produce entire headshot collections in a common style, we show how to automatically find a good example to use as a reference for a given portrait, enabling style transfer without the user having to search for a suitable example for each input. We demonstrate our approach on data taken in a controlled environment as well as on a large set of photos downloaded from the Internet. We show that we can successfully handle styles by a variety of different artists.",
"We use a simple statistical analysis to impose one image's color characteristics on another. We can achieve color correction by choosing an appropriate source image and apply its characteristic to another image.",
"We introduce \"time hallucination\": synthesizing a plausible image at a different time of day from an input image. This challenging task often requires dramatically altering the color appearance of the picture. In this paper, we introduce the first data-driven approach to automatically creating a plausible-looking photo that appears as though it were taken at a different time of day. The time of day is specified by a semantic time label, such as \"night\". Our approach relies on a database of time-lapse videos of various scenes. These videos provide rich information about the variations in color appearance of a scene throughout the day. Our method transfers the color appearance from videos with a similar scene as the input photo. We propose a locally affine model learned from the video for the transfer, allowing our model to synthesize new color data while retaining image details. We show that this model can hallucinate a wide range of different times of day. The model generates a large sparse linear system, which can be solved by off-the-shelf solvers. We validate our methods by synthesizing transforming photos of various outdoor scenes to four times of interest: daytime, the golden hour, the blue hour, and nighttime.",
"We develop a fully automatic image colorization system. Our approach leverages recent advances in deep networks, exploiting both low-level and semantic representations. As many scene elements naturally appear according to multimodal color distributions, we train our model to predict per-pixel color histograms. This intermediate output can be used to automatically generate a color image, or further manipulated prior to image formation. On both fully and partially automatic colorization tasks, we outperform existing methods. We also explore colorization as a vehicle for self-supervised visual representation learning.",
"Photo retouching enables photographers to invoke dramatic visual impressions by artistically enhancing their photos through stylistic color and tone adjustments. However, it is also a time-consuming and challenging task that requires advanced skills beyond the abilities of casual photographers. Using an automated algorithm is an appealing alternative to manual work, but such an algorithm faces many hurdles. Many photographic styles rely on subtle adjustments that depend on the image content and even its semantics. Further, these adjustments are often spatially varying. Existing automatic algorithms are still limited and cover only a subset of these challenges. Recently, deep learning has shown unique abilities to address hard problems. This motivated us to explore the use of deep neural networks (DNNs) in the context of photo editing. In this article, we formulate automatic photo adjustment in a manner suitable for this approach. We also introduce an image descriptor accounting for the local semantics of an image. Our experiments demonstrate that training DNNs using these descriptors successfully capture sophisticated photographic styles. In particular and unlike previous techniques, it can model local adjustments that depend on image semantics. We show that this yields results that are qualitatively and quantitatively better than previous work.",
"This article proposes an original method for grading the colours between different images or shots. The first stage of the method is to find a one-to-one colour mapping that transfers the palette of an example target picture to the original picture. This is performed using an original and parameter free algorithm that is able to transform any N-dimensional probability density function into another one. The proposed algorithm is iterative, non-linear and has a low computational cost. Applying the colour mapping on the original picture allows reproducing the same 'feel' as the target picture, but can also increase the graininess of the original picture, especially if the colour dynamic of the two pictures is very different. The second stage of the method is to reduce this grain artefact through an efficient post-processing algorithm that intends to preserve the gradient field of the original picture.",
"",
"We address the problem of regional color transfer between two natural images by probabilistic segmentation. We use a new expectation-maximization (EM) scheme to impose both spatial and color smoothness to infer natural connectivity among pixels. Unlike previous work, our method takes local color information into consideration, and segment image with soft region boundaries for seamless color transfer and compositing. Our modified EM method has two advantages in color manipulation: first, subject to different levels of color smoothness in image space, our algorithm produces an optimal number of regions upon convergence, where the color statistics in each region can be adequately characterized by a component of a Gaussian mixture model (GMM). Second, we allow a pixel to fall in several regions according to our estimated probability distribution in the EM step, resulting in a transparency-like ratio for compositing different regions seamlessly. Hence, natural color transition across regions can be achieved, where the necessary intra-region and inter-region smoothness are enforced without losing original details. We demonstrate results on a variety of applications including image deblurring, enhanced color transfer, and colorizing gray scale images. Comparisons with previous methods are also presented.",
""
]
} |
1611.09026 | 2953007522 | In this work, we explore the problem of generating fantastic special-effects for the typography. It is quite challenging due to the model diversities to illustrate varied text effects for different characters. To address this issue, our key idea is to exploit the analytics on the high regularity of the spatial distribution for text effects to guide the synthesis process. Specifically, we characterize the stylized patches by their normalized positions and the optimal scales to depict their style elements. Our method first estimates these two features and derives their correlation statistically. They are then converted into soft constraints for texture transfer to accomplish adaptive multi-scale texture synthesis and to make style element distribution uniform. It allows our algorithm to produce artistic typography that fits for both local texture patterns and the global spatial distribution in the example. Experimental results demonstrate the superiority of our method for various text effects over conventional style transfer methods. In addition, we validate the effectiveness of our algorithm with extensive artistic typography library generation. | Efros and Leung @cite_19 proposed a pioneering pixel-by-pixel synthesis approach based on sampling similar patches. The subsequent works improve it in quality and speed by synthesizing patches rather than pixels. To handle the overlapped regions of neighboring patches for seamlessness, Liang et al. @cite_29 proposed to blend patches, and Efros and Freeman @cite_28 used dynamic programming to find an optimal separatrix in overlapped regions, which is further improved via graph cut @cite_30 . Unlike previous methods that synthesize textures in a local manner, recent techniques synthesize globally using objective functions. A coherence-based function @cite_24 is proposed to synthesize textures in an iterative coarse-to-fine fashion.
This method performs patch matching and voting operations alternately and achieves good local structures. It is then extended to adapt to non-stationary textures through patch geometric and photometric transformations @cite_31 @cite_5 . | {
"cite_N": [
"@cite_30",
"@cite_28",
"@cite_29",
"@cite_24",
"@cite_19",
"@cite_5",
"@cite_31"
],
"mid": [
"2077786999",
"1999360130",
"2083518538",
"2164490837",
"",
"1975049209",
"1763426478"
],
"abstract": [
"In this paper we introduce a new algorithm for image and video texture synthesis. In our approach, patch regions from a sample image or video are transformed and copied to the output and then stitched together along optimal seams to generate a new (and typically larger) output. In contrast to other techniques, the size of the patch is not chosen a-priori, but instead a graph cut technique is used to determine the optimal patch region for any given offset between the input and output texture. Unlike dynamic programming, our graph cut technique for seam optimization is applicable in any dimension. We specifically explore it in 2D and 3D to perform video texture synthesis in addition to regular image synthesis. We present approximative offset search techniques that work well in conjunction with the presented patch size optimization. We show results for synthesizing regular, random, and natural images and videos. We also demonstrate how this method can be used to interactively merge different images to generate new scenes.",
"We present a simple image-based method of generating novel visual appearance in which a new image is synthesized by stitching together small patches of existing images. We call this process image quilting. First, we use quilting as a fast and very simple texture synthesis algorithm which produces surprisingly good results for a wide range of textures. Second, we extend the algorithm to perform texture transfer — rendering an object with a texture taken from a different object. More generally, we demonstrate how an image can be re-rendered in the style of a different image. The method works directly on the images and does not require 3D information.",
"We present an algorithm for synthesizing textures from an input sample. This patch-based sampling algorithm is fast and it makes high-quality texture synthesis a real-time process. For generating textures of the same size and comparable quality, patch-based sampling is orders of magnitude faster than existing algorithms. The patch-based sampling algorithm works well for a wide variety of textures ranging from regular to stochastic. By sampling patches according to a nonparametric estimation of the local conditional MRF density function, we avoid mismatching features across patch boundaries. We also experimented with documented cases for which pixel-based nonparametric sampling algorithms cease to be effective but our algorithm continues to work well.",
"This paper presents a new framework for the completion of missing information based on local structures. It poses the task of completion as a global optimization problem with a well-defined objective function and derives a new algorithm to optimize it. Missing values are constrained to form coherent structures with respect to reference examples. We apply this method to space-time completion of large space-time \"holes\" in video sequences of complex dynamic scenes. The missing portions are filled in by sampling spatio-temporal patches from the available parts of the video, while enforcing global spatio-temporal consistency between all patches in and around the hole. The consistent completion of static scene parts simultaneously with dynamic behaviors leads to realistic looking video sequences and images. Space-time video completion is useful for a variety of tasks, including, but not limited to: 1) sophisticated video removal (of undesired static or dynamic objects) by completing the appropriate static or dynamic background information. 2) Correction of missing corrupted video frames in old movies. 3) Modifying a visual story by replacing unwanted elements. 4) Creation of video textures by extending smaller ones. 5) Creation of complete field-of-view stabilized video. 6) As images are one-frame videos, we apply the method to this special case as well",
"",
"Current methods for combining two different images produce visible artifacts when the sources have very different textures and structures. We present a new method for synthesizing a transition region between two source images, such that inconsistent color, texture, and structural properties all change gradually from one source to the other. We call this process image melding. Our method builds upon a patch-based optimization foundation with three key generalizations: First, we enrich the patch search space with additional geometric and photometric transformations. Second, we integrate image gradients into the patch representation and replace the usual color averaging with a screened Poisson equation solver. And third, we propose a new energy based on mixed L2 L0 norms for colors and gradients that produces a gradual transition between sources without sacrificing texture sharpness. Together, all three generalizations enable patch-based solutions to a broad class of image melding problems involving inconsistent sources: object cloning, stitching challenging panoramas, hole filling from multiple photos, and image harmonization. In several cases, our unified method outperforms previous state-of-the-art methods specifically designed for those applications.",
"PatchMatch is a fast algorithm for computing dense approximate nearest neighbor correspondences between patches of two image regions [1]. This paper generalizes PatchMatch in three ways: (1) to find k nearest neighbors, as opposed to just one, (2) to search across scales and rotations, in addition to just translations, and (3) to match using arbitrary descriptors and distances, not just sum-of-squared-differences on patch colors. In addition, we offer new search and parallelization strategies that further accelerate the method, and we show performance improvements over standard kd-tree techniques across a variety of inputs. In contrast to many previous matching algorithms, which for efficiency reasons have restricted matching to sparse interest points, or spatially proximate matches, our algorithm can efficiently find global, dense matches, even while matching across all scales and rotations. This is especially useful for computer vision applications, where our algorithm can be used as an efficient general-purpose component. We explore a variety of vision applications: denoising, finding forgeries by detecting cloned regions, symmetry detection, and object detection."
]
} |
1611.09026 | 2953007522 | In this work, we explore the problem of generating fantastic special-effects for the typography. It is quite challenging due to the model diversities to illustrate varied text effects for different characters. To address this issue, our key idea is to exploit the analytics on the high regularity of the spatial distribution for text effects to guide the synthesis process. Specifically, we characterize the stylized patches by their normalized positions and the optimal scales to depict their style elements. Our method first estimates these two features and derives their correlation statistically. They are then converted into soft constraints for texture transfer to accomplish adaptive multi-scale texture synthesis and to make style element distribution uniform. It allows our algorithm to produce artistic typography that fits for both local texture patterns and the global spatial distribution in the example. Experimental results demonstrate the superiority of our method for various text effects over conventional style transfer methods. In addition, we validate the effectiveness of our algorithm with extensive artistic typography library generation. | The idea of modelling textures using statistical measurements has led to the development of textons and their variants @cite_1 @cite_13 . Nowadays, deep-based texture synthesis @cite_0 starts trending due to the great descriptive ability of deep neural networks. Gatys et al. proposed to use Gram-matrix in the Convolutional Neural Networks (CNNs) feature space to represent textures @cite_0 and adapt it to style transfer by incorporating content similarities @cite_22 . This work presented the remarkable generic painting transfer technique and attracted many follow-ups in loss function improvement @cite_32 @cite_11 and algorithm acceleration @cite_16 @cite_10 .
Recently, methods that replace the Gram-matrix with an MRF regularizer have been proposed for photographic synthesis @cite_15 and semantic texture transfer @cite_4 . Meanwhile, Generative Adversarial Networks (GANs) @cite_26 provide another idea for texture generation by using discriminator and generator networks, which iteratively improve the model by playing a minimax game. Its extension, the conditional GANs @cite_27 , fulfils the challenging task of generating images from abstract semantic labels. Li and Wand @cite_6 further showed that their Markovian GANs have certain advantages over the Gram-matrix-based methods @cite_22 @cite_10 in coherent texture preservation. | {
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_22",
"@cite_10",
"@cite_1",
"@cite_32",
"@cite_6",
"@cite_0",
"@cite_27",
"@cite_15",
"@cite_16",
"@cite_13",
"@cite_11"
],
"mid": [
"",
"2302243225",
"2475287302",
"2952226636",
"2020459435",
"2244861693",
"2951745349",
"2161208721",
"2125389028",
"",
"",
"2127006916",
"2461455396"
],
"abstract": [
"",
"Convolutional neural networks (CNNs) have proven highly effective at image synthesis and style transfer. For most users, however, using them as tools can be a challenging task due to their unpredictable behavior that goes against common intuitions. This paper introduces a novel concept to augment such generative architectures with semantic annotations, either by manually authoring pixel labels or using existing solutions for semantic segmentation. The result is a content-aware generative algorithm that offers meaningful control over the outcome. Thus, we increase the quality of images generated by avoiding common glitches, make the results look significantly more plausible, and extend the functional range of these algorithms---whether for portraits or landscapes, etc. Applications include semantic style transfer and turning doodles with few colors into masterful paintings!",
"Rendering the semantic content of an image in different styles is a difficult image processing task. Arguably, a major limiting factor for previous approaches has been the lack of image representations that explicitly represent semantic information and, thus, allow to separate image content from style. Here we use image representations derived from Convolutional Neural Networks optimised for object recognition, which make high level image information explicit. We introduce A Neural Algorithm of Artistic Style that can separate and recombine the image content and style of natural images. The algorithm allows us to produce new images of high perceptual quality that combine the content of an arbitrary photograph with the appearance of numerous wellknown artworks. Our results provide new insights into the deep image representations learned by Convolutional Neural Networks and demonstrate their potential for high level image synthesis and manipulation.",
"recently demonstrated that deep networks can generate beautiful textures and stylized images from a single texture example. However, their methods requires a slow and memory-consuming optimization process. We propose here an alternative approach that moves the computational burden to a learning stage. Given a single example of a texture, our approach trains compact feed-forward convolutional networks to generate multiple samples of the same texture of arbitrary size and to transfer artistic style from a given image to any other image. The resulting networks are remarkably light-weight and can generate textures of quality comparable to Gatys et al., but hundreds of times faster. More generally, our approach highlights the power and flexibility of generative feed-forward models trained with complex and expressive loss functions.",
"Recent research in texture discrimination has revealed the existence of a separate “preattentive visual system” that cannot process complex forms, yet can, almost instantaneously, without effort or scrutiny, detect differences in a few local conspicuous features, regardless of where they occur. These features, called “textons”, are elongated blobs (e.g., rectangles, ellipses, or line segments) with specific properties, including color, angular orientation, width, length, binocular and movement disparity, and flicker rate. The ends-of-lines (terminators) and crossings of line segments are also textons. Only differences in the textons or in their density (or number) can be preattentively detected while the positional relationship between neighboring textons passes unnoticed. This kind of positional information is the essence of form perception, and can be extracted only by a time-consuming and spatially restricted process that we call “focal attention”. The aperture of focal attention can be very narrow, even restricted to a minute portion of the fovea, and shifting its locus requires about 50 ms. Thus preattentive vision serves as an “early warning system” by pointing out those loci of texton differences that should be attended to. According to this theory, at any given instant the visual information intake is relatively modest.",
"A number of recent approaches have used deep convolutional neural networks (CNNs) to build texture representations. Nevertheless, it is still unclear how these models represent texture and invariances to categorical variations. This work conducts a systematic evaluation of recent CNN-based texture descriptors for recognition and attempts to understand the nature of invariances captured by these representations. First we show that the recently proposed bilinear CNN model [25] is an excellent general-purpose texture descriptor and compares favorably to other CNN-based descriptors on various texture and scene recognition benchmarks. The model is translationally invariant and obtains better accuracy on the ImageNet dataset without requiring spatial jittering of data compared to corresponding models trained with spatial jittering. Based on recent work [13, 28] we propose a technique to visualize pre-images, providing a means for understanding categorical properties that are captured by these representations. Finally, we show preliminary results on how a unified parametric model of texture analysis and synthesis can be used for attribute-based image manipulation, e.g. to make an image more swirly, honeycombed, or knitted. The source code and additional visualizations are available at this http URL",
"This paper proposes Markovian Generative Adversarial Networks (MGANs), a method for training generative neural networks for efficient texture synthesis. While deep neural network approaches have recently demonstrated remarkable results in terms of synthesis quality, they still come at considerable computational costs (minutes of run-time for low-res images). Our paper addresses this efficiency issue. Instead of a numerical deconvolution in previous work, we precompute a feed-forward, strided convolutional network that captures the feature statistics of Markovian patches and is able to directly generate outputs of arbitrary dimensions. Such network can directly decode brown noise to realistic texture, or photos to artistic paintings. With adversarial training, we obtain quality comparable to recent neural texture synthesis methods. As no optimization is required any longer at generation time, our run-time performance (0.25M pixel images at 25Hz) surpasses previous neural texture synthesizers by a significant margin (at least 500 times faster). We apply this idea to texture synthesis, style transfer, and video stylization.",
"Here we introduce a new model of natural textures based on the feature spaces of convolutional neural networks optimised for object recognition. Samples from the model are of high perceptual quality demonstrating the generative power of neural networks trained in a purely discriminative fashion. Within the model, textures are represented by the correlations between feature maps in several layers of the network. We show that across layers the texture representations increasingly capture the statistical properties of natural images while making object information more and more explicit. The model provides a new tool to generate stimuli for neuroscience and might offer insights into the deep representations learned by convolutional neural networks.",
"Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels.",
"",
"",
"We present a universal statistical model for texture images in the context of an overcomplete complex wavelet transform. The model is parameterized by a set of statistics computed on pairs of coefficients corresponding to basis functions at adjacent spatial locations, orientations, and scales. We develop an efficient algorithm for synthesizing random images subject to these constraints, by iteratively projecting onto the set of images satisfying each constraint, and we use this to test the perceptual validity of the model. In particular, we demonstrate the necessity of subgroups of the parameter set by showing examples of texture synthesis that fail when those parameters are removed from the set. We also demonstrate the power of our model by successfully synthesizing examples drawn from a diverse collection of artificial and natural textures.",
"Head portraits are popular in traditional painting. Automating portrait painting is challenging as the human visual system is sensitive to the slightest irregularities in human faces. Applying generic painting techniques often deforms facial structures. On the other hand portrait painting techniques are mainly designed for the graphite style and or are based on image analogies; an example painting as well as its original unpainted version are required. This limits their domain of applicability. We present a new technique for transferring the painting from a head portrait onto another. Unlike previous work our technique only requires the example painting and is not restricted to a specific style. We impose novel spatial constraints by locally transferring the color distributions of the example painting. This better captures the painting texture and maintains the integrity of facial structures. We generate a solution through Convolutional Neural Networks and we present an extension to video. Here motion is exploited in a way to reduce temporal inconsistencies and the shower-door effect. Our approach transfers the painting style while maintaining the input photograph identity. In addition it significantly reduces facial deformations over state of the art."
]
} |
1611.09083 | 2559451495 | With the growth of user-generated content, we observe the constant rise of the number of companies, such as search engines, content aggregators, etc., that operate with tremendous amounts of web content not being the services hosting it. Thus, aiming to locate the most important content and promote it to the users, they face the need of estimating the current and predicting the future content popularity. In this paper, we approach the problem of video popularity prediction not from the side of a video hosting service, as done in all previous studies, but from the side of an operating company, which provides a popular video search service that aggregates content from different video hosting websites. We investigate video popularity prediction based on features from three primary sources available for a typical operating company: first, the content hosting provider may deliver its data via its API, second, the operating company makes use of its own search and browsing logs, third, the company crawls information about embeds of a video and links to a video page from publicly available resources on the Web. We show that video popularity prediction based on the embed and link data coupled with the internal search and browsing data significantly improves video popularity prediction based only on the data provided by the video hosting and can even adequately replace the API data in the cases when it is partly or completely unavailable. | Video hosting content and its popularity have been widely investigated in recent years. In one of the most cited studies on the topic @cite_30 , researchers focused on analyzing how video view counts decay after their peak. Both @cite_19 and @cite_5 examined how quickly a video can become popular. They found that the most popular and viral videos receive the major part of their views in the first weeks of their existence. 
Another study @cite_18 also found that, in general, viral videos are mostly viewed in the first week after their upload. All the mentioned studies confirm the importance of studying and predicting video popularity in the first days of its existence. | {
"cite_N": [
"@cite_30",
"@cite_19",
"@cite_5",
"@cite_18"
],
"mid": [
"2042034885",
"2070366435",
"1987986834",
"2170826378"
],
"abstract": [
"We study the relaxation response of a social system after endogenous and exogenous bursts of activity using the time series of daily views for nearly 5 million videos on YouTube. We find that most activity can be described accurately as a Poisson process. However, we also find hundreds of thousands of examples in which a burst of activity is followed by an ubiquitous power-law relaxation governing the timing of views. We find that these relaxation exponents cluster into three distinct classes and allow for the classification of collective human dynamics. This is consistent with an epidemic model on a social network containing two ingredients: a power-law distribution of waiting times between cause and action and an epidemic cascade of actions becoming the cause of future actions. This model is a conceptual extension of the fluctuation-dissipation theorem to social systems [Ruelle, D (2004) Phys Today 57:48–53] and [Roehner BM, et al, (2004) Int J Mod Phys C 15:809–834], and provides a unique framework for the investigation of timing in complex systems.",
"We present a method for accurately predicting the long time popularity of online content from early measurements of user's access. Using two content sharing portals, Youtube and Digg, we show that by modeling the accrual of views and votes on content offered by these services we can predict the long-term dynamics of individual submissions from initial data. In the case of Digg, measuring access to given stories during the first two hours allows us to forecast their popularity 30 days ahead with remarkable accuracy, while downloads of Youtube videos need to be followed for 10 days to attain the same performance. The differing time scales of the predictions are shown to be due to differences in how content is consumed on the two portals: Digg stories quickly become outdated, while Youtube videos are still found long after they are initially submitted to the portal. We show that predictions are more accurate for submissions for which attention decays quickly, whereas predictions for evergreen content will be prone to larger errors.",
"Understanding content popularity growth is of great importance to Internet service providers, content creators and online marketers. In this work, we characterize the growth patterns of video popularity on the currently most popular video sharing application, namely YouTube. Using newly provided data by the application, we analyze how the popularity of individual videos evolves since the video's upload time. Moreover, addressing a key aspect that has been mostly overlooked by previous work, we characterize the types of the referrers that most often attracted users to each video, aiming at shedding some light into the mechanisms (e.g., searching or external linking) that often drive users towards a video, and thus contribute to popularity growth. Our analyses are performed separately for three video datasets, namely, videos that appear in the YouTube top lists, videos removed from the system due to copyright violation, and videos selected according to random queries submitted to YouTube's search engine. Our results show that popularity growth patterns depend on the video dataset. In particular, copyright protected videos tend to get most of their views much earlier in their lifetimes, often exhibiting a popularity growth characterized by a viral epidemic-like propagation process. In contrast, videos in the top lists tend to experience sudden significant bursts of popularity. We also show that not only search but also other YouTube internal mechanisms play important roles to attract users to videos in all three datasets.",
"The sharing and re-sharing of videos on social sites, blogs e-mail, and other means has given rise to the phenomenon of viral videos--videos that become popular through internet sharing. In this paper we seek to better understand viral videos on YouTube by analyzing sharing and its relationship to video popularity using millions of YouTube videos. The socialness of a video is quantified by classifying the referrer sources for video views as social (e.g. an emailed link, Facebook referral) or non-social (e.g. a link from related videos). We find that viewership patterns of highly social videos are very different from less social videos. For example, the highly social videos rise to, and fall from, their peak popularity more quickly than less social videos. We also find that not all highly social videos become popular, and not all popular videos are highly social. By using our insights on viral videos we are able develop a method for ranking blogs and websites on their ability to spread viral videos."
]
} |
1611.09083 | 2559451495 | With the growth of user-generated content, we observe the constant rise of the number of companies, such as search engines, content aggregators, etc., that operate with tremendous amounts of web content not being the services hosting it. Thus, aiming to locate the most important content and promote it to the users, they face the need of estimating the current and predicting the future content popularity. In this paper, we approach the problem of video popularity prediction not from the side of a video hosting service, as done in all previous studies, but from the side of an operating company, which provides a popular video search service that aggregates content from different video hosting websites. We investigate video popularity prediction based on features from three primary sources available for a typical operating company: first, the content hosting provider may deliver its data via its API, second, the operating company makes use of its own search and browsing logs, third, the company crawls information about embeds of a video and links to a video page from publicly available resources on the Web. We show that video popularity prediction based on the embed and link data coupled with the internal search and browsing data significantly improves video popularity prediction based only on the data provided by the video hosting and can even adequately replace the API data in the cases when it is partly or completely unavailable. | The authors of @cite_19 studied YouTube content popularity and established a linear dependence between the logarithmic view counts measured at the 10-th day and at the 30-th day after the day of the video upload. The authors of @cite_21 used the same data, but proposed to predict the future popularity by using a model of content propagation through an implicit graph induced by patterns of temporal evolution of video popularity. Prediction of the popularity peak day of a video was studied in @cite_27 . 
None of the described approaches is applicable in our case because, in order to predict future popularity, they exploit currently or previously observed popularity, which is not always available under our problem statement. | {
"cite_N": [
"@cite_19",
"@cite_27",
"@cite_21"
],
"mid": [
"2070366435",
"2087398646",
"1970777322"
],
"abstract": [
"We present a method for accurately predicting the long time popularity of online content from early measurements of user's access. Using two content sharing portals, Youtube and Digg, we show that by modeling the accrual of views and votes on content offered by these services we can predict the long-term dynamics of individual submissions from initial data. In the case of Digg, measuring access to given stories during the first two hours allows us to forecast their popularity 30 days ahead with remarkable accuracy, while downloads of Youtube videos need to be followed for 10 days to attain the same performance. The differing time scales of the predictions are shown to be due to differences in how content is consumed on the two portals: Digg stories quickly become outdated, while Youtube videos are still found long after they are initially submitted to the portal. We show that predictions are more accurate for submissions for which attention decays quickly, whereas predictions for evergreen content will be prone to larger errors.",
"Viral videos that gain popularity through the process of Internet sharing are having a profound impact on society. Existing studies on viral videos have only been on small or confidential datasets. We collect by far the largest open benchmark for viral video study called CMU Viral Video Dataset, and share it with researchers from both academia and industry. Having verified existing observations on the dataset, we discover some interesting characteristics of viral videos. Based on our analysis, in the second half of the paper, we propose a model to forecast the future peak day of viral videos. The application of our work is not only important for advertising agencies to plan advertising campaigns and estimate costs, but also for companies to be able to quickly respond to rivals in viral marketing campaigns. The proposed method is unique in that it is the first attempt to incorporate video metadata into the peak day prediction. The empirical results demonstrate that the proposed method outperforms the state-of-the-art methods, with statistically significant differences.",
"Content popularity prediction finds application in many areas, including media advertising, content caching, movie revenue estimation, traffic management and macro-economic trends forecasting, to name a few. However, predicting this popularity is difficult due to, among others, the effects of external phenomena, the influence of context such as locality and relevance to users,and the difficulty of forecasting information cascades. In this paper we identify patterns of temporal evolution that are generalisable to distinct types of data, and show that we can (1) accurately classify content based on the evolution of its popularity over time and (2) predict the value of the content's future popularity. We verify the generality of our method by testing it on YouTube, Digg and Vimeo data sets and find our results to outperform the K-Means baseline when classifying the behaviour of content and the linear regression baseline when predicting its popularity."
]
} |
1611.09083 | 2559451495 | With the growth of user-generated content, we observe the constant rise of the number of companies, such as search engines, content aggregators, etc., that operate with tremendous amounts of web content not being the services hosting it. Thus, aiming to locate the most important content and promote it to the users, they face the need of estimating the current and predicting the future content popularity. In this paper, we approach the problem of video popularity prediction not from the side of a video hosting service, as done in all previous studies, but from the side of an operating company, which provides a popular video search service that aggregates content from different video hosting websites. We investigate video popularity prediction based on features from three primary sources available for a typical operating company: first, the content hosting provider may deliver its data via its API, second, the operating company makes use of its own search and browsing logs, third, the company crawls information about embeds of a video and links to a video page from publicly available resources on the Web. We show that video popularity prediction based on the embed and link data coupled with the internal search and browsing data significantly improves video popularity prediction based only on the data provided by the video hosting and can even adequately replace the API data in the cases when it is partly or completely unavailable. | The research described in @cite_3 was devoted to predicting future video popularity in terms of the shares of a video in online social networks like Facebook. Analogous work was done in @cite_15 @cite_0 : the authors collected the share data without being inside the social network company, by receiving it from end users. A similar study of sharing behavior was carried out for Twitter @cite_4 . 
Further studies concentrated on more sophisticated models and features, such as sentiments extracted from video frames and user comments @cite_34 @cite_17 . The described methods could not be used in our work because they rely either on data that is not publicly available to third parties or on the APIs of those social platforms. This means that a search engine deciding to rely on that data would need to at least use the APIs of those services, while the goal of this study is to show to what extent an operating company can be independent of any APIs, even of the seemingly more important APIs of video hosting services. | {
"cite_N": [
"@cite_4",
"@cite_3",
"@cite_0",
"@cite_15",
"@cite_34",
"@cite_17"
],
"mid": [
"2072219467",
"2029535444",
"1976214674",
"2059933182",
"1977672204",
"2410819574"
],
"abstract": [
"By combining multiple social media datasets, it is possible to gain insight into each dataset that goes beyond what could be obtained with either individually. In this paper we combine user-centric data from Twitter with video-centric data from YouTube to build a rich picture of who watches and shares what on YouTube. We study 87K Twitter users, 5.6 million YouTube videos and 15 million video sharing events from user-, video- and sharing-event-centric perspectives. We show that features of Twitter users correlate with YouTube features and sharing-related features. For example, urban users are quicker to share than rural users. We find a superlinear relationship between initial Twitter shares and the final amounts of views. We discover that Twitter activity metrics play more role in video popularity than mere amount of followers. We also reveal the existence of correlated behavior concerning the time between video creation and sharing within certain timescales, showing the time onset for a coherent response, and the time limit after which collective responses are extremely unlikely. Response times depend on the category of the video, suggesting Twitter video sharing is highly dependent on the video content. To the best of our knowledge, this is the first large-scale study combining YouTube and Twitter data, and it reveals novel, detailed insights into who watches (and shares) what on YouTube, and when.",
"Popularity prediction, with both technological and economic importance, has been extensively studied for conventional video sharing sites (VSSes), where the videos are mainly found via searching, browsing, or related links. Recent statistics however suggest that online social network (OSN) users regularly share video contents from VSSes, which has contributed to a significant portion of the accesses; yet the popularity prediction in this new context remains largely unexplored. In this paper, we present an initial study on the popularity prediction of videos propagated in OSNs along friendship links. We conduct a large-scale measurement and analysis of viewing patterns of videos shared in one of largest OSNs in China, and examine the performance of typical views-based prediction models. We find that they are generally ineffective, if not totally fail, especially when predicting the early peaks and later bursts of accesses, which are common during video propagations in OSNs. To overcome these limits, we track the propagation process of videos shared in a Facebook-like OSN in China, and analyze the user viewing and sharing behaviors. We accordingly develop a novel propagation-based video popularity prediction solution, namely SoVP. Instead of relying solely on the early views for prediction, SoVP considers both the intrinsic attractiveness of a video and the influence from the underlying propagation structure. The effectiveness of SoVP, particularly for predicting the peaks and bursts, have been validated through our trace-driven experiments.",
"Multimedia streaming service providers such as YouTube often need to design caching strategy based on predicted future media content propagation pattern. It is commonly observed that social network behavior tends to have great correlation with the content propagation pattern. In this paper, we propose an Advanced Independent Cascade Model (AICM) which is an extension of the existing ICM method to predict the future propagation pattern of multimedia (YouTube) content based on scraped social network (Facebook) user profiles in terms of their “talkativeness” and “influential power”. Simulation results suggest that the proposed AICM is reasonable and consistent.",
"The recent popularity of social networking websites have resulted in a greater usage of internet bandwidth for sharing multimedia content through websites such as Facebook and YouTube. Moving large volumes of multi-media data through limited network resources remains a technical challenge to this day. The current state-of-art solution in optimizing cache server utilization depends heavily on efficient caching policies to determine content priority. This paper proposes a Fast Threshold Spread Model (FTSM) to predict the future access pattern of multi-media content based on the social information of its past viewers. The prediction results are compared and evaluated against ground truth statistics of the respective YouTube video. A complexity analysis on the proposed algorithm for large datasets along with the correlation between Facebook social sharing and YouTube global hit count are explored.",
"Video popularity prediction plays a foundational role in many aspects of life, such as recommendation systems and investment consulting. Because of its technological and economic importance, this problem has been extensively studied for years. However, four constraints have limited most related works' usability. First, most feature oriented models are inadequate in the social media environment, because many videos are published with no specific content features, such as a strong cast or a famous script. Second, many studies assume that there is a linear correlation existing between view counts from early and later days, but this is not the case in every scenario. Third, numerous works just take view counts into consideration, but discount associated sentiments. Nevertheless, it is the public opinions that directly drive a video's final success failure. Also, many related approaches rely on a network topology, but such topologies are unavailable in many situations. Here, we propose a Dual Sentimental Hawkes Process (DSHP) to cope with all the problems above. DSHP's innovations are reflected in three ways: (1) it breaks the \"Linear Correlation\" assumption, and implements Hawkes Process; (2) it reveals deeper factors that affect a video's popularity; and (3) it is topology free. We evaluate DSHP on four types of videos: Movies, TV Episodes, Music Videos, and Online News, and compare its performance against 6 widely used models, including Translation Model, Multiple Linear Regression, KNN Regression, ARMA, Reinforced Poisson Process, and Univariate Hawkes Process. Our model outperforms all of the others, which indicates a promising application prospect.",
"Hundreds of hours of videos are uploaded every minute on YouTube and other video sharing sites: some will be viewed by millions of people and other will go unnoticed by all but the uploader. In this paper we propose to use visual sentiment and content features to predict the popularity of web videos. The proposed approach outperforms current state-of-the-art methods on two publicly available datasets."
]
} |
1611.09083 | 2559451495 | With the growth of user-generated content, we observe the constant rise of the number of companies, such as search engines, content aggregators, etc., that operate with tremendous amounts of web content not being the services hosting it. Thus, aiming to locate the most important content and promote it to the users, they face the need of estimating the current and predicting the future content popularity. In this paper, we approach the problem of video popularity prediction not from the side of a video hosting service, as done in all previous studies, but from the side of an operating company, which provides a popular video search service that aggregates content from different video hosting websites. We investigate video popularity prediction based on features from three primary sources available for a typical operating company: first, the content hosting provider may deliver its data via its API, second, the operating company makes use of its own search and browsing logs, third, the company crawls information about embeds of a video and links to a video page from publicly available resources on the Web. We show that video popularity prediction based on the embed and link data coupled with the internal search and browsing data significantly improves video popularity prediction based only on the data provided by the video hosting and can even adequately replace the API data in the cases when it is partly or completely unavailable. | Prediction of web content (not only video) popularity is a well known and widely investigated problem @cite_9 . Prediction methods similar to the ones described in the previous subsection were applied to predict popularity of web pages in general @cite_19 @cite_12 @cite_14 and for popularity of news measured in comment counts @cite_6 . The popularity of tweets in terms of the number of retweets and shows was studied in @cite_26 @cite_13 @cite_23 . | {
"cite_N": [
"@cite_14",
"@cite_26",
"@cite_9",
"@cite_6",
"@cite_19",
"@cite_23",
"@cite_13",
"@cite_12"
],
"mid": [
"2143551222",
"2127267264",
"2061069417",
"324006174",
"2070366435",
"2172765159",
"2080290417",
"2096135266"
],
"abstract": [
"Online social media provide multiple ways to find interesting content. One important method is highlighting content recommended by user’s friends. We examine this process on one such site, the news aggregator Digg. With a stochastic model of user behavior, we distinguish the effects of the content visibility and interestingness to users. We find a wide range of interest and distinguish stories primarily of interest to a users’ friends from those of interest to the entire user community. We show how this model predicts a story’s eventual popularity from users’ early reactions to it, and estimate the prediction reliability. This modeling framework can help evaluate alternative design choices for displaying content on the site.",
"Social network services have become a viable source of information for users. In Twitter, information deemed important by the community propagates through retweets. Studying the characteristics of such popular messages is important for a number of tasks, such as breaking news detection, personalized message recommendation, viral marketing and others. This paper investigates the problem of predicting the popularity of messages as measured by the number of future retweets and sheds some light on what kinds of factors influence information propagation in Twitter. We formulate the task into a classification problem and study two of its variants by investigating a wide spectrum of features based on the content of the messages, temporal information, metadata of messages and users, as well as structural properties of the users' social graph on a large scale dataset. We show that our method can successfully predict messages which will attract thousands of retweets with good performance.",
"News articles are an engaging type of online content that captures the attention of a significant amount of Internet users. They are particularly enjoyed by mobile users and massively spread through online social platforms. As a result, there is an increased interest in discovering the articles that will become popular among users. This objective falls under the broad scope of content popularity prediction and has direct implications in the development of new services for online advertisement and content distribution. In this paper, we address the problem of predicting the popularity of news articles based on user comments. We formulate the prediction task as a ranking problem, where the goal is not to infer the precise attention that a content will receive but to accurately rank articles based on their predicted popularity. Using data obtained from two important news sites in France and Netherlands, we analyze the ranking effectiveness of two prediction models. Our results indicate that popularity prediction methods are adequate solutions for this ranking task and could be considered as a valuable alternative for automatic online news ranking.",
"",
"We present a method for accurately predicting the long time popularity of online content from early measurements of user's access. Using two content sharing portals, Youtube and Digg, we show that by modeling the accrual of views and votes on content offered by these services we can predict the long-term dynamics of individual submissions from initial data. In the case of Digg, measuring access to given stories during the first two hours allows us to forecast their popularity 30 days ahead with remarkable accuracy, while downloads of Youtube videos need to be followed for 10 days to attain the same performance. The differing time scales of the predictions are shown to be due to differences in how content is consumed on the two portals: Digg stories quickly become outdated, while Youtube videos are still found long after they are initially submitted to the portal. We show that predictions are more accurate for submissions for which attention decays quickly, whereas predictions for evergreen content will be prone to larger errors.",
"We study information dissemination in Twitter. We present an analysis of two important characteristics of so called retweet cascades: retweet count and show count, i.e., number of users that receive the tweet in their feed. We show that these two measures behave differently. We describe three models that are aimed to predict the audience size of a tweet: first one utilizes only the data available at the moment of the initial tweet, the second one utilizes the spread of the cascade up to some moment while the third one is an online prediction.",
"Retweet cascades play an essential role in information diffusion in Twitter. Popular tweets reflect the current trends in Twitter, while Twitter itself is one of the most important online media. Thus, understanding the reasons why a tweet becomes popular is of great interest for sociologists, marketers and social media researches. What is even more important is the possibility to make a prognosis of a tweet's future popularity. Besides the scientific significance of such possibility, this sort of prediction has lots of practical applications such as breaking news detection, viral marketing etc. In this paper we try to forecast how many retweets a given tweet will gain during a fixed time period. We train an algorithm that predicts the number of retweets during time T since the initial moment. In addition to a standard set of features we utilize several new ones. One of the most important features is the flow of the cascade. Another one is PageRank on the retweet graph, which can be considered as the measure of influence of users.",
"Popularity of content in social media is unequally distributed, with some items receiving a disproportionate share of attention from users. Predicting which newly-submitted items will become popular is critically important for both companies that host social media sites and their users. Accurate and timely prediction would enable the companies to maximize revenue through differential pricing for access to content or ad placement. Prediction would also give consumers an important tool for filtering the ever-growing amount of content. Predicting popularity of content in social media, however, is challenging due to the complex interactions among content quality, how the social media site chooses to highlight content, and influence among users. While these factors make it difficult to predict popularity a priori, we show that stochastic models of user behavior on these sites allows predicting popularity based on early user reactions to new content. By incorporating aspects of the web site design, such models improve on predictions based on simply extrapolating from the early votes. We validate this claim on the social news portal Digg using a previously-developed model of social voting based on the Digg user interface."
]
} |
1611.09083 | 2559451495 | With the growth of user-generated content, we observe the constant rise of the number of companies, such as search engines, content aggregators, etc., that operate with tremendous amounts of web content not being the services hosting it. Thus, aiming to locate the most important content and promote it to the users, they face the need of estimating the current and predicting the future content popularity. In this paper, we approach the problem of video popularity prediction not from the side of a video hosting service, as done in all previous studies, but from the side of an operating company, which provides a popular video search service that aggregates content from different video hosting websites. We investigate video popularity prediction based on features from three primary sources available for a typical operating company: first, the content hosting provider may deliver its data via its API, second, the operating company makes use of its own search and browsing logs, third, the company crawls information about embeds of a video and links to a video page from publicly available resources on the Web. We show that video popularity prediction based on the embed and link data coupled with the internal search and browsing data significantly improves video popularity prediction based only on the data provided by the video hosting and can even adequately replace the API data in the cases when it is partly or completely unavailable. | The usefulness of content popularity prediction for search engines was discussed in @cite_29 @cite_16 @cite_9 where the prediction quality was estimated with ranking metrics for popularity of news articles and published jokes. Hence, in our study, we also evaluate the performance of our predictors by means of NDCG (one of the most popular ranking metrics @cite_22 ). | {
"cite_N": [
"@cite_9",
"@cite_29",
"@cite_16",
"@cite_22"
],
"mid": [
"2061069417",
"2056832611",
"2091317512",
"2069870183"
],
"abstract": [
"News articles are an engaging type of online content that captures the attention of a significant amount of Internet users. They are particularly enjoyed by mobile users and massively spread through online social platforms. As a result, there is an increased interest in discovering the articles that will become popular among users. This objective falls under the broad scope of content popularity prediction and has direct implications in the development of new services for online advertisement and content distribution. In this paper, we address the problem of predicting the popularity of news articles based on user comments. We formulate the prediction task as a ranking problem, where the goal is not to infer the precise attention that a content will receive but to accurately rank articles based on their predicted popularity. Using data obtained from two important news sites in France and Netherlands, we analyze the ranking effectiveness of two prediction models. Our results indicate that popularity prediction methods are adequate solutions for this ranking task and could be considered as a valuable alternative for automatic online news ranking.",
"News articles are a captivating type of online content that capture a significant amount of Internet users' interest. They are particularly consumed by mobile users and extremely diffused through online social platforms. As a result, there is an increased interest in promptly identifying the articles that will receive a significant amount of user attention. This task falls under the broad scope of content popularity prediction and has direct implications in various contexts such as caching strategies or online advertisement policies. In this paper we address the problem of predicting the popularity of news articles based on user comments. We formulate the prediction task into a ranking problem where the goal is not to infer the precise attention that a content will receive but to accurately rank articles based on their predicted popularity. To this end, we analyze the ranking performance of three prediction models using a dataset of articles covering a four-year period and published by 20minutes.fr, an important French online news platform. Our results indicate that prediction methods improve the ranking performance and we observed that for our dataset a simple linear prediction method outperforms more dedicated prediction methods.",
"Prediction of popular items in online content sharing systems has recently attracted a lot of attention due to the tremendous need of users and its commercial values. Different from previous works that make prediction by fitting a popularity growth model, we tackle this problem by exploiting the latent conforming and maverick personalities of those who vote to assess the quality of on-line items. We argue that the former personality prompts a user to cast her vote conforming to the majority of the service community while on the contrary the later personality makes her vote different from the community. We thus propose a Conformer-Maverick (CM) model to simulate the voting process and use it to rank top-k potentially popular items based on the early votes they received. Through an extensive experimental evaluation, we validate our ideas and find that our proposed CM model achieves better performance than baseline solutions, especially for smaller k.",
"Modern large retrieval environments tend to overwhelm their users by their large output. Since all documents are not of equal relevance to their users, highly relevant documents should be identified and ranked first for presentation. In order to develop IR techniques in this direction, it is necessary to develop evaluation approaches and methods that credit IR methods for their ability to retrieve highly relevant documents. This can be done by extending traditional evaluation methods, that is, recall and precision based on binary relevance judgments, to graded relevance judgments. Alternatively, novel measures based on graded relevance judgments may be developed. This article proposes several novel measures that compute the cumulative gain the user obtains by examining the retrieval result up to a given ranked position. The first one accumulates the relevance scores of retrieved documents along the ranked result list. The second one is similar but applies a discount factor to the relevance scores in order to devaluate late-retrieved documents. The third one computes the relative-to-the-ideal performance of IR techniques, based on the cumulative gain they are able to yield. These novel measures are defined and discussed and their use is demonstrated in a case study using TREC data: sample system run results for 20 queries in TREC-7. As a relevance base we used novel graded relevance judgments on a four-point scale. The test results indicate that the proposed measures credit IR methods for their ability to retrieve highly relevant documents and allow testing of statistical significance of effectiveness differences. The graphs based on the measures also provide insight into the performance IR techniques and allow interpretation, for example, from the user point of view."
]
} |
1611.08663 | 2520613337 | Zero-Shot Learning (ZSL) promises to scale visual recognition by bypassing the conventional model training requirement of annotated examples for every category. This is achieved by establishing a mapping connecting low-level features and a semantic description of the label space, referred as visual-semantic mapping, on auxiliary data. Re-using the learned mapping to project target videos into an embedding space thus allows novel-classes to be recognised by nearest neighbour inference. However, existing ZSL methods suffer from auxiliary-target domain shift intrinsically induced by assuming the same mapping for the disjoint auxiliary and target classes. This compromises the generalisation accuracy of ZSL recognition on the target data. In this work, we improve the ability of ZSL to generalise across this domain shift in both model- and data-centric ways by formulating a visual-semantic mapping with better generalisation properties and a dynamic data re-weighting method to prioritise auxiliary data that are relevant to the target classes. Specifically: (1) We introduce a multi-task visual-semantic mapping to improve generalisation by constraining the semantic mapping parameters to lie on a low-dimensional manifold, (2) We explore prioritised data augmentation by expanding the pool of auxiliary data with additional instances weighted by relevance to the target domain. The proposed new model is applied to the challenging zero-shot action recognition problem to demonstrate its advantages over existing ZSL models. | Zero-shot Learning (ZSL) @cite_5 aims to generalize existing knowledge to recognize new categories without training examples by re-using a mapping learned from visual features to their semantic embeddings. Commonly used label embeddings are semantic attributes @cite_5 @cite_27 @cite_24 and word-vectors @cite_33 @cite_39 . The latter has the advantage of being learned from data without requiring manual annotation. 
Commonly used visual-semantic mappings include linear @cite_9 and non-linear regression @cite_24 @cite_33 @cite_39 , classification @cite_5 @cite_27 , and bilinear ranking @cite_12 . | {
"cite_N": [
"@cite_33",
"@cite_9",
"@cite_39",
"@cite_24",
"@cite_27",
"@cite_5",
"@cite_12"
],
"mid": [
"2950276680",
"1542713999",
"1805946669",
"",
"2064851185",
"2134270519",
"2044913453"
],
"abstract": [
"This work introduces a model that can recognize objects in images even if no training data is available for the objects. The only necessary knowledge about the unseen categories comes from unsupervised large text corpora. In our zero-shot framework distributional information in language can be seen as spanning a semantic basis for understanding what objects look like. Most previous zero-shot learning models can only differentiate between unseen classes. In contrast, our model can both obtain state of the art performance on classes that have thousands of training images and obtain reasonable performance on unseen classes. This is achieved by first using outlier detection in the semantic space and then two separate recognition models. Furthermore, our model does not require any manually defined semantic features for either words or images.",
"The zero-shot paradigm exploits vector-based word representations extracted from text corpora with unsupervised methods to learn general mapping functions from other feature spaces onto word space, where the words associated to the nearest neighbours of the mapped vectors are used as their linguistic labels. We show that the neighbourhoods of the mapped elements are strongly polluted by hubs, vectors that tend to be near a high proportion of items, pushing their correct labels down the neighbour list. After illustrating the problem empirically, we propose a simple method to correct it by taking the proximity distribution of potential neighbours across many mapped vectors into account. We show that this correction leads to consistent improvements in realistic zero-shot experiments in the cross-lingual, image labeling and image retrieval domains.",
"The number of categories for action recognition is growing rapidly. It is thus becoming increasingly hard to collect sufficient training data to learn conventional models for each category. This issue may be ameliorated by the increasingly popular “zero-shot learning” (ZSL) paradigm. In this framework a mapping is constructed between visual features and a human interpretable semantic description of each category, allowing categories to be recognised in the absence of any training data. Existing ZSL studies focus primarily on image data, and attribute-based semantic representations. In this paper, we address zero-shot recognition in contemporary video action recognition tasks, using semantic word vector space as the common space to embed videos and category labels. This is more challenging because the mapping between the semantic space and space-time features of videos containing complex actions is more complex and harder to learn. We demonstrate that a simple self-training and data augmentation strategy can significantly improve the efficacy of this mapping. Experiments on human action datasets including HMDB51 and UCF101 demonstrate that our approach achieves the state-of-the-art zero-shot action recognition performance.",
"",
"In this paper we explore the idea of using high-level semantic concepts, also called attributes, to represent human actions from videos and argue that attributes enable the construction of more descriptive models for human action recognition. We propose a unified framework wherein manually specified attributes are: i) selected in a discriminative fashion so as to account for intra-class variability; ii) coherently integrated with data-driven attributes to make the attribute set more descriptive. Data-driven attributes are automatically inferred from the training data using an information theoretic approach. Our framework is built upon a latent SVM formulation where latent variables capture the degree of importance of each attribute for each action class. We also demonstrate that our attribute-based action representation can be effectively used to design a recognition procedure for classifying novel action classes for which no training samples are available. We test our approach on several publicly available datasets and obtain promising results that quantitatively demonstrate our theoretical claims.",
"We study the problem of object classification when training and test classes are disjoint, i.e. no training examples of the target classes are available. This setup has hardly been studied in computer vision research, but it is the rule rather than the exception, because the world contains tens of thousands of different object classes and for only a very few of them image, collections have been formed and annotated with suitable class labels. In this paper, we tackle the problem by introducing attribute-based classification. It performs object detection based on a human-specified high-level description of the target objects instead of training images. The description consists of arbitrary semantic attributes, like shape, color or even geographic information. Because such properties transcend the specific learning task at hand, they can be pre-learned, e.g. from image datasets unrelated to the current task. Afterwards, new classes can be detected based on their attribute representation, without the need for a new training phase. In order to evaluate our method and to facilitate research in this area, we have assembled a new large-scale dataset, “Animals with Attributes”, of over 30,000 animal images that match the 50 classes in Osherson's classic table of how strongly humans associate 85 semantic attributes with animal classes. Our experiments show that by using an attribute layer it is indeed possible to build a learning object detection system that does not require any training images of the target classes.",
"Image classification has advanced significantly in recent years with the availability of large-scale image sets. However, fine-grained classification remains a major challenge due to the annotation cost of large numbers of fine-grained categories. This project shows that compelling classification performance can be achieved on such categories even without labeled training data. Given image and class embeddings, we learn a compatibility function such that matching embeddings are assigned a higher score than mismatching ones; zero-shot classification of an image proceeds by finding the label yielding the highest joint compatibility score. We use state-of-the-art image features and focus on different supervised attributes and unsupervised output embeddings either derived from hierarchies or learned from unlabeled text corpora. We establish a substantially improved state-of-the-art on the Animals with Attributes and Caltech-UCSD Birds datasets. Most encouragingly, we demonstrate that purely unsupervised output embeddings (learned from Wikipedia and improved with finegrained text) achieve compelling results, even outperforming the previous supervised state-of-the-art. By combining different output embeddings, we further improve results."
]
} |
1611.08663 | 2520613337 | Zero-Shot Learning (ZSL) promises to scale visual recognition by bypassing the conventional model training requirement of annotated examples for every category. This is achieved by establishing a mapping connecting low-level features and a semantic description of the label space, referred as visual-semantic mapping, on auxiliary data. Re-using the learned mapping to project target videos into an embedding space thus allows novel-classes to be recognised by nearest neighbour inference. However, existing ZSL methods suffer from auxiliary-target domain shift intrinsically induced by assuming the same mapping for the disjoint auxiliary and target classes. This compromises the generalisation accuracy of ZSL recognition on the target data. In this work, we improve the ability of ZSL to generalise across this domain shift in both model- and data-centric ways by formulating a visual-semantic mapping with better generalisation properties and a dynamic data re-weighting method to prioritise auxiliary data that are relevant to the target classes. Specifically: (1) We introduce a multi-task visual-semantic mapping to improve generalisation by constraining the semantic mapping parameters to lie on a low-dimensional manifold, (2) We explore prioritised data augmentation by expanding the pool of auxiliary data with additional instances weighted by relevance to the target domain. The proposed new model is applied to the challenging zero-shot action recognition problem to demonstrate its advantages over existing ZSL models. | Existing ZSL methods suffer from weak generalisation due to the domain-shift induced by disjoint auxiliary-target classes, an issue that has recently been highlighted explicitly in the literature @cite_2 @cite_24 @cite_9 @cite_13 . 
Attempts to address this so far include post-processing heuristics @cite_24 @cite_9 @cite_13 , sparse coding regularisation @cite_2 , and simple blind enlarging of the training set with auxiliary data @cite_39 . In contrast to @cite_2 @cite_39 , we focus on: (1) Building a visual-semantic mapping with intrinsically better generalisation properties, and (2) re-weighting the auxiliary set to prioritise auxiliary instances most relevant to the target instances and classes. Our method is complementary to @cite_24 @cite_9 and can benefit from these heuristics. | {
"cite_N": [
"@cite_9",
"@cite_39",
"@cite_24",
"@cite_2",
"@cite_13"
],
"mid": [
"1542713999",
"1805946669",
"",
"1960364170",
"2250646737"
],
"abstract": [
"The zero-shot paradigm exploits vector-based word representations extracted from text corpora with unsupervised methods to learn general mapping functions from other feature spaces onto word space, where the words associated to the nearest neighbours of the mapped vectors are used as their linguistic labels. We show that the neighbourhoods of the mapped elements are strongly polluted by hubs, vectors that tend to be near a high proportion of items, pushing their correct labels down the neighbour list. After illustrating the problem empirically, we propose a simple method to correct it by taking the proximity distribution of potential neighbours across many mapped vectors into account. We show that this correction leads to consistent improvements in realistic zero-shot experiments in the cross-lingual, image labeling and image retrieval domains.",
"The number of categories for action recognition is growing rapidly. It is thus becoming increasingly hard to collect sufficient training data to learn conventional models for each category. This issue may be ameliorated by the increasingly popular “zero-shot learning” (ZSL) paradigm. In this framework a mapping is constructed between visual features and a human interpretable semantic description of each category, allowing categories to be recognised in the absence of any training data. Existing ZSL studies focus primarily on image data, and attribute-based semantic representations. In this paper, we address zero-shot recognition in contemporary video action recognition tasks, using semantic word vector space as the common space to embed videos and category labels. This is more challenging because the mapping between the semantic space and space-time features of videos containing complex actions is more complex and harder to learn. We demonstrate that a simple self-training and data augmentation strategy can significantly improve the efficacy of this mapping. Experiments on human action datasets including HMDB51 and UCF101 demonstrate that our approach achieves the state-of-the-art zero-shot action recognition performance.",
"",
"Object recognition by zero-shot learning (ZSL) aims to recognise objects without seeing any visual examples by learning knowledge transfer between seen and unseen object classes. This is typically achieved by exploring a semantic embedding space such as attribute space or semantic word vector space. In such a space, both seen and unseen class labels, as well as image features can be embedded (projected), and the similarity between them can thus be measured directly. Existing works differ in what embedding space is used and how to project the visual data into the semantic embedding space. Yet, they all measure the similarity in the space using a conventional distance metric (e.g. cosine) that does not consider the rich intrinsic structure, i.e. semantic manifold, of the semantic categories in the embedding space. In this paper we propose to model the semantic manifold in an embedding space using a semantic class label graph. The semantic manifold structure is used to redefine the distance metric in the semantic embedding space for more effective ZSL. The proposed semantic manifold distance is computed using a novel absorbing Markov chain process (AMP), which has a very efficient closed-form solution. The proposed new model improves upon and seamlessly unifies various existing ZSL algorithms. Extensive experiments on both the large scale ImageNet dataset and the widely used Animal with Attribute (AwA) dataset show that our model outperforms significantly the state-of-the-arts.",
"Zero-shot methods in language, vision and other domains rely on a cross-space mapping function that projects vectors from the relevant feature space (e.g., visualfeature-based image representations) to a large semantic word space (induced in an unsupervised way from corpus data), where the entities of interest (e.g., objects images depict) are labeled with the words associated to the nearest neighbours of the mapped vectors. Zero-shot cross-space mapping methods hold great promise as a way to scale up annotation tasks well beyond the labels in the training data (e.g., recognizing objects that were never seen in training). However, the current performance of cross-space mapping functions is still quite low, so that the strategy is not yet usable in practical applications. In this paper, we explore some general properties, both theoretical and empirical, of the cross-space mapping function, and we build on them to propose better methods to estimate it. In this way, we attain large improvements over the state of the art, both in cross-linguistic (word translation) and cross-modal (image labeling) zero-shot experiments."
]
} |
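The absorbing Markov chain process (AMP) in the abstract above is said to have a very efficient closed-form solution; that closed form rests on the textbook fundamental-matrix identity B = (I - Q)^{-1} R for absorption probabilities. Below is a minimal sketch of that computation with a toy, illustrative transition matrix — not the paper's actual semantic class graph:

```python
import numpy as np

def absorption_probabilities(Q, R):
    """Closed-form absorption probabilities of an absorbing Markov chain.

    Q: transient-to-transient transition block (n x n)
    R: transient-to-absorbing transition block (n x m)
    Returns B = (I - Q)^{-1} R, where B[i, j] is the probability that the
    chain started in transient state i is absorbed in absorbing state j.
    """
    n = Q.shape[0]
    N = np.linalg.inv(np.eye(n) - Q)  # fundamental matrix
    return N @ R

# Toy chain: 2 transient states, 2 absorbing states (each full row sums to 1).
Q = np.array([[0.2, 0.3],
              [0.4, 0.1]])
R = np.array([[0.5, 0.0],
              [0.0, 0.5]])
B = absorption_probabilities(Q, R)
```

Since absorption is certain in a finite absorbing chain, each row of `B` sums to one — a quick sanity check on the construction.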
1611.08663 | 2520613337 | Zero-Shot Learning (ZSL) promises to scale visual recognition by bypassing the conventional model training requirement of annotated examples for every category. This is achieved by establishing a mapping connecting low-level features and a semantic description of the label space, referred to as the visual-semantic mapping, on auxiliary data. Re-using the learned mapping to project target videos into an embedding space thus allows novel classes to be recognised by nearest neighbour inference. However, existing ZSL methods suffer from auxiliary-target domain shift intrinsically induced by assuming the same mapping for the disjoint auxiliary and target classes. This compromises the generalisation accuracy of ZSL recognition on the target data. In this work, we improve the ability of ZSL to generalise across this domain shift in both model- and data-centric ways by formulating a visual-semantic mapping with better generalisation properties and a dynamic data re-weighting method to prioritise auxiliary data that are relevant to the target classes. Specifically: (1) We introduce a multi-task visual-semantic mapping to improve generalisation by constraining the semantic mapping parameters to lie on a low-dimensional manifold, (2) We explore prioritised data augmentation by expanding the pool of auxiliary data with additional instances weighted by relevance to the target domain. The proposed new model is applied to the challenging zero-shot action recognition problem to demonstrate its advantages over existing ZSL models. | Among many ZSL tasks in computer vision, zero-shot action recognition @cite_27 @cite_39 @cite_21 @cite_8 @cite_42 is of particular interest because labelled video is less readily available than image data, and because videos are more difficult to label than static images due to their extended temporal duration and more complex ontology. 
ZSL action recognition is much less studied than still image recognition, and existing video-ZSL methods suffer from the same domain-shift drawbacks highlighted above. | {
"cite_N": [
"@cite_8",
"@cite_21",
"@cite_42",
"@cite_39",
"@cite_27"
],
"mid": [
"2289324734",
"2209594346",
"2238153681",
"1805946669",
"2064851185"
],
"abstract": [
"Existing action recognition algorithms require a set of positive exemplars to train a classifier for each action. However, the amount of action classes is very large and the users' queries vary dramatically. It is impractical to pre-define all possible action classes beforehand. To address this issue, we propose to perform action recognition with no positive exemplars, which is often known as the zero-shot learning. Current zero-shot learning paradigms usually train a series of attribute classifiers and then recognize the target actions based on the attribute representation. To ensure the maximum coverage of ad-hoc action classes, the attribute-based approaches require large numbers of reliable and accurate attribute classifiers, which are often unavailable in the real world. In this paper, we propose an approach that merely takes an action name as the input to recognize the action of interest without any pre-trained attribute classifiers and positive exemplars. Given an action name, we first build an analogy pool according to an external ontology, and each action in the analogy pool is related to the target action at different levels. The correlation information inferred from the external ontology may be noisy. We then propose an algorithm, namely adaptive multi-model rank-preserving mapping (AMRM), to train a classifier for action recognition, which is able to evaluate the relatedness of each video in the analogy pool adaptively. As multiple mapping models are employed, our algorithm has better capability to bridge the gap between visual features and the semantic information inferred from the ontology. Extensive experiments demonstrate that our method achieves the promising performance for action recognition only using action names, while no attributes and positive exemplars are available.",
"Zero-shot learning (ZSL) can be considered as a special case of transfer learning where the source and target domains have different tasks label spaces and the target domain is unlabelled, providing little guidance for the knowledge transfer. A ZSL method typically assumes that the two domains share a common semantic representation space, where a visual feature vector extracted from an image video can be projected embedded using a projection function. Existing approaches learn the projection function from the source domain and apply it without adaptation to the target domain. They are thus based on naive knowledge transfer and the learned projections are prone to the domain shift problem. In this paper a novel ZSL method is proposed based on unsupervised domain adaptation. Specifically, we formulate a novel regularised sparse coding framework which uses the target domain class labels' projections in the semantic space to regularise the learned target domain projection thus effectively overcoming the projection domain shift problem. Extensive experiments on four object and action recognition benchmark datasets show that the proposed ZSL method significantly outperforms the state-of-the-arts.",
"In this paper, we focus on automatically detecting events in unconstrained videos without the use of any visual training exemplars. In principle, zero-shot learning makes it possible to train an event detection model based on the assumption that events (e.g. birthday party) can be described by multiple mid-level semantic concepts (e.g. \"blowing candle\", \"birthday cake\"). Towards this goal, we first pre-train a bundle of concept classifiers using data from other sources. Then we evaluate the semantic correlation of each concept w.r.t. the event of interest and pick up the relevant concept classifiers, which are applied on all test videos to get multiple prediction score vectors. While most existing systems combine the predictions of the concept classifiers with fixed weights, we propose to learn the optimal weights of the concept classifiers for each testing video by exploring a set of online available videos with free-form text descriptions of their content. To validate the effectiveness of the proposed approach, we have conducted extensive experiments on the latest TRECVID MEDTest 2014, MEDTest 2013 and CCV dataset. The experimental results confirm the superiority of the proposed approach.",
"The number of categories for action recognition is growing rapidly. It is thus becoming increasingly hard to collect sufficient training data to learn conventional models for each category. This issue may be ameliorated by the increasingly popular “zero-shot learning” (ZSL) paradigm. In this framework a mapping is constructed between visual features and a human interpretable semantic description of each category, allowing categories to be recognised in the absence of any training data. Existing ZSL studies focus primarily on image data, and attribute-based semantic representations. In this paper, we address zero-shot recognition in contemporary video action recognition tasks, using semantic word vector space as the common space to embed videos and category labels. This is more challenging because the mapping between the semantic space and space-time features of videos containing complex actions is more complex and harder to learn. We demonstrate that a simple self-training and data augmentation strategy can significantly improve the efficacy of this mapping. Experiments on human action datasets including HMDB51 and UCF101 demonstrate that our approach achieves the state-of-the-art zero-shot action recognition performance.",
"In this paper we explore the idea of using high-level semantic concepts, also called attributes, to represent human actions from videos and argue that attributes enable the construction of more descriptive models for human action recognition. We propose a unified framework wherein manually specified attributes are: i) selected in a discriminative fashion so as to account for intra-class variability; ii) coherently integrated with data-driven attributes to make the attribute set more descriptive. Data-driven attributes are automatically inferred from the training data using an information theoretic approach. Our framework is built upon a latent SVM formulation where latent variables capture the degree of importance of each attribute for each action class. We also demonstrate that our attribute-based action representation can be effectively used to design a recognition procedure for classifying novel action classes for which no training samples are available. We test our approach on several publicly available datasets and obtain promising results that quantitatively demonstrate our theoretical claims."
]
} |
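The pipeline these zero-shot works share — learn a visual-semantic mapping on seen classes, then recognise disjoint unseen classes by nearest-neighbour matching in the embedding space — can be sketched as follows. The synthetic data, dimensions, and ridge regulariser below are illustrative assumptions, not the settings of any cited paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d_vis, d_sem, n = 40, 8, 300

A = rng.normal(size=(d_sem, d_vis))      # ground-truth semantic-to-visual map
Z_seen = rng.normal(size=(10, d_sem))    # word vectors of 10 seen classes
Z_unseen = rng.normal(size=(2, d_sem))   # word vectors of 2 disjoint unseen classes

# Auxiliary (seen-class) training videos: linear generative model + noise.
y_tr = rng.integers(0, 10, size=n)
X_tr = Z_seen[y_tr] @ A + 0.01 * rng.normal(size=(n, d_vis))

# Ridge regression for the visual-to-semantic mapping W (d_vis x d_sem).
lam = 1.0
W = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(d_vis), X_tr.T @ Z_seen[y_tr])

# Zero-shot inference: project unseen-class videos, match by cosine similarity.
y_te = rng.integers(0, 2, size=50)
X_te = Z_unseen[y_te] @ A + 0.01 * rng.normal(size=(50, d_vis))
P = X_te @ W
P /= np.linalg.norm(P, axis=1, keepdims=True)
Zn = Z_unseen / np.linalg.norm(Zn_raw := Z_unseen, axis=1, keepdims=True)
pred = (P @ Zn.T).argmax(axis=1)
acc = (pred == y_te).mean()
```

Because the unseen classes never appear in training, any accuracy above chance here comes purely from the learned mapping generalising across the class gap — exactly the property the domain-shift discussion above says is fragile in practice.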
1611.08663 | 2520613337 | Zero-Shot Learning (ZSL) promises to scale visual recognition by bypassing the conventional model training requirement of annotated examples for every category. This is achieved by establishing a mapping connecting low-level features and a semantic description of the label space, referred to as the visual-semantic mapping, on auxiliary data. Re-using the learned mapping to project target videos into an embedding space thus allows novel classes to be recognised by nearest neighbour inference. However, existing ZSL methods suffer from auxiliary-target domain shift intrinsically induced by assuming the same mapping for the disjoint auxiliary and target classes. This compromises the generalisation accuracy of ZSL recognition on the target data. In this work, we improve the ability of ZSL to generalise across this domain shift in both model- and data-centric ways by formulating a visual-semantic mapping with better generalisation properties and a dynamic data re-weighting method to prioritise auxiliary data that are relevant to the target classes. Specifically: (1) We introduce a multi-task visual-semantic mapping to improve generalisation by constraining the semantic mapping parameters to lie on a low-dimensional manifold, (2) We explore prioritised data augmentation by expanding the pool of auxiliary data with additional instances weighted by relevance to the target domain. The proposed new model is applied to the challenging zero-shot action recognition problem to demonstrate its advantages over existing ZSL models. | Multi-Task Learning (MTL) @cite_26 @cite_1 aims to improve generalisation in a set of supervised learning tasks by modelling and exploiting shared knowledge across the tasks. An early study @cite_15 proposed to model the weight vector for each task @math as the sum of a shared global task @math and a task-specific parameter vector @math . 
However, the assumption of a globally shared underlying task is too strong, and risks inducing negative transfer @cite_26 . This motivates the Grouping and Overlapping Multi-Task Learning (GOMTL) @cite_40 framework which instead assumes that each task's weight vector is a task-specific combination of a small set of latent basis tasks. This constrains the parameters of all tasks to lie on a low dimensional manifold. | {
"cite_N": [
"@cite_40",
"@cite_15",
"@cite_26",
"@cite_1"
],
"mid": [
"1942758450",
"2143104527",
"2165698076",
"1703030490"
],
"abstract": [
"In the paradigm of multi-task learning, multiple related prediction tasks are learned jointly, sharing information across the tasks. We propose a framework for multi-task learning that enables one to selectively share the information across the tasks. We assume that each task parameter vector is a linear combination of a finite number of underlying basis tasks. The coefficients of the linear combination are sparse in nature and the overlap in the sparsity patterns of two tasks controls the amount of sharing across these. Our model is based on the assumption that task parameters within a group lie in a low dimensional subspace but allows the tasks in different groups to overlap with each other in one or more bases. Experimental results on four datasets show that our approach outperforms competing methods.",
"Past empirical work has shown that learning multiple related tasks from data simultaneously can be advantageous in terms of predictive performance relative to learning these tasks independently. In this paper we present an approach to multi--task learning based on the minimization of regularization functionals similar to existing ones, such as the one for Support Vector Machines (SVMs), that have been successfully used in the past for single--task learning. Our approach allows to model the relation between tasks in terms of a novel kernel function that uses a task--coupling parameter. We implement an instance of the proposed approach similar to SVMs and test it empirically using simulated as well as real data. The experimental results show that the proposed method performs better than existing multi--task learning methods and largely outperforms single--task learning using SVMs.",
"A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.",
"In this paper, we provide a new neural-network based perspective on multi-task learning (MTL) and multi-domain learning (MDL). By introducing the concept of a semantic descriptor, this framework unifies MDL and MTL as well as encompassing various classic and recent MTL MDL algorithms by interpreting them as different ways of constructing semantic descriptors. Our interpretation provides an alternative pipeline for zero-shot learning (ZSL), where a model for a novel class can be constructed without training data. Moreover, it leads to a new and practically relevant problem setting of zero-shot domain adaptation (ZSDA), which is the analogous to ZSL but for novel domains: A model for an unseen domain can be generated by its semantic descriptor. Experiments across this range of problems demonstrate that our framework outperforms a variety of alternatives."
]
} |
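The GOMTL-style parameterisation described above — each task's weight vector as a sparse combination of a few shared latent basis tasks — can be sketched as below. All dimensions are illustrative; the point is only that stacking the task weight vectors as W = L S confines every task to a low-dimensional manifold:

```python
import numpy as np

rng = np.random.default_rng(1)
d, T, k = 50, 12, 3                 # feature dim, number of tasks, latent basis tasks

L = rng.normal(size=(d, k))         # shared latent basis tasks (columns)
S = rng.normal(size=(k, T))         # per-task combination coefficients
S[rng.random(S.shape) < 0.5] = 0.0  # sparsity: overlap in nonzeros controls sharing

# Each task weight vector w_t = L @ S[:, t]; stacked, W = L @ S has rank <= k,
# i.e. all T tasks lie on a k-dimensional manifold in R^d.
W = L @ S
rank = np.linalg.matrix_rank(W)
```

This contrasts with the earlier shared-global-task model (w_t = w_0 + v_t), which assumes a single underlying task for everything; here sharing is selective, governed by which basis tasks two columns of `S` jointly activate.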
1611.08663 | 2520613337 | Zero-Shot Learning (ZSL) promises to scale visual recognition by bypassing the conventional model training requirement of annotated examples for every category. This is achieved by establishing a mapping connecting low-level features and a semantic description of the label space, referred to as the visual-semantic mapping, on auxiliary data. Re-using the learned mapping to project target videos into an embedding space thus allows novel classes to be recognised by nearest neighbour inference. However, existing ZSL methods suffer from auxiliary-target domain shift intrinsically induced by assuming the same mapping for the disjoint auxiliary and target classes. This compromises the generalisation accuracy of ZSL recognition on the target data. In this work, we improve the ability of ZSL to generalise across this domain shift in both model- and data-centric ways by formulating a visual-semantic mapping with better generalisation properties and a dynamic data re-weighting method to prioritise auxiliary data that are relevant to the target classes. Specifically: (1) We introduce a multi-task visual-semantic mapping to improve generalisation by constraining the semantic mapping parameters to lie on a low-dimensional manifold, (2) We explore prioritised data augmentation by expanding the pool of auxiliary data with additional instances weighted by relevance to the target domain. The proposed new model is applied to the challenging zero-shot action recognition problem to demonstrate its advantages over existing ZSL models. | MTL methods have been studied for action recognition @cite_36 @cite_25 @cite_16 @cite_17 . However, all of these studies focus on improving standard supervised action recognition with multi-task sharing: for example, they treat each of multiple views @cite_16 @cite_17 , feature modalities @cite_25 , or -- most obviously -- action categories @cite_36 as different tasks. 
Multi-view and multi-feature recognition are orthogonal to our work, while the latter studies are concerned with supervised recognition and cannot be generalised to the ZSL scenario. In contrast, we take a very different approach and treat each dimension of the visual-semantic mapping as a task, in order to leverage MTL to improve auxiliary-target generalisation across the disjoint target categories. Finally, we note that using MTL to learn the visual-semantic mapping provides a further benefit: a lower-dimensional space in which zero-shot recognition can be performed more reliably, since nearest-neighbour matching is more meaningful there @cite_41 . | {
"cite_N": [
"@cite_36",
"@cite_41",
"@cite_16",
"@cite_25",
"@cite_17"
],
"mid": [
"2156392723",
"1672197616",
"1985254019",
"1970732218",
"2139191027"
],
"abstract": [
"Sharing knowledge for multiple related machine learning tasks is an effective strategy to improve the generalization performance. In this paper, we investigate knowledge sharing across categories for action recognition in videos. The motivation is that many action categories are related, where common motion pattern are shared among them (e.g. diving and high jump share the jump motion). We propose a new multi-task learning method to learn latent tasks shared across categories, and reconstruct a classifier for each category from these latent tasks. Compared to previous methods, our approach has two advantages: (1) The learned latent tasks correspond to basic motion patterns instead of full actions, thus enhancing discrimination power of the classifiers. (2) Categories are selected to share information with a sparsity regularizer, avoiding falsely forcing all categories to share knowledge. Experimental results on multiple public data sets show that the proposed approach can effectively transfer knowledge between different action categories to improve the performance of conventional single task learning methods.",
"We explore the effect of dimensionality on the \"nearest neighbor\" problem. We show that under a broad set of conditions (much broader than independent and identically distributed dimensions), as dimensionality increases, the distance to the nearest data point approaches the distance to the farthest data point. To provide a practical perspective, we present empirical results on both real and synthetic data sets that demonstrate that this effect can occur for as few as 10-15 dimensions. These results should not be interpreted to mean that high-dimensional indexing is never meaningful; we illustrate this point by identifying some high-dimensional workloads for which this effect does not occur. However, our results do emphasize that the methodology used almost universally in the database literature to evaluate high-dimensional indexing techniques is flawed, and should be modified. In particular, most such techniques proposed in the literature are not evaluated versus simple linear scan, and are evaluated over workloads for which nearest neighbor is not meaningful. Often, even the reported experiments, when analyzed carefully, show that linear scan would outperform the techniques being proposed on the workloads studied in high (10-15) dimensionality!",
"Abstract This paper proposes a unified single multi-view human action recognition method via regularized multi-task learning. First, we propose the pyramid partwise bag of words (PPBoW) representation which implicitly encodes both local visual characteristics and human body structure. Furthermore, we formulate the task of single multi-view human action recognition into a part-induced multi-task learning problem penalized by graph structure and sparsity to discover the latent correlation among multiple views and body parts and consequently boost the performances. The experiment shows that this method can significantly improve performance over the standard BoW+SVM method. Moreover, the proposed method can achieve competing performance simply with low dimensional PPBoW representation against the state-of-the-art methods for human action recognition on KTH and MV-TJU, a new multi-view action dataset with RGB, depth and skeleton data prepared by our group.",
"In this paper, we formulate human action recognition as a novel Multi-Task Sparse Learning(MTSL) framework which aims to construct a test sample with multiple features from as few bases as possible. Learning the sparse representation under each feature modality is considered as a single task in MTSL. Since the tasks are generated from multiple features associated with the same visual input, they are not independent but inter-related. We introduce a Beta process(BP) prior to the hierarchical MTSL model, which efficiently learns a compact dictionary and infers the sparse structure shared across all the tasks. The MTSL model enforces the robustness in coefficient estimation compared with performing each task independently. Besides, the sparseness is achieved via the Beta process formulation rather than the computationally expensive L1 norm penalty. In terms of non-informative gamma hyper-priors, the sparsity level is totally decided by the data. Finally, the learning problem is solved by Gibbs sampling inference which estimates the full posterior on the model parameters. Experimental results on the KTH and UCF sports datasets demonstrate the effectiveness of the proposed MTSL approach for action recognition.",
"This paper presents an approach to view-invariant action recognition, where human poses and motions exhibit large variations across different camera viewpoints. When each viewpoint of a given set of action classes is specified as a learning task then multitask learning appears suitable for achieving view invariance in recognition. We extend the standard multitask learning to allow identifying: (1) latent groupings of action views (i.e., tasks), and (2) discriminative action parts, along with joint learning of all tasks. This is because it seems reasonable to expect that certain distinct views are more correlated than some others, and thus identifying correlated views could improve recognition. Also, part-based modeling is expected to improve robustness against self-occlusion when actors are imaged from different views. Results on the benchmark datasets show that we outperform standard multitask learning by 21.9 , and the state-of-the-art alternatives by 4.5-6 ."
]
} |

1611.08663 | 2520613337 | Zero-Shot Learning (ZSL) promises to scale visual recognition by bypassing the conventional model training requirement of annotated examples for every category. This is achieved by establishing a mapping connecting low-level features and a semantic description of the label space, referred to as the visual-semantic mapping, on auxiliary data. Re-using the learned mapping to project target videos into an embedding space thus allows novel classes to be recognised by nearest neighbour inference. However, existing ZSL methods suffer from auxiliary-target domain shift intrinsically induced by assuming the same mapping for the disjoint auxiliary and target classes. This compromises the generalisation accuracy of ZSL recognition on the target data. In this work, we improve the ability of ZSL to generalise across this domain shift in both model- and data-centric ways by formulating a visual-semantic mapping with better generalisation properties and a dynamic data re-weighting method to prioritise auxiliary data that are relevant to the target classes. Specifically: (1) We introduce a multi-task visual-semantic mapping to improve generalisation by constraining the semantic mapping parameters to lie on a low-dimensional manifold, (2) We explore prioritised data augmentation by expanding the pool of auxiliary data with additional instances weighted by relevance to the target domain. The proposed new model is applied to the challenging zero-shot action recognition problem to demonstrate its advantages over existing ZSL models. | Domain shift is a widely studied problem in transfer learning @cite_26 , although it is usually induced by sampling bias @cite_28 @cite_10 or sensor change @cite_3 rather than by the disjoint categories in ZSL. Importance weighting (IW) @cite_0 @cite_10 has been one of the main adaptation techniques to address this issue. 
The prior work in this area is designed for the standard domain transfer problem in a supervised learning setting @cite_20 , while we are the first to generalise it to the zero-shot learning scenario. The IW technique we generalise is related to another domain adaptation approach based on discovering a feature mapping that minimises the maximum mean discrepancy (MMD) @cite_34 @cite_18 between distributions. However, MMD is less appropriate for us because it focuses on feature mapping rather than instance reweighting, and because we expect that only subsets of the auxiliary instances, rather than the holistic auxiliary set, will be relevant to the target. | {
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_28",
"@cite_3",
"@cite_0",
"@cite_34",
"@cite_10",
"@cite_20"
],
"mid": [
"2064447488",
"2165698076",
"2031342017",
"1722318740",
"2103851188",
"2125865219",
"2811380766",
"2125918842"
],
"abstract": [
"Domain-invariant representations are key to addressing the domain shift problem where the training and test examples follow different distributions. Existing techniques that have attempted to match the distributions of the source and target domains typically compare these distributions in the original feature space. This space, however, may not be directly suitable for such a comparison, since some of the features may have been distorted by the domain shift, or may be domain specific. In this paper, we introduce a Domain Invariant Projection approach: An unsupervised domain adaptation method that overcomes this issue by extracting the information that is invariant across the source and target domains. More specifically, we learn a projection of the data to a low-dimensional latent space where the distance between the empirical distributions of the source and target examples is minimized. We demonstrate the effectiveness of our approach on the task of visual object recognition and show that it outperforms state-of-the-art methods on a standard domain adaptation benchmark dataset.",
"A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.",
"Datasets are an integral part of contemporary object recognition research. They have been the chief reason for the considerable progress in the field, not just as source of large amounts of training data, but also as means of measuring and comparing performance of competing algorithms. At the same time, datasets have often been blamed for narrowing the focus of object recognition research, reducing it to a single benchmark performance number. Indeed, some datasets, that started out as data capture efforts aimed at representing the visual world, have become closed worlds unto themselves (e.g. the Corel world, the Caltech-101 world, the PASCAL VOC world). With the focus on beating the latest benchmark numbers on the latest dataset, have we perhaps lost sight of the original purpose? The goal of this paper is to take stock of the current state of recognition datasets. We present a comparison study using a set of popular datasets, evaluated based on a number of criteria including: relative data bias, cross-dataset generalization, effects of closed-world assumption, and sample value. The experimental results, some rather surprising, suggest directions that can improve dataset collection as well as algorithm evaluation protocols. But more broadly, the hope is to stimulate discussion in the community regarding this very important, but largely neglected issue.",
"Domain adaptation is an important emerging topic in computer vision. In this paper, we present one of the first studies of domain shift in the context of object recognition. We introduce a method that adapts object models acquired in a particular visual domain to new imaging conditions by learning a transformation that minimizes the effect of domain-induced changes in the feature distribution. The transformation is learned in a supervised manner and can be applied to categories for which there are no labeled examples in the new domain. While we focus our evaluation on object recognition tasks, the transform-based adaptation technique we develop is general and could be applied to nonimage data. Another contribution is a new multi-domain object database, freely available for download. We experimentally demonstrate the ability of our method to improve recognition on categories with few or no target domain labels and moderate to large changes in the imaging conditions.",
"A situation where training and test samples follow different input distributions is called covariate shift. Under covariate shift, standard learning methods such as maximum likelihood estimation are no longer consistent—weighted variants according to the ratio of test and training input densities are consistent. Therefore, accurately estimating the density ratio, called the importance, is one of the key issues in covariate shift adaptation. A naive approach to this task is to first estimate training and test input densities separately and then estimate the importance by taking the ratio of the estimated densities. However, this naive approach tends to perform poorly since density estimation is a hard task particularly in high dimensional cases. In this paper, we propose a direct importance estimation method that does not involve density estimation. Our method is equipped with a natural cross validation procedure and hence tuning parameters such as the kernel width can be objectively optimized. Simulations illustrate the usefulness of our approach.",
"We propose two statistical tests to determine if two samples are from different distributions. Our test statistic is in both cases the distance between the means of the two samples mapped into a reproducing kernel Hilbert space (RKHS). The first test is based on a large deviation bound for the test statistic, while the second is based on the asymptotic distribution of this statistic. The test statistic can be computed in O(m2) time. We apply our approach to a variety of problems, including attribute matching for databases using the Hungarian marriage method, where our test performs strongly. We also demonstrate excellent performance when comparing distributions over graphs, for which no alternative tests currently exist.",
"We consider the scenario where training and test data are drawn from different distributions, commonly referred to as sample selection bias. Most algorithms for this setting try to first recover sampling distributions and then make appropriate corrections based on the distribution estimate. We present a nonparametric method which directly produces resampling weights without distribution estimation. Our method works by matching distributions between training and testing sets in feature space. Experimental results demonstrate that our method works well in practice.",
"The goal of transfer learning is to improve the learning of a new target concept given knowledge of related source concept(s). We introduce the first boosting-based algorithms for transfer learning that apply to regression tasks. First, we describe two existing classification transfer algorithms, ExpBoost and TrAdaBoost, and show how they can be modified for regression. We then introduce extensions of these algorithms that improve performance significantly on controlled experiments in a wide range of test domains."
]
} |
1611.08547 | 2558208260 | In this paper we explore the usage of rule engines in a graphical framework for visualising dynamic access control policies. We use the Drools rule engine to dynamically compute permissions, following the Category-Based Access Control metamodel. | Several other access control models have been studied through the use of graph-based languages. For example, Koch et al. @cite_17 @cite_13 use directed graphs to formalize RBAC. A distinctive feature of this work is the use of graph transformation rules to model role management operations. The graphs in @cite_17 @cite_13 and @cite_18 are both typed and labelled. The typing system is similar in both cases, but the label structure in @cite_18 is richer, so it can express policies where access rights depend on data associated with the entities in the policy. Labels in @cite_17 @cite_13 are simply identifiers used to encode RBAC. | {
"cite_N": [
"@cite_18",
"@cite_13",
"@cite_17"
],
"mid": [
"2048107609",
"73246841",
"1986989788"
],
"abstract": [
"We define a framework for the analysis of access control policies that aims at easing the specification and verification tasks for security administrators. We consider policies in the category-based access control model, which has been shown to subsume many of the most well known access control models (e.g., MAC, DAC, RBAC). Using a graphical representation of category-based policies, we show how answers to usual administrator queries can be automatically computed, and properties of access control policies can be checked. We show applications in the context of emergency situations, where our framework can be used to analyse the interaction between access control and emergency management.",
"Access control policies are often partly static, i.e. no dependence on any run-time information, and partly dynamic. However, they are usually enforced dynamically - even the static parts. We propose a new hybrid approach to policy enforcement using the Category-Based Access Control (CBAC) meta-model. We build on previous work, which established a static system for the enforcement of (static) hierarchical Role-Based Access Control (RBAC) policies. We modify the previous policy language, JPol, to specify static and dynamic categories. We establish an equivalence between static categories and static roles (in RBAC), therefore we are able to use the previous design patterns and static verification algorithm, with some changes, to enforce static categories. For dynamic categories, we propose a new design methodology and generate code in the target program to do the necessary run-time checks.",
"Role-Based Access Control (RBAC) is supported directly or in a closely related form, by a number of products. This article presents a formalization of RBAC using graph transformations that is a graphical specification technique based on a generalization of classical string grammars to nonlinear structures. The proposed formalization provides an intuitive description for the manipulation of graph structures as they occur in information systems access control and a precise specification of static and dynamic consistency conditions on graphs and graph transformations. The formalism captures the RBAC models published in the literature, and also allows a uniform treatment of user roles and administrative roles, and a detailed analysis of the decentralization of administrative roles."
]
} |
1611.08547 | 2558208260 | In this paper we explore the usage of rule engines in a graphical framework for visualising dynamic access control policies. We use the Drools rule engine to dynamically compute permissions, following the Category-Based Access Control metamodel. | The RBAC policies in @cite_17 @cite_13 can be represented by graphs in @cite_18, since a role is a particular case of a category. However, the graphs used in @cite_17 @cite_13 also represent session information, which is not dealt with by the policy graphs in @cite_18. Nevertheless, since the notion of session in RBAC is similar to the corresponding notion in CBAC, the representation of sessions provided in @cite_17 @cite_13 could easily be adapted to policy graphs representing CBAC policies. | {
"cite_N": [
"@cite_18",
"@cite_13",
"@cite_17"
],
"mid": [
"2048107609",
"73246841",
"1986989788"
],
"abstract": [
"We define a framework for the analysis of access control policies that aims at easing the specification and verification tasks for security administrators. We consider policies in the category-based access control model, which has been shown to subsume many of the most well known access control models (e.g., MAC, DAC, RBAC). Using a graphical representation of category-based policies, we show how answers to usual administrator queries can be automatically computed, and properties of access control policies can be checked. We show applications in the context of emergency situations, where our framework can be used to analyse the interaction between access control and emergency management.",
"Access control policies are often partly static, i.e. no dependence on any run-time information, and partly dynamic. However, they are usually enforced dynamically - even the static parts. We propose a new hybrid approach to policy enforcement using the Category-Based Access Control (CBAC) meta-model. We build on previous work, which established a static system for the enforcement of (static) hierarchical Role-Based Access Control (RBAC) policies. We modify the previous policy language, JPol, to specify static and dynamic categories. We establish an equivalence between static categories and static roles (in RBAC), therefore we are able to use the previous design patterns and static verification algorithm, with some changes, to enforce static categories. For dynamic categories, we propose a new design methodology and generate code in the target program to do the necessary run-time checks.",
"Role-Based Access Control (RBAC) is supported directly or in a closely related form, by a number of products. This article presents a formalization of RBAC using graph transformations that is a graphical specification technique based on a generalization of classical string grammars to nonlinear structures. The proposed formalization provides an intuitive description for the manipulation of graph structures as they occur in information systems access control and a precise specification of static and dynamic consistency conditions on graphs and graph transformations. The formalism captures the RBAC models published in the literature, and also allows a uniform treatment of user roles and administrative roles, and a detailed analysis of the decentralization of administrative roles."
]
} |
1611.08547 | 2558208260 | In this paper we explore the usage of rule engines in a graphical framework for visualising dynamic access control policies. We use the Drools rule engine to dynamically compute permissions, following the Category-Based Access Control metamodel. | Several extensions of RBAC, which deal with dynamic permissions, have been proposed. These models allow permissions to change according to internal or external conditions such as time, location, or context-based properties (see, for example, @cite_5 @cite_20 @cite_9 ). All these extensions can be modeled as instances of CBAC. | {
"cite_N": [
"@cite_5",
"@cite_9",
"@cite_20"
],
"mid": [
"1542283487",
"2115403742",
"2141696417"
],
"abstract": [
"Recent growth in location-based mobile services has introduced a significant need for location and time-based access control to resources. High mobility of the users and services in the emerging mobile applications in particular make the issue of controlling who can access what information and resources from which locations a daunting challenge. Several RBAC-based models have been proposed that attempt to capture the location-based and/or time-based access control requirements in various applications. However, they have limited flexibility and granularity. In this paper, we propose a Location and Time-based RBAC (LoT-RBAC) model to address the access control requirements of highly mobile, dynamic environments to provide both location- and time-based control.",
"In this paper we present a context-aware RBAC (CARBAC) model for pervasive computing applications. The design of this model has been guided by the context-based access control requirements of such applications. These requirements are related to users' memberships in roles, permission executions by role members, and context-based dynamic integration of services in the environment with an application. Context information is used in role admission policies, in policies related to permission executions by role members, and in policies related to accessing of dynamically interfaced services by role members. The dynamic nature of context information requires model-level support for revocations of role memberships and permission activations when certain context conditions fail to hold. Based on this model we present a programming framework for building context-aware applications, providing mechanisms for specifying and enforcing context-based access control requirements.",
"Role-based access control (RBAC) models have generated a great interest in the security community as a powerful and generalized approach to security management. In many practical scenarios, users may be restricted to assume roles only at predefined time periods. Furthermore, roles may only be invoked on prespecified intervals of time depending upon when certain actions are permitted. To capture such dynamic aspects of a role, a temporal RBAC (TRBAC) model has been recently proposed. However, the TRBAC model addresses the role enabling constraints only. In this work, we propose a generalized temporal role-based access control (GTRBAC) model capable of expressing a wider range of temporal constraints. In particular, the model allows expressing periodic as well as duration constraints on roles, user-role assignments, and role-permission assignments. In an interval, activation of a role can further be restricted as a result of numerous activation constraints including cardinality constraints and maximum active duration constraints. The GTRBAC model extends the syntactic structure of the TRBAC model and its event and trigger expressions subsume those of TRBAC. Furthermore, GTRBAC allows expressing role hierarchies and separation of duty (SoD) constraints for specifying fine-grained temporal semantics."
]
} |
1611.08687 | 2949978197 | Alice wants to join a new social network, and influence its members to adopt a new product or idea. Each person @math in the network has a certain threshold @math for activation, i.e., adoption of the product or idea. If @math has at least @math activated neighbors, then @math will also become activated. If Alice wants to activate the entire social network, whom should she befriend? More generally, we study the problem of finding the minimum number of links that a set of external influencers should form to people in the network, in order to activate the entire social network. This Minimum Links Problem has applications in viral marketing and the study of epidemics. Its solution can be quite different from the related and widely studied Target Set Selection problem. We prove that the Minimum Links problem cannot be approximated to within a ratio of @math, for any fixed @math, unless @math, where @math is the number of nodes in the network. On the positive side, we give linear-time algorithms to solve the problem for trees, cycles, and cliques, for any given set of external influencers, and give precise bounds on the number of links needed. For general graphs, we design a polynomial-time algorithm to compute size-efficient link sets that can activate the entire graph. | Maximizing the number of nodes activated within a specified number of rounds has also been studied @cite_18 @cite_32. The problem of dynamos or dynamic monopolies in graphs (e.g., @cite_0) is essentially the target set problem restricted to the case when every node's threshold is half its degree. The recent monograph @cite_37 contains an excellent overview of the area. | {
"cite_N": [
"@cite_0",
"@cite_18",
"@cite_32",
"@cite_37"
],
"mid": [
"2332241897",
"2029316629",
"",
"2293971114"
],
"abstract": [
"This paper provides an overview of recent developments concerning the process of local majority voting in graphs, and its basic properties, from graph theoretic and algorithmic standpoints.",
"Online social networks (OSNs) have become one of the most effective channels for marketing and advertising. Since users are often influenced by their friends, “word-of-mouth” exchanges, so-called viral marketing, in social networks can be used to increase product adoption or widely spread content over the network. The common perception of viral marketing about being cheap, easy, and massively effective makes it an ideal replacement of traditional advertising. However, recent studies have revealed that the propagation often fades quickly within only a few hops from the sources, counteracting the assumption of self-perpetuating influence considered in the literature. With only limited influence propagation, is massively reaching customers via viral marketing still affordable? How do we economically spend more resources to increase the spreading speed? We investigate the cost-effective massive viral marketing problem, taking into consideration the limited influence propagation. Both analytical analysis based on power-law network theory and numerical analysis demonstrate that viral marketing might involve costly seeding. To minimize the seeding cost, we provide mathematical programming to find optimal seeding for medium-size networks and propose VirAds, an efficient algorithm, to tackle the problem on large-scale networks. VirAds guarantees a relative error bound of @math from the optimal solutions in power-law networks and outperforms greedy heuristics that rely on the degree centrality. Moreover, we also show that, in general, approximating the optimal seeding within a ratio better than @math is unlikely to be possible.",
"",
"Given a network represented by a graph G=(V,E), we consider a dynamical process of influence diffusion in G that evolves as follows: initially only the nodes of a given set S⊆V are influenced; subsequently, at each round, the set of influenced nodes is augmented by all the nodes in the network that have a sufficiently large number of already influenced neighbors. The question is to determine a small subset of nodes S (a target set) that can influence the whole network. This is a widely studied problem that abstracts many phenomena in the social, economic, biological, and physical sciences. It is known [6] that the above optimization problem is hard to approximate within a factor of 2^(log^(1-ε)|V|), for any ε>0. In this paper, we present a fast and surprisingly simple algorithm that exhibits the following features: (1) when applied to trees, cycles, or complete graphs, it always produces an optimal solution (i.e., a minimum-size target set); (2) when applied to arbitrary networks, it always produces a solution of cardinality matching the upper bound given in [1], and proved therein by means of the probabilistic method; (3) when applied to real-life networks, it always produces solutions that substantially outperform the ones obtained by previously published algorithms for which no proof of optimality or performance guarantee is known in any class of graphs."
]
} |
1611.08830 | 2548423403 | Requirements are often divided into functional requirements (FRs) and quality requirements (QRs). However, we still have little knowledge about to which extent this distinction makes sense from a practical perspective. In this paper, we report on a survey we conducted with 103 practitioners to explore whether and, if so, why they handle requirements labeled as FRs differently from those labeled as QRs. We additionally asked for consequences of this distinction w.r.t. the development process. Our results indicate that the development process for requirements of the two classes strongly differs (e.g., in testing). We identified a number of reasons why practitioners do (or do not) distinguish between QRs and FRs in their documentation and we analyzed both problems and benefits that arise from that. We found, for instance, that many reasons are based on expectations rather than on evidence. Those expectations are, in fact, not reflected in specific negative or positive consequences per se. It therefore seems more important that the decision whether to make an explicit distinction or not should be made consciously such that people are also aware of the risks that this distinction bears so that they may take appropriate countermeasures. | Chung and Nixon @cite_14 investigate how practitioners handle QRs. They argue that QRs are often retrofitted in the development process or pursued in parallel with, but separately from, functional design and that an ad hoc development process often makes it hard to detect defects early. They perform three experimental studies on how well a given framework @cite_11 can be used to systematically deal with QRs. Svensson et al. @cite_7 perform an interview study on how QRs are used in practice. Based on their interviews, they found that there is no QR-specific elicitation, documentation, and analysis, that QRs are often not quantified and, thus, difficult to test, and that there is only an implicit management of QRs with little or no consequence analysis. Furthermore, they found that at the project level, QRs are not taken into consideration during product planning (and are thereby not included as hard requirements in the projects) and they conclude that the realization of QRs is a reactive rather than proactive effort. | {
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_11"
],
"mid": [
"2143259336",
"1864331247",
"2301605390"
],
"abstract": [
"Quality characteristics are vital for the success of software systems. To remedy the problems inherent in ad hoc development, a framework has been developed to deal with non-functional requirements (quality requirements or NFRs). Taking the premise that the quality of a product depends on the quality of the process that leads from high-level NFRs to the product, the framework's objectives are to represent NFR-specific requirements, consider design tradeoffs, relate design decisions to NFRs, justify the decisions, and assist defect detection. The purpose of this paper is to give an initial evaluation of the extent to which the framework's objectives are met. Three small portions of information systems were studied by the authors using the framework. The framework and empirical studies are evaluated herein, both from the viewpoint of domain experts who have reviewed the framework and studies, and ourselves as framework developers and users. The systems studied have a variety of characteristics, reflecting a variety of real application domains, and the studies deal with three important classes of NFRs for systems, namely, accuracy, security, and performance. The studies provide preliminary support for the usefulness of certain aspects of the framework, while raising some open issues.",
"[Context and motivation] In market-driven software development it is crucial, but challenging, to find the right balance among competing quality requirements (QR). [Problem] In order to identify the unique challenges associated with the selection, trade-off, and management of quality requirements an interview study is performed. [Results] This paper describes how QR are handled in practice. Data is collected through interviews with five product managers and five project leaders from five software companies. [Contribution] The contribution of this study is threefold: Firstly, it includes an examination of the interdependencies among quality requirements perceived as most important by the practitioners. Secondly, it compares the perceptions and priorities of quality requirements by product management and project management respectively. Thirdly, it characterizes the selection and management of quality requirements in down-stream development activities.",
"A comprehensive framework for representing and using nonfunctional requirements during the development process is proposed. The framework consists of five basic components which provide the representation of nonfunctional requirements in terms of interrelated goals. Such goals can be refined through refinement methods and can be evaluated in order to determine the degree to which a set of nonfunctional requirements is supported by a particular design. Evidence for the power of the framework is provided through the study of accuracy and performance requirements for information systems."
]
} |
1611.08830 | 2548423403 | Requirements are often divided into functional requirements (FRs) and quality requirements (QRs). However, we still have little knowledge about to which extent this distinction makes sense from a practical perspective. In this paper, we report on a survey we conducted with 103 practitioners to explore whether and, if so, why they handle requirements labeled as FRs differently from those labeled as QRs. We additionally asked for consequences of this distinction w.r.t. the development process. Our results indicate that the development process for requirements of the two classes strongly differs (e.g., in testing). We identified a number of reasons why practitioners do (or do not) distinguish between QRs and FRs in their documentation and we analyzed both problems and benefits that arise from that. We found, for instance, that many reasons are based on expectations rather than on evidence. Those expectations are, in fact, not reflected in specific negative or positive consequences per se. It therefore seems more important that the decision whether to make an explicit distinction or not should be made consciously such that people are also aware of the risks that this distinction bears so that they may take appropriate countermeasures. | @cite_16 analyze via interviews how QRs are handled in two Swedish software development organizations. They found that QRs are difficult to elicit because of a focus on FRs, they are often described vaguely, are often not sufficiently considered and prioritized, and they are sometimes even ignored. Furthermore, they state that most types of QRs are difficult to test properly due to their nature, and when expressed in non-measurable terms, testing is time-consuming or even impossible. Ameller et al. @cite_2 perform an empirical study based on interviews around the question "How do software architects deal with QRs in practice?" They found that QRs were often not documented, and even when documented, the documentation was not always precise and usually became desynchronized. | {
"cite_N": [
"@cite_16",
"@cite_2"
],
"mid": [
"149948165",
"2051952787"
],
"abstract": [
"Even though non-functional requirements (NFRs) are critical in order to provide software of good quality, the literature of NFRs is relatively sparse. We describe how NFRs are treated in two develo ...",
"Dealing with non-functional requirements (NFRs) has posed a challenge onto software engineers for many years. Over the years, many methods and techniques have been proposed to improve their elicitation, documentation, and validation. Knowing more about the state of the practice on these topics may benefit both practitioners' and researchers' daily work. A few empirical studies have been conducted in the past, but none under the perspective of software architects, in spite of the great influence that NFRs have on daily architects' practices. This paper presents some of the findings of an empirical study based on 13 interviews with software architects. It addresses questions such as: who decides the NFRs, what types of NFRs matter to architects, how are NFRs documented, and how are NFRs validated. The results are contextualized with existing previous work."
]
} |
1611.08841 | 2951942526 | Boundary estimation in images and videos has been a very active topic of research, and organizing visual information into boundaries and segments is believed to be a cornerstone of visual perception. While prior work has focused on estimating boundaries for observed frames, our work aims at predicting boundaries of future unobserved frames. This requires our model to learn about the fate of boundaries and corresponding motion patterns -- including a notion of "intuitive physics". We experiment on natural video sequences along with synthetic sequences with deterministic physics-based and agent-based motions. While not being our primary goal, we also show that fusion of RGB and boundary prediction leads to improved RGB predictions. | RGB frame prediction. This problem has recently received a lot of attention; however, the predicted frames suffer from blurriness. @cite_2 sought to remedy this problem by discretizing the input through k-means atoms and predicting on this vocabulary instead. The work of @cite_13 proposes using an adversarial loss, which leads to improved results over @cite_2. @cite_20 @cite_12 @cite_25 show some further improvement through the use of optical flow information. However, these approaches produce sharper short-term predictions but still suffer from blurriness starting as early as 3 frames into the future. @cite_23 focuses on moving MNIST digits, and @cite_30 on action-conditioned video prediction. @cite_18 proposes a hierarchical approach for making long-term frame predictions, by first estimating the high-level structure in the input frames and predicting how that structure evolves in the future. They show promising results on videos where pose is an easily identifiable and appropriate high-level structure to exploit. However, such high-level structures are video-domain dependent. Other works @cite_4 @cite_8 focus on deterministic bouncing-ball sequences, but their dataset is limited in size and resolution, and generalization with respect to the number of balls and their velocities is not considered. | {
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_4",
"@cite_8",
"@cite_23",
"@cite_2",
"@cite_20",
"@cite_13",
"@cite_25",
"@cite_12"
],
"mid": [
"2400532028",
"2963253230",
"2135341757",
"2138960858",
"2963629403",
"1568514080",
"",
"2963125871",
"2175030374",
""
],
"abstract": [
"A core challenge for an agent learning to interact with the world is to predict how its actions affect objects in its environment. Many existing methods for learning the dynamics of physical interactions require labeled object information. However, to scale real-world interaction learning to a variety of scenes and objects, acquiring labeled data becomes increasingly impractical. To learn about physical object motion without labels, we develop an action-conditioned video prediction model that explicitly models pixel motion, by predicting a distribution over pixel motion from previous frames. Because our model explicitly predicts motion, it is partially invariant to object appearance, enabling it to generalize to previously unseen objects. To explore video prediction for real-world interactive agents, we also introduce a dataset of 59,000 robot interactions involving pushing motions, including a test set with novel objects. In this dataset, accurate prediction of videos conditioned on the robot's future actions amounts to learning a \"visual imagination\" of different futures based on different courses of action. Our experiments show that our proposed method produces more accurate video predictions both quantitatively and qualitatively, when compared to prior methods.",
"We propose a hierarchical approach for making long-term predictions of future frames. To avoid inherent compounding errors in recursive pixel-level prediction, we propose to first estimate high-level structure in the input frames, then predict how that structure evolves in the future, and finally by observing a single frame from the past and the predicted high-level structure, we construct the future frames without having to observe any of the pixel-level predictions. Long-term video prediction is difficult to perform by recurrently observing the predicted frames because the small errors in pixel space exponentially amplify as predictions are made deeper into the future. Our approach prevents pixel-level error propagation from happening by removing the need to observe the predicted frames. Our model is built with a combination of LSTM and analogy-based encoder-decoder convolutional neural networks, which independently predict the video structure and generate the future frames, respectively. In experiments, our model is evaluated on the Human 3.6M and Penn Action datasets on the task of long-term pixel-level video prediction of humans performing actions and demonstrate significantly better results than the state-of-the-art.",
"The Temporal Restricted Boltzmann Machine (TRBM) is a probabilistic model for sequences that is able to successfully model (i.e., generate nice-looking samples of) several very high dimensional sequences, such as motion capture data and the pixels of low resolution videos of balls bouncing in a box. The major disadvantage of the TRBM is that exact inference is extremely hard, since even computing a Gibbs update for a single variable of the posterior is exponentially expensive. This difficulty has necessitated the use of a heuristic inference procedure, that nonetheless was accurate enough for successful learning. In this paper we introduce the Recurrent TRBM, which is a very slight modification of the TRBM for which exact inference is very easy and exact gradient learning is almost tractable. We demonstrate that the RTRBM is better than an analogous TRBM at generating motion capture and videos of bouncing balls.",
"We propose modeling time series by representing the transformations that take a frame at time t to a frame at time t+1. To this end we show how a bi-linear model of transformations, such as a gated autoencoder, can be turned into a recurrent network, by training it to predict future frames from the current one and the inferred transformation using backprop-through-time. We also show how stacking multiple layers of gating units in a recurrent pyramid makes it possible to represent the \"syntax\" of complicated time series, and that it can outperform standard recurrent neural networks in terms of prediction accuracy on a variety of tasks.",
"",
"We propose a strong baseline model for unsupervised feature learning using video data. By learning to predict missing frames or extrapolate future frames from an input video sequence, the model discovers both spatial and temporal correlations which are useful to represent complex deformations and motion patterns. The models we propose are largely borrowed from the language modeling literature, and adapted to the vision domain by quantizing the space of image patches into a large dictionary. We demonstrate the approach on both a filling and a generation task. For the first time, we show that, after training on natural videos, such a model can predict non-trivial motions over short video sequences.",
"",
"Abstract: Learning to predict future images from a video sequence involves the construction of an internal representation that models the image evolution accurately, and therefore, to some degree, its content and dynamics. This is why pixel-space video prediction may be viewed as a promising avenue for unsupervised feature learning. In addition, while optical flow has been a very studied problem in computer vision for a long time, future frame prediction is rarely approached. Still, many vision applications could benefit from the knowledge of the next frames of videos, that does not require the complexity of tracking every pixel trajectories. In this work, we train a convolutional network to generate future frames given an input sequence. To deal with the inherently blurry predictions obtained from the standard Mean Squared Error (MSE) loss function, we propose three different and complementary feature learning strategies: a multi-scale architecture, an adversarial training method, and an image gradient difference loss function. We compare our predictions to different published results based on recurrent neural networks on the UCF101 dataset",
"We describe a new spatio-temporal video autoencoder, based on a classic spatial image autoencoder and a novel nested temporal autoencoder. The temporal encoder is represented by a differentiable visual memory composed of convolutional long short-term memory (LSTM) cells that integrate changes over time. Here we target motion changes and use as temporal decoder a robust optical flow prediction module together with an image sampler serving as built-in feedback loop. The architecture is end-to-end differentiable. At each time step, the system receives as input a video frame, predicts the optical flow based on the current observation and the LSTM memory state as a dense transformation map, and applies it to the current frame to generate the next frame. By minimising the reconstruction error between the predicted next frame and the corresponding ground truth next frame, we train the whole system to extract features useful for motion estimation without any supervision effort. We present one direct application of the proposed framework in weakly-supervised semantic segmentation of videos through label propagation using optical flow.",
""
]
} |
1611.08841 | 2951942526 | Boundary estimation in images and videos has been a very active topic of research, and organizing visual information into boundaries and segments is believed to be a corner stone of visual perception. While prior work has focused on estimating boundaries for observed frames, our work aims at predicting boundaries of future unobserved frames. This requires our model to learn about the fate of boundaries and corresponding motion patterns -- including a notion of "intuitive physics". We experiment on natural video sequences along with synthetic sequences with deterministic physics-based and agent-based motions. While not being our primary goal, we also show that fusion of RGB and boundary prediction leads to improved RGB predictions. | Intuitive physics. Developing an intuitive understanding of physics from raw visual input has been explored recently. @cite_15 predict future states of balls moving on a billiard table and @cite_32 @cite_31 predict the stability of towers made out of blocks. However, both @cite_15 and @cite_32 have an "object notion", meaning that the architecture knows a priori the location or type of the objects that it is supposed to infer. Although some recent approaches such as @cite_10 @cite_14 are capable of long-term predictions, they are modeling either state-to-state or images-to-state transitions. Moreover, in the latter case, the input is visually simplified, and the focus is only on deterministic motions. In contrast to this body of work, we focus on more diverse scenarios and are agnostic to the underlying objects and causes of change. | {
"cite_N": [
"@cite_31",
"@cite_14",
"@cite_32",
"@cite_15",
"@cite_10"
],
"mid": [
"2586029550",
"2622672190",
"2293598046",
"2271155703",
"2952915411"
],
"abstract": [
"Understanding physical phenomena is a key competence that enables humans and animals to act and interact under uncertain perception in previously unseen environments containing novel objects and their configurations. Developmental psychology has shown that such skills are acquired by infants from observations at a very early stage. In this paper, we contrast a more traditional approach of taking a model-based route with explicit 3D representations and physical simulation by an end-to-end approach that directly predicts stability from appearance. We ask the question if and to what extent and quality such a skill can directly be acquired in a data-driven way — bypassing the need for an explicit simulation at run-time. We present a learning-based approach based on simulated data that predicts stability of towers comprised of wooden blocks under different conditions and quantities related to the potential fall of the towers. We first evaluate the approach on synthetic data and compared the results to human judgments on the same stimuli. Further, we extend this approach to reason about future states of such towers that in return enables successful stacking.",
"From just a glance, humans can make rich predictions about the future state of a wide range of physical systems. On the other hand, modern approaches from engineering, robotics, and graphics are often restricted to narrow domains and require direct measurements of the underlying states. We introduce the Visual Interaction Network, a general-purpose model for learning the dynamics of a physical system from raw visual observations. Our model consists of a perceptual front-end based on convolutional neural networks and a dynamics predictor based on interaction networks. Through joint training, the perceptual front-end learns to parse a dynamic visual scene into a set of factored latent object representations. The dynamics predictor learns to roll these states forward in time by computing their interactions and dynamics, producing a predicted physical trajectory of arbitrary length. We found that from just six input video frames the Visual Interaction Network can generate accurate future trajectories of hundreds of time steps on a wide range of physical systems. Our model can also be applied to scenes with invisible objects, inferring their future states from their effects on the visible objects, and can implicitly infer the unknown mass of objects. Our results demonstrate that the perceptual module and the object-based dynamics predictor module can induce factored latent representations that support accurate dynamical predictions. This work opens new opportunities for model-based decision-making and planning from raw sensory observations in complex physical environments.",
"Wooden blocks are a common toy for infants, allowing them to develop motor skills and gain intuition about the physical behavior of the world. In this paper, we explore the ability of deep feed-forward models to learn such intuitive physics. Using a 3D game engine, we create small towers of wooden blocks whose stability is randomized and render them collapsing (or remaining upright). This data allows us to train large convolutional network models which can accurately predict the outcome, as well as estimating the block trajectories. The models are also able to generalize in two important ways: (i) to new physical scenarios, e.g. towers with an additional block and (ii) to images of real wooden blocks, where it obtains a performance comparable to human subjects.",
"The ability to plan and execute goal specific actions in varied, unexpected settings is a central requirement of intelligent agents. In this paper, we explore how an agent can be equipped with an internal model of the dynamics of the external world, and how it can use this model to plan novel actions by running multiple internal simulations (\"visual imagination\"). Our models directly process raw visual input, and use a novel object-centric prediction formulation based on visual glimpses centered on objects (fixations) to enforce translational invariance of the learned physical laws. The agent gathers training data through random interaction with a collection of different environments, and the resulting model can then be used to plan goal-directed actions in novel environments that the agent has not seen before. We demonstrate that our agent can accurately plan actions for playing a simulated billiards game, which requires pushing a ball into a target position or into collision with another ball.",
"Reasoning about objects, relations, and physics is central to human intelligence, and a key goal of artificial intelligence. Here we introduce the interaction network, a model which can reason about how objects in complex systems interact, supporting dynamical predictions, as well as inferences about the abstract properties of the system. Our model takes graphs as input, performs object- and relation-centric reasoning in a way that is analogous to a simulation, and is implemented using deep neural networks. We evaluate its ability to reason about several challenging physical domains: n-body problems, rigid-body collision, and non-rigid dynamics. Our results show it can be trained to accurately simulate the physical trajectories of dozens of objects over thousands of time steps, estimate abstract quantities such as energy, and generalize automatically to systems with different numbers and configurations of objects and relations. Our interaction network implementation is the first general-purpose, learnable physics engine, and a powerful general framework for reasoning about object and relations in a wide variety of complex real-world domains."
]
} |
1611.08841 | 2951942526 | Boundary estimation in images and videos has been a very active topic of research, and organizing visual information into boundaries and segments is believed to be a corner stone of visual perception. While prior work has focused on estimating boundaries for observed frames, our work aims at predicting boundaries of future unobserved frames. This requires our model to learn about the fate of boundaries and corresponding motion patterns -- including a notion of "intuitive physics". We experiment on natural video sequences along with synthetic sequences with deterministic physics-based and agent-based motions. While not being our primary goal, we also show that fusion of RGB and boundary prediction leads to improved RGB predictions. | Video segmentation. Video segmentation as the task of finding consistent spatio-temporal boundaries in a video volume has received significant attention over the last years @cite_6 @cite_7 @cite_29 @cite_1 , as it provides an initial analysis and abstraction for further processing. In contrast, our approach aims at predicting these boundaries into the future without any video observed for future frames. | {
"cite_N": [
"@cite_29",
"@cite_1",
"@cite_7",
"@cite_6"
],
"mid": [
"2126164636",
"2096979710",
"2076756823",
""
],
"abstract": [
"Video segmentation research is currently limited by the lack of a benchmark dataset that covers the large variety of sub problems appearing in video segmentation and that is large enough to avoid over fitting. Consequently, there is little analysis of video segmentation which generalizes across subtasks, and it is not yet clear which and how video segmentation should leverage the information from the still-frames, as previously studied in image segmentation, alongside video specific information, such as temporal volume, motion and occlusion. In this work we provide such an analysis based on annotations of a large video dataset, where each video is manually segmented by multiple persons. Moreover, we introduce a new volume-based metric that includes the important aspect of temporal consistency, that can deal with segmentation hierarchies, and that reflects the tradeoff between over-segmentation and segmentation accuracy.",
"We develop a generative probabilistic model for temporally consistent super pixels in video sequences. In contrast to supermodel methods, object parts in different frames are tracked by the same temporal super pixel. We explicitly model flow between frames with a bilateral Gaussian process and use this information to propagate super pixels in an online fashion. We consider four novel metrics to quantify performance of a temporal super pixel representation and demonstrate superior performance when compared to supermodel methods.",
"Motion is a strong cue for unsupervised object-level grouping. In this paper, we demonstrate that motion will be exploited most effectively, if it is regarded over larger time windows. Opposed to classical two-frame optical flow, point trajectories that span hundreds of frames are less susceptible to short-term variations that hinder separating different objects. As a positive side effect, the resulting groupings are temporally consistent over a whole video shot, a property that requires tedious post-processing in the vast majority of existing approaches. We suggest working with a paradigm that starts with semi-dense motion cues first and that fills up textureless areas afterwards based on color. This paper also contributes the Freiburg-Berkeley motion segmentation (FBMS) dataset, a large, heterogeneous benchmark with 59 sequences and pixel-accurate ground truth annotation of moving objects.",
""
]
} |
1611.08769 | 2559356359 | Secure signal processing is becoming a de facto model for preserving privacy. We propose a model based on the Fully Homomorphic Encryption (FHE) technique to mitigate security breaches. Our framework provides a method to perform a Fast Fourier Transform (FFT) on a user-specified signal. Using encryption of individual binary values and FHE operations over addition and multiplication, we enable a user to perform the FFT in a fixed point fractional representation in binary. Our approach bounds the error of the implementation to enable user-selectable parameters based on the specific application. We verified our framework against test cases for one dimensional signals and images (two dimensional signals). | Research in secure signal processing has received increasing attention over the past decade. Troncoso-Pastoriza and Perez-Gonzalez @cite_9 examined secure signal processing in the cloud, a setting very similar to the one we use in our work. They focused on privacy issues that arise with cloud computing, which is precisely what a Fully Homomorphic Encryption scheme can address. Wang et al. @cite_6 also considered privacy issues in secure signal processing. Their focus was on biometrics, protecting both authentication and privacy. That work is similar to ours in preserving the confidentiality of private data in a cloud computing environment. | {
"cite_N": [
"@cite_9",
"@cite_6"
],
"mid": [
"2007470995",
"2032001704"
],
"abstract": [
"In recent years, the paradigm of cloud computing has gained an increasing interest from the academic community as well as from the commercial point of view. The cloud is a very appealing concept both for the providers (who can benefit from hiring out their extra computation and storage resources) and for the users (who can avoid the initial investment on resources by outsourcing their processes and data to a cloud).",
"We present a theoretical framework for the analysis of privacy and security trade-offs in secure biometric authentication systems. We use this framework to conduct a comparative information-theoretic analysis of two biometric systems that are based on linear error correction codes, namely fuzzy commitment and secure sketches. We derive upper bounds for the probability of false rejection (PFR) and false acceptance (PFA) for these systems. We use mutual information to quantify the information leaked about a user's biometric identity, in the scenario where one or multiple biometric enrollments of the user are fully or partially compromised. We also quantify the probability of successful attack (PSA) based on the compromised information. Our analysis reveals that fuzzy commitment and secure sketch systems have identical PFR, PFA, PSA, and information leakage, but secure sketch systems have lower storage requirements. We analyze both single-factor (keyless) and two-factor (key-based) variants of secure biometrics, and consider the most general scenarios in which a single user may provide noisy biometric enrollments at several access control devices, some of which may be subsequently compromised by an attacker. Our analysis highlights the revocability and reusability properties of key-based systems and exposes a subtle design trade-off between reducing information leakage from compromised systems and preventing successful attacks on systems whose data have not been compromised."
]
} |
1611.08771 | 2140712360 | The Goodwillie derivatives of the identity functor on pointed spaces form an operad in spectra. Adapting a definition of Behrens, we introduce mod 2 homology operations for algebras over this operad and prove these operations account for all the mod 2 homology of free algebras on suspension spectra of simply-connected spaces. | Since the first appearance of this work as the author's PhD thesis, an analogous computation of the mod @math homology for odd primes @math has been carried out by Kjaer @cite_0 . In that setting the story is less complete: due to the lack of an analogue of Priddy's trace formula [Behrens, Lemma 1.4.3], the relations between the homology operations have yet to be fully determined. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2510637842"
],
"abstract": [
"The derivatives of the identity functor on spaces in Goodwillie calculus forms an operad in spectra. Antolin-Camarena computed the mod 2 homology of free algebras over this operad for 1-connected spectra. In this present paper we carry out similar computations for mod p homology for odd primes p."
]
} |
1611.08771 | 2140712360 | The Goodwillie derivatives of the identity functor on pointed spaces form an operad in spectra. Adapting a definition of Behrens, we introduce mod 2 homology operations for algebras over this operad and prove these operations account for all the mod 2 homology of free algebras on suspension spectra of simply-connected spaces. | In the upcoming paper @cite_5 , Brantner computes the @math -homology of certain free @math -algebras in the category of @math -local spectra and thus computes the @math -homology operations on @math -local @math -algebras. Here @math and @math denote Morava @math -theory and Morava @math -theory respectively. | {
"cite_N": [
"@cite_5"
],
"mid": [
"1541776832"
],
"abstract": [
"CW-complexes generalities on homotopy classes of mappings homotopy groups homotopy theory of CW-complexes homotopy with local coefficients homology of fibre spaces elementary theory the homology suspension Postnikov systems on mappings into group-like spaces homotopy operations stable homotopy and homology homology of fibre spaces compact Lie groups additive relations."
]
} |
1611.08512 | 2951619114 | Most existing person re-identification (ReID) methods rely only on the spatial appearance information from either one or multiple person images, whilst ignore the space-time cues readily available in video or image-sequence data. Moreover, they often assume the availability of exhaustively labelled cross-view pairwise data for every camera pair, making them non-scalable to ReID applications in real-world large scale camera networks. In this work, we introduce a novel video based person ReID method capable of accurately matching people across views from arbitrary unaligned image-sequences without any labelled pairwise data. Specifically, we introduce a new space-time person representation by encoding multiple granularities of spatio-temporal dynamics in form of time series. Moreover, a Time Shift Dynamic Time Warping (TS-DTW) model is derived for performing automatically alignment whilst achieving data selection and matching between inherently inaccurate and incomplete sequences in a unified way. We further extend the TS-DTW model for accommodating multiple feature-sequences of an image-sequence in order to fuse information from different descriptions. Crucially, this model does not require pairwise labelled training data (i.e. unsupervised) therefore readily scalable to large scale camera networks of arbitrary camera pairs without the need for exhaustive data annotation for every camera pair. We show the effectiveness and advantages of the proposed method by extensive comparisons with related state-of-the-art approaches using two benchmarking ReID datasets, PRID2011 and iLIDS-VID. | Gait recognition. Gait recognition @cite_95 @cite_75 @cite_19 @cite_20 @cite_24 has been extensively exploited for people identification using video space-time features, e.g. correlation based motion feature @cite_4 , and Gait Energy Image (GEI) templates @cite_48 . 
To improve gait representations, @cite_82 and @cite_58 suggest feature selection and quality measures. These methods assume that image-sequences are aligned and captured in controlled environments with uncluttered backgrounds, as well as having complete gait cycles, little occlusion, and accurate gait phase estimation. However, these constraints are often invalid in the person ReID context as shown in Figures and . | {
"cite_N": [
"@cite_4",
"@cite_48",
"@cite_24",
"@cite_19",
"@cite_95",
"@cite_58",
"@cite_75",
"@cite_20",
"@cite_82"
],
"mid": [
"",
"2126680226",
"2039896520",
"1984031350",
"2151458682",
"1681415691",
"2118435112",
"1994173314",
"2128169645"
],
"abstract": [
"",
"In this paper, we propose a new spatio-temporal gait representation, called Gait Energy Image (GEI), to characterize human walking properties for individual recognition by gait. To address the problem of the lack of training templates, we also propose a novel approach for human recognition by combining statistical gait features from real and synthetic templates. We directly compute the real templates from training silhouette sequences, while we generate the synthetic templates from training sequences by simulating silhouette distortion. We use a statistical approach for learning effective features from real and synthetic templates. We compare the proposed GEI-based gait recognition approach with other gait recognition approaches on USF HumanID Database. Experimental results show that the proposed GEI is an effective and efficient gait representation for individual recognition, and the proposed approach achieves highly competitive performance with respect to the published gait recognition approaches",
"The paper proposes a two-phase view-invariant multiscale gait recognition method (VI-MGR) which is robust to variation in clothing and presence of a carried item. In phase 1, VI-MGR uses the entropy of the limb region of a gait energy image (GEI) to determine the matching gallery view of the probe using 2-dimensional principal component analysis and Euclidean distance classifier. In phase 2, the probe subject is compared with the matching view of the gallery subjects using multiscale shape analysis. In this phase, VI-MGR applies Gaussian filter to a GEI to generate a multiscale gait image for gradually highlighting the subject?s inner shape characteristics to achieve insensitiveness to boundary shape alterations due to carrying conditions and clothing variation. A weighted random subspace learning based classification is used to exploit the high dimensionality of the feature space for improved identification by avoiding overlearning. Experimental analyses on public datasets demonstrate the efficacy of VI-MGR. HighlightsThe paper proposes a two-phase view-invariant multiscale gait recognition method (VI-MGR).VI-MGR is also robust to clothing variation and presence of a carried item.Phase 1 determines the matching gallery view of the probe using entropy.Phase 2 performs multiscale shape analysis using the Gaussian filter.A subject is classified using weighted random subspace learning to avoid overfitting.",
"HighlightsPresentation of the new freely available TUM Gait from Audio, Image and Depth (GAID) database.Advancing gait based person identification by multimodal feature extraction.Gait based recognition of person traits: gender, age, height, shoe type.Baseline results and fusion for gait recognition using RGB, depth and audio. Recognizing people by the way they walk-also known as gait recognition-has been studied extensively in the recent past. Recent gait recognition methods solely focus on data extracted from an RGB video stream. With this work, we provide a means for multimodal gait recognition, by introducing the freely available TUM Gait from Audio, Image and Depth (GAID) database. This database simultaneously contains RGB video, depth and audio. With 305 people in three variations, it is one of the largest to-date. To further investigate challenges of time variation, a subset of 32 people is recorded a second time. We define standardized experimental setups for both person identification and for the assessment of the soft biometrics age, gender, height, and shoe type. For all defined experiments, we present several baseline results on all available modalities. These effectively demonstrate multimodal fusion being beneficial to gait recognition.",
"Identification of people by analysis of gait patterns extracted from video has recently become a popular research problem. However, the conditions under which the problem is \"solvable\" are not understood or characterized. To provide a means for measuring progress and characterizing the properties of gait recognition, we introduce the humanlD gait challenge problem. The challenge problem consists of a baseline algorithm, a set of 12 experiments, and a large data set. The baseline algorithm estimates silhouettes by background subtraction and performs recognition by temporal correlation of silhouettes. The 12 experiments are of increasing difficulty, as measured by the baseline algorithm, and examine the effects of five covariates on performance. The covariates are: change in viewing angle, change in shoe type, change in walking surface, carrying or not carrying a briefcase, and elapsed time between sequences being compared. Identification rates for the 12 experiments range from 78 percent on the easiest experiment to 3 percent on the hardest. All five covariates had statistically significant effects on performance, with walking surface and time difference having the greatest impact. The data set consists of 1,870 sequences from 122 subjects spanning five covariates (1.2 gigabytes of data). This infrastructure supports further development of gait recognition algorithms and additional experiments to understand the strengths and weaknesses of new algorithms. The experimental results are presented, the more detailed is the possible meta-analysis and greater is the understanding. It is this potential from the adoption of this challenge problem that represents a radical departure from traditional computer vision research methodology.",
"Many gait recognition approaches use silhouette data. Imperfections in silhouette extraction have a negative effect on the performance of a gait recognition system. In this paper we extend quality metrics for gait recognition and evaluate new ways of using quality to improve a recognition system. We demonstrate use of quality to improve silhouette data and select gait cycles of best quality. The potential of the new approaches has been demonstrated experimentally on a challenging dataset, showing how recognition capability can be dramatically improved. Our practical study also shows that acquiring samples of adequate quality in arbitrary environments is difficult and that including quality analysis can improve performance markedly.",
"In this paper, we propose a new patch distribution feature (PDF) (i.e., referred to as Gabor-PDF) for human gait recognition. We represent each gait energy image (GEI) as a set of local augmented Gabor features, which concatenate the Gabor features extracted from different scales and different orientations together with the X-Y coordinates. We learn a global Gaussian mixture model (GMM) (i.e., referred to as the universal background model) with the local augmented Gabor features from all the gallery GEIs; then, each gallery or probe GEI is further expressed as the normalized parameters of an image-specific GMM adapted from the global GMM. Observing that one video is naturally represented as a group of GEIs, we also propose a new classification method called locality-constrained group sparse representation (LGSR) to classify each probe video by minimizing the weighted l1, 2 mixed-norm-regularized reconstruction error with respect to the gallery videos. In contrast to the standard group sparse representation method that is a special case of LGSR, the group sparsity and local smooth sparsity constraints are both enforced in LGSR. Our comprehensive experiments on the benchmark USF HumanID database demonstrate the effectiveness of the newly proposed feature Gabor-PDF and the new classification method LGSR for human gait recognition. Moreover, LGSR using the new feature Gabor-PDF achieves the best average Rank-1 and Rank-5 recognition rates on this database among all gait recognition algorithms proposed to date.",
"Highlights? We combine depth and RGB information from Kinect for frontal gait recognition. ? Key poses are extracted using depth frames registered in RGB frame coordinate system. ? A new feature named Pose Depth Volume is proposed. ? Comparative study with existing gait features has been done. We explore the applicability of Kinect RGB-D streams in recognizing gait patterns of individuals. Gait energy volume (GEV) is a recently proposed feature that performs gait recognition in frontal view using only depth image frames from Kinect. Since depth frames from Kinect are inherently noisy, corresponding silhouette shapes are inaccurate, often merging with the background. We register the depth and RGB frames from Kinect to obtain smooth silhouette shape along with depth information. A partial volume reconstruction of the frontal surface of each silhouette is done and a novel feature termed as Pose Depth Volume (PDV) is derived from this volumetric model. Recognition performance of the proposed approach has been tested on a data set captured using Microsoft Kinect in an indoor environment. Experimental results clearly demonstrate the effectiveness of the approach in comparison with other existing methods.",
"Gait recognition has recently gained significant attention, especially in vision-based automated human identification at a distance in visual surveillance and monitoring applications. Silhouette-based gait recognition is one of the most popular methods for recognising moving shapes. This paper aims to investigate the important features in silhouette-based gait recognition from point of view of statistical analysis. It is shown that the average silhouette includes a static component of gait (head and body) as the most important image part, while dynamic component of gait (swings of legs and arms) is ignored as the least important information. At the same time ignoring dynamic part of gait can result in loss in recognition rate in some cases, and the importance of better motion estimation is underlined."
]
} |
1611.08512 | 2951619114 | Most existing person re-identification (ReID) methods rely only on the spatial appearance information from either one or multiple person images, whilst ignore the space-time cues readily available in video or image-sequence data. Moreover, they often assume the availability of exhaustively labelled cross-view pairwise data for every camera pair, making them non-scalable to ReID applications in real-world large scale camera networks. In this work, we introduce a novel video based person ReID method capable of accurately matching people across views from arbitrary unaligned image-sequences without any labelled pairwise data. Specifically, we introduce a new space-time person representation by encoding multiple granularities of spatio-temporal dynamics in form of time series. Moreover, a Time Shift Dynamic Time Warping (TS-DTW) model is derived for performing automatically alignment whilst achieving data selection and matching between inherently inaccurate and incomplete sequences in a unified way. We further extend the TS-DTW model for accommodating multiple feature-sequences of an image-sequence in order to fuse information from different descriptions. Crucially, this model does not require pairwise labelled training data (i.e. unsupervised) therefore readily scalable to large scale camera networks of arbitrary camera pairs without the need for exhaustive data annotation for every camera pair. We show the effectiveness and advantages of the proposed method by extensive comparisons with related state-of-the-art approaches using two benchmarking ReID datasets, PRID2011 and iLIDS-VID. | Main challenges for gait recognition arise from various covariate conditions, e.g. carrying, clothing, walking surface, footwear, and viewpoint. 
Beyond the attempts of designing and investigating gait features invariant to specific covariates @cite_95 @cite_62 @cite_34 @cite_46 @cite_50 , more powerful learning-based methods have also been presented for explicitly and accurately modelling the complex variances of gait structures. For example, Martín-Félez and Xiang @cite_64 exploit the learning-to-rank strategy for jointly characterising a variety of covariate conditions in a unified model. Whilst a learning process may help improve the gait recognition accuracy, this strategy is heavily affected by the quality of gait features. On person ReID videos however, gait features are likely to be extremely unreliable, as demonstrated in Figure . | {
"cite_N": [
"@cite_62",
"@cite_64",
"@cite_95",
"@cite_50",
"@cite_46",
"@cite_34"
],
"mid": [
"",
"2072510697",
"2151458682",
"2011058239",
"1596155770",
"2021739062"
],
"abstract": [
"",
"Gait is a useful biometric because it can operate from a distance and without subject cooperation. However, it is affected by changes in covariate conditions (carrying, clothing, view angle, etc.). Existing methods suffer from lack of training samples, can only cope with changes in a subset of conditions with limited success, and implicitly assume subject cooperation. We propose a novel approach which casts gait recognition as a bipartite ranking problem and leverages training samples from different people and even from different datasets. By exploiting learning to rank, the problem of model over-fitting caused by under-sampled training data is effectively addressed. This makes our approach suitable under a genuine uncooperative setting and robust against changes in any covariate conditions. Extensive experiments demonstrate that our approach drastically outperforms existing methods, achieving up to 14-fold increase in recognition rate under the most difficult uncooperative settings.",
"Identification of people by analysis of gait patterns extracted from video has recently become a popular research problem. However, the conditions under which the problem is \"solvable\" are not understood or characterized. To provide a means for measuring progress and characterizing the properties of gait recognition, we introduce the humanlD gait challenge problem. The challenge problem consists of a baseline algorithm, a set of 12 experiments, and a large data set. The baseline algorithm estimates silhouettes by background subtraction and performs recognition by temporal correlation of silhouettes. The 12 experiments are of increasing difficulty, as measured by the baseline algorithm, and examine the effects of five covariates on performance. The covariates are: change in viewing angle, change in shoe type, change in walking surface, carrying or not carrying a briefcase, and elapsed time between sequences being compared. Identification rates for the 12 experiments range from 78 percent on the easiest experiment to 3 percent on the hardest. All five covariates had statistically significant effects on performance, with walking surface and time difference having the greatest impact. The data set consists of 1,870 sequences from 122 subjects spanning five covariates (1.2 gigabytes of data). This infrastructure supports further development of gait recognition algorithms and additional experiments to understand the strengths and weaknesses of new algorithms. The experimental results are presented, the more detailed is the possible meta-analysis and greater is the understanding. It is this potential from the adoption of this challenge problem that represents a radical departure from traditional computer vision research methodology.",
"The strength of gait, compared to other biometrics, is that it does not require cooperative subjects. In previous work gait recognition approaches were evaluated using a gallery set consisting of gait sequences of people under similar covariate conditions (e.g. clothing, surface, carrying, and view conditions). This evaluation procedure, however, implies that the gait data are collected in a cooperative manner so that the covariate conditions are known a priori. In this work, gait recognition approaches are evaluated without the assumption on cooperative subjects, i.e. both the gallery and the probe sets consist of a mixture of gait sequences under different and unknown covariate conditions. The results indicate that the performance of the existing approaches would drop drastically under this more realistic experimental setup. We argue that selecting the most relevant gait features that are invariant to changes in gait covariate conditions is the key to develop a gait recognition system that works without subject cooperation. To this end, Gait Entropy Image (GEnI) is proposed to perform automatic feature selection on each pair of gallery and probe gait sequences. Moreover, an Adaptive Component and Discriminant Analysis (ACDA) is formulated which seamlessly integrates our feature selection method with subspace analysis for robust recognition, and importantly is computationally much more efficient compared to the conventional Component and Discriminant Analysis. Experiments are carried out on two comprehensive benchmarking databases: the CASIA database and the Southampton Human ID at a distance gait database (SOTON database). Our results demonstrate that the proposed approach significantly outperforms the existing techniques particularly when gait is captured with variable and unknown covariate conditions.",
"Compact spatio temporal representation of human gait in form of gait enery image (GEI) has attracted lot of attention in recent years for biometric gait recognition. Researchers have reported very high recognition rates for normal walk sequences. However, the rates come down when the subjects are wearing a jacket or coat, or are carrying a bag. This paper shows that the performance for the variant situations can be improved upon considerably by constructing the GEI with sway alignment instead of upper body alignment, and selecting just the required number of rows from the bottom of the silhouette as inputs for an unsupervised feature selection approach. The improvement in recognition rates are established by comparing performances with existing results on a large gait database.",
"Gait Energy Image (GEI) has been proved to be an effective identity signature in gait recognition. But previous approaches only treat this 2D image representation as a holistic feature and neglect the intrinsic dynamic characteristics of gait patterns. In this paper, we use variation analysis to obtain the dynamic region in GEI which reflects the walking manner of an individual. Based on this analysis, a dynamics weight mask is constructed to enhance the dynamic region and suppress the noises on the unimportant regions. The obtained gait representation called enhanced GEI (EGEI) is then represented in low dimensional subspace by Gabor-based discriminative common vectors analysis. We test the proposed approach on the USF HumanID Gait Database. Experimental results prove its effectiveness in terms of recognition rate."
]
} |
1611.08512 | 2951619114 | Most existing person re-identification (ReID) methods rely only on the spatial appearance information from either one or multiple person images, whilst ignore the space-time cues readily available in video or image-sequence data. Moreover, they often assume the availability of exhaustively labelled cross-view pairwise data for every camera pair, making them non-scalable to ReID applications in real-world large scale camera networks. In this work, we introduce a novel video based person ReID method capable of accurately matching people across views from arbitrary unaligned image-sequences without any labelled pairwise data. Specifically, we introduce a new space-time person representation by encoding multiple granularities of spatio-temporal dynamics in form of time series. Moreover, a Time Shift Dynamic Time Warping (TS-DTW) model is derived for performing automatically alignment whilst achieving data selection and matching between inherently inaccurate and incomplete sequences in a unified way. We further extend the TS-DTW model for accommodating multiple feature-sequences of an image-sequence in order to fuse information from different descriptions. Crucially, this model does not require pairwise labelled training data (i.e. unsupervised) therefore readily scalable to large scale camera networks of arbitrary camera pairs without the need for exhaustive data annotation for every camera pair. We show the effectiveness and advantages of the proposed method by extensive comparisons with related state-of-the-art approaches using two benchmarking ReID datasets, PRID2011 and iLIDS-VID. | Temporal sequence matching. Temporal sequence matching is another alternative strategy. The Dynamic Time Warping (DTW) model @cite_7 @cite_47 @cite_85 and its variants including derivative DTW @cite_28 @cite_35 , weighted DTW @cite_92 , are common sequence matching algorithms widely used in data mining and pattern recognition. 
Given two temporal sequences, it searches for the optimal non-linear warp path between the sequences that minimises the matching distance. However, the conventional DTW models assume that the two sequences have the same number of temporal cycles (phases) and are aligned at the starting and ending elements. These conditions are difficult to meet in person videos from typical surveillance scenes. Hence, directly using DTW variants to holistically match these unregulated videos may be suboptimal. To further compound the problem, there are often unknown occlusions and background clutter that can lead to corrupted video frames with missing and/or noisy observations, and thus potentially inaccurate distance measurement. | {
"cite_N": [
"@cite_35",
"@cite_7",
"@cite_28",
"@cite_92",
"@cite_85",
"@cite_47"
],
"mid": [
"2088319836",
"1560013842",
"",
"2008348094",
"2099302229",
""
],
"abstract": [
"Similarity search and detection is a central problem in time series data processing and management. Most approaches to this problem have been developed around the notion of dynamic time warping, whereas several dimensionality reduction techniques have been proposed to improve the efficiency of similarity searches. Due to the continuous increasing of sources of time series data and the cruciality of real-world applications that use such data, we believe there is a challenging demand for supporting similarity detection in time series in a both accurate and fast way. Our proposal is to define a concise yet feature-rich representation of time series, on which the dynamic time warping can be applied for effective and efficient similarity detection of time series. We present the Derivative time series Segment Approximation (DSA) representation model, which originally features derivative estimation, segmentation and segment approximation to provide both high sensitivity in capturing the main trends of time series and data compression. We extensively compare DSA with state-of-the-art similarity methods and dimensionality reduction techniques in clustering and classification frameworks. Experimental evidence from effectiveness and efficiency tests on various datasets shows that DSA is well-suited to support both accurate and fast similarity detection.",
"1. Fundamentals of Speech Recognition. 2. The Speech Signal: Production, Perception, and Acoustic-Phonetic Characterization. 3. Signal Processing and Analysis Methods for Speech Recognition. 4. Pattern Comparison Techniques. 5. Speech Recognition System Design and Implementation Issues. 6. Theory and Implementation of Hidden Markov Models. 7. Speech Recognition Based on Connected Word Models. 8. Large Vocabulary Continuous Speech Recognition. 9. Task-Oriented Applications of Automatic Speech Recognition.",
"",
"Dynamic time warping (DTW), which finds the minimum path by providing non-linear alignments between two time series, has been widely used as a distance measure for time series classification and clustering. However, DTW does not account for the relative importance regarding the phase difference between a reference point and a testing point. This may lead to misclassification especially in applications where the shape similarity between two sequences is a major consideration for an accurate recognition. Therefore, we propose a novel distance measure, called a weighted DTW (WDTW), which is a penalty-based DTW. Our approach penalizes points with higher phase difference between a reference point and a testing point in order to prevent minimum distance distortion caused by outliers. The rationale underlying the proposed distance measure is demonstrated with some illustrative examples. A new weight function, called the modified logistic weight function (MLWF), is also proposed to systematically assign weights as a function of the phase difference between a reference point and a testing point. By applying different weights to adjacent points, the proposed algorithm can enhance the detection of similarity between two time series. We show that some popular distance measures such as DTW and Euclidean distance are special cases of our proposed WDTW measure. We extend the proposed idea to other variants of DTW such as derivative dynamic time warping (DDTW) and propose the weighted version of DDTW. We have compared the performances of our proposed procedures with other popular approaches using public data sets available through the UCR Time Series Data Mining Archive for both time series classification and clustering problems. The experimental results indicate that the proposed approaches can achieve improved accuracy for time series classification and clustering problems.",
"Most time series data mining algorithms use similarity search as a core subroutine, and thus the time taken for similarity search is the bottleneck for virtually all time series data mining algorithms. The difficulty of scaling search to large datasets largely explains why most academic work on time series data mining has plateaued at considering a few millions of time series objects, while much of industry and science sits on billions of time series objects waiting to be explored. In this work we show that by using a combination of four novel ideas we can search and mine truly massive time series for the first time. We demonstrate the following extremely unintuitive fact; in large datasets we can exactly search under DTW much more quickly than the current state-of-the-art Euclidean distance search algorithms. We demonstrate our work on the largest set of time series experiments ever attempted. In particular, the largest dataset we consider is larger than the combined size of all of the time series datasets considered in all data mining papers ever published. We show that our ideas allow us to solve higher-level time series data mining problem such as motif discovery and clustering at scales that would otherwise be untenable. In addition to mining massive datasets, we will show that our ideas also have implications for real-time monitoring of data streams, allowing us to handle much faster arrival rates and or use cheaper and lower powered devices than are currently possible.",
""
]
} |
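The related-work passage in the row above describes classic Dynamic Time Warping: given two sequences, find the non-linear alignment (warp path) that minimises the cumulative matching distance, with the implicit assumption that the sequences are aligned at their start and end points. A minimal sketch of that idea follows; the function name, the absolute-difference local cost, and the toy sequences are illustrative assumptions, not taken from any of the cited papers.

```python
# Minimal Dynamic Time Warping (DTW) sketch. D[i][j] holds the cost of the
# cheapest warp path aligning the first i elements of `a` with the first j
# elements of `b`; each step may advance either sequence or both, which is
# what lets DTW absorb local time shifts between the two sequences.

def dtw_distance(a, b):
    """Return the minimal cumulative alignment cost between sequences a and b."""
    n, m = len(a), len(b)
    inf = float("inf")
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0  # both paths start aligned at the first elements
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])  # illustrative local cost
            # advance b only, advance a only, or advance both together
            D[i][j] = cost + min(D[i][j - 1], D[i - 1][j], D[i - 1][j - 1])
    return D[n][m]

# A locally time-shifted copy still aligns at zero cost, while sequences with
# genuinely different values accumulate matching distance.
print(dtw_distance([1, 2, 3, 3], [1, 1, 2, 3]))  # 0.0
print(dtw_distance([0, 0], [1, 1]))              # 2.0
```

Note the boundary condition `D[0][0] = 0` with all other border cells infinite: this hard-codes the start/end alignment assumption that, as the passage argues, unregulated surveillance videos typically violate.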
1611.08512 | 2951619114 | Most existing person re-identification (ReID) methods rely only on the spatial appearance information from either one or multiple person images, whilst ignore the space-time cues readily available in video or image-sequence data. Moreover, they often assume the availability of exhaustively labelled cross-view pairwise data for every camera pair, making them non-scalable to ReID applications in real-world large scale camera networks. In this work, we introduce a novel video based person ReID method capable of accurately matching people across views from arbitrary unaligned image-sequences without any labelled pairwise data. Specifically, we introduce a new space-time person representation by encoding multiple granularities of spatio-temporal dynamics in form of time series. Moreover, a Time Shift Dynamic Time Warping (TS-DTW) model is derived for performing automatically alignment whilst achieving data selection and matching between inherently inaccurate and incomplete sequences in a unified way. We further extend the TS-DTW model for accommodating multiple feature-sequences of an image-sequence in order to fuse information from different descriptions. Crucially, this model does not require pairwise labelled training data (i.e. unsupervised) therefore readily scalable to large scale camera networks of arbitrary camera pairs without the need for exhaustive data annotation for every camera pair. We show the effectiveness and advantages of the proposed method by extensive comparisons with related state-of-the-art approaches using two benchmarking ReID datasets, PRID2011 and iLIDS-VID. | Space-time visual features. Our person video representation is inspired by existing successful action features and the DVR model @cite_53 , e.g. histograms of oriented 3D spatio-temporal gradient (HOG3D) @cite_15 . 
In contrast to most feature-vector-based action representations @cite_91 @cite_5 @cite_72 @cite_67 @cite_2 @cite_17 @cite_8 @cite_33 , we represent person videos with temporal-sequence-based representations. This design is capable of (1) encoding the dynamic temporal structures of motion, and (2) selectively matching unregulated person videos (see Section ). While some action recognition models also regard videos as sequences of observations @cite_3 @cite_83 @cite_98 @cite_22 @cite_14 , their focus is on coarse temporal structure modelling alone. | {
"cite_N": [
"@cite_67",
"@cite_14",
"@cite_91",
"@cite_33",
"@cite_8",
"@cite_98",
"@cite_22",
"@cite_53",
"@cite_3",
"@cite_72",
"@cite_83",
"@cite_2",
"@cite_5",
"@cite_15",
"@cite_17"
],
"mid": [
"2122319321",
"2119860852",
"2034328688",
"2075230492",
"",
"1498368596",
"",
"2256680489",
"",
"2142194269",
"2136853139",
"2108333036",
"2533739470",
"2024868105",
""
],
"abstract": [
"This paper addresses a spatiotemporal pattern recognition problem. The main purpose of this study is to find a right representation and matching of action video volumes for categorization. A novel method is proposed to measure video-to-video volume similarity by extending Canonical Correlation Analysis (CCA), a principled tool to inspect linear relations between two sets of vectors, to that of two multiway data arrays (or tensors). The proposed method analyzes video volumes as inputs avoiding the difficult problem of explicit motion estimation required in traditional methods and provides a way of spatiotemporal pattern matching that is robust to intraclass variations of actions. The proposed matching is demonstrated for action classification by a simple Nearest Neighbor classifier. We, moreover, propose an automatic action detection method, which performs 3D window search over an input video with action exemplars. The search is speeded up by dynamic learning of subspaces in the proposed CCA. Experiments on a public action data set (KTH) and a self-recorded hand gesture data showed that the proposed method is significantly better than various state-of-the-art methods with respect to accuracy. Our method has low time complexity and does not require any major tuning parameters.",
"We address the problem of action recognition by describing actions as time series of frames and introduce a new kernel to compare their dynamical aspects. Action recognition in realistic videos has been successfully addressed using kernel methods like SVMs. Most existing approaches average local features over video volumes and compare the resulting vectors using kernels on bags of features. In contrast, we model actions as time series of per-frame representations and propose a kernel specifically tailored for the purpose of action recognition. Our main contributions are the following: (i) we provide a new principled way to compare the dynamics and temporal structure of actions by computing the distance between their auto-correlations, (ii) we derive a practical formulation to compute this distance in any feature space deriving from a base kernel between frames and (iii) we report experimental results on recent action recognition datasets showing that it provides useful complementary information to the average distribution of frames, as used in state-of-the-art models based on bag-of-features.",
"Local space-time features capture local events in video and can be adapted to the size, the frequency and the velocity of moving patterns. In this paper, we demonstrate how such features can be used for recognizing complex motion patterns. We construct video representations in terms of local space-time features and integrate such representations with SVM classification schemes for recognition. For the purpose of evaluation we introduce a new video database containing 2391 sequences of six human actions performed by 25 people in four different scenarios. The presented results of action recognition justify the proposed method and demonstrate its advantage compared to other relative approaches for action recognition.",
"Trajectory basis Non-Rigid Structure from Motion (NRSfM) refers to the process of reconstructing the 3D trajectory of each point of a non-rigid object from just their 2D projected trajectories. Reconstruction relies on two factors: (i) the condition of the composed camera & trajectory basis matrix, and (ii) whether the trajectory basis has enough degrees of freedom to model the 3D point trajectory. These two factors are inherently conflicting. Employing a trajectory basis with small capacity has the positive characteristic of reducing the likelihood of an ill-conditioned system (when composed with the camera) during reconstruction. However, this has the negative characteristic of increasing the likelihood that the basis will not be able to fully model the object’s “true” 3D point trajectories. In this paper we draw upon a well known result centering around the Reduced Isometry Property (RIP) condition for sparse signal reconstruction. RIP allow us to relax the requirement that the full trajectory basis composed with the camera matrix must be well conditioned. Further, we propose a strategy for learning an over-complete basis using convolutional sparse coding from naturally occurring point trajectory corpora to increase the likelihood that the RIP condition holds for a broad class of point trajectories and camera motions. Finally, we propose an @math inspired objective for trajectory reconstruction that is able to “adaptively” select the smallest sub-matrix from an over-complete trajectory basis that balances (i) and (ii). We present more practical 3D reconstruction results compared to current state of the art in trajectory basis NRSfM.",
"",
"Much recent research in human activity recognition has focused on the problem of recognizing simple repetitive (walking, running, waving) and punctual actions (sitting up, opening a door, hugging). However, many interesting human activities are characterized by a complex temporal composition of simple actions. Automatic recognition of such complex actions can benefit from a good understanding of the temporal structures. We present in this paper a framework for modeling motion by exploiting the temporal structure of the human activities. In our framework, we represent activities as temporal compositions of motion segments. We train a discriminative model that encodes a temporal decomposition of video sequences, and appearance models for each motion segment. In recognition, a query video is matched to the model according to the learned appearances and motion segment decomposition. Classification is made based on the quality of matching between the motion segment classifiers and the temporal segments in the query sequence. To validate our approach, we introduce a new dataset of complex Olympic Sports activities. We show that our algorithm performs better than other state of the art methods.",
"",
"Current person re-identification (ReID) methods typically rely on single-frame imagery features, whilst ignoring space-time information from image sequences often available in the practical surveillance scenarios. Single-frame (single-shot) based visual appearance matching is inherently limited for person ReID in public spaces due to the challenging visual ambiguity and uncertainty arising from non-overlapping camera views where viewing condition changes can cause significant people appearance variations. In this work, we present a novel model to automatically select the most discriminative video fragments from noisy incomplete image sequences of people from which reliable space-time and appearance features can be computed, whilst simultaneously learning a video ranking function for person ReID. Using the PRID @math , iLIDS-VID, and HDA+ image sequence datasets, we extensively conducted comparative evaluations to demonstrate the advantages of the proposed model over contemporary gait recognition, holistic image sequence matching and state-of-the-art single- multi-shot ReID methods.",
"",
"The aim of this paper is to address recognition of natural human actions in diverse and realistic video settings. This challenging but important subject has mostly been ignored in the past due to several problems one of which is the lack of realistic and annotated video datasets. Our first contribution is to address this limitation and to investigate the use of movie scripts for automatic annotation of human actions in videos. We evaluate alternative methods for action retrieval from scripts and show benefits of a text-based classifier. Using the retrieved action samples for visual learning, we next turn to the problem of action classification in video. We present a new method for video classification that builds upon and extends several recent ideas including local space-time features, space-time pyramids and multi-channel non-linear SVMs. The method is shown to improve state-of-the-art results on the standard KTH action dataset by achieving 91.8 accuracy. Given the inherent problem of noisy labels in automatic annotation, we particularly investigate and show high tolerance of our method to annotation errors in the training set. We finally apply the method to learning and classifying challenging action classes in movies and show promising results.",
"Visual recognition of human actions in video clips has been an active field of research in recent years. However, most published methods either analyse an entire video and assign it a single action label, or use relatively large look-ahead to classify each frame. Contrary to these strategies, human vision proves that simple actions can be recognised almost instantaneously. In this paper, we present a system for action recognition from very short sequences (ldquosnippetsrdquo) of 1-10 frames, and systematically evaluate it on standard data sets. It turns out that even local shape and optic flow for a single frame are enough to achieve ap90 correct recognitions, and snippets of 5-7 frames (0.3-0.5 seconds of video) are enough to achieve a performance similar to the one obtainable with the entire video sequence.",
"In this paper we introduce a 3-dimensional (3D) SIFT descriptor for video or 3D imagery such as MRI data. We also show how this new descriptor is able to better represent the 3D nature of video data in the application of action recognition. This paper will show how 3D SIFT is able to outperform previously used description methods in an elegant and efficient manner. We use a bag of words approach to represent videos, and present a method to discover relationships between spatio-temporal words in order to better describe the video data.",
"A common trend in object recognition is to detect and leverage the use of sparse, informative feature points. The use of such features makes the problem more manageable while providing increased robustness to noise and pose variation. In this work we develop an extension of these ideas to the spatio-temporal case. For this purpose, we show that the direct 3D counterparts to commonly used 2D interest point detectors are inadequate, and we propose an alternative. Anchoring off of these interest points, we devise a recognition algorithm based on spatio-temporally windowed data. We present recognition results on a variety of datasets including both human and rodent behavior.",
"In this work, we present a novel local descriptor for video sequences. The proposed descriptor is based on histograms of oriented 3D spatio-temporal gradients. Our contribution is four-fold. (i) To compute 3D gradients for arbitrary scales, we develop a memory-efficient algorithm based on integral videos. (ii) We propose a generic 3D orientation quantization which is based on regular polyhedrons. (iii) We perform an in-depth evaluation of all descriptor parameters and optimize them for action recognition. (iv) We apply our descriptor to various action datasets (KTH, Weizmann, Hollywood) and show that we outperform the state-of-the-art.",
""
]
} |
1611.08583 | 2558232109 | Today's autonomous vehicles rely extensively on high-definition 3D maps to navigate the environment. While this approach works well when these maps are completely up-to-date, safe autonomous vehicles must be able to corroborate the map's information via a real time sensor-based system. Our goal in this work is to develop a model for road layout inference given imagery from on-board cameras, without any reliance on high-definition maps. However, no sufficient dataset for training such a model exists. Here, we leverage the availability of standard navigation maps and corresponding street view images to construct an automatically labeled, large-scale dataset for this complex scene understanding problem. By matching road vectors and metadata from navigation maps with Google Street View images, we can assign ground truth road layout attributes (e.g., distance to an intersection, one-way vs. two-way street) to the images. We then train deep convolutional networks to predict these road layout attributes given a single monocular RGB image. Experimental evaluation demonstrates that our model learns to correctly infer the road attributes using only panoramas captured by car-mounted cameras as input. Additionally, our results indicate that this method may be suitable to the novel application of recommending safety improvements to infrastructure (e.g., suggesting an alternative speed limit for a street). | A lidar-based approach to intersection detection was explored in @cite_9 . Here, we focus on RGB imagery from cameras, bypassing the use of lidar, which is both much more expensive and of lower resolution. While @cite_9 focuses solely on identifying two types of intersection topology, our method focuses on inference about both intersections and regular road segments. In addition, our work focuses on a set of nine categorical and numerical road layout attributes, several of which have not previously been studied in the context of computer vision.
| {
"cite_N": [
"@cite_9"
],
"mid": [
"2136849599"
],
"abstract": [
"Finding road intersections in advance is crucial for navigation and path planning of moving autonomous vehicles, especially when there is no position or geographic auxiliary information available. In this paper, we investigate the use of a 3D point cloud based solution for intersection and road segment classification in front of an autonomous vehicle. It is based on the analysis of the features from the designed beam model. First, we build a grid map of the point cloud and clear the cells which belong to other vehicles. Then, the proposed beam model is applied with a specified distance in front of autonomous vehicle. A feature set based on the length distribution of the beam is extracted from the current frame and combined with a trained classifier to solve the road-type classification problem, i.e., segment and intersection. In addition, we also make the distinction between +-shaped and T-shaped intersections. The results are reported over a series of real-world data. A performance of above 80 correct classification is reported at a real-time classification rate of 5 Hz."
]
} |
1611.08583 | 2558232109 | Today's autonomous vehicles rely extensively on high-definition 3D maps to navigate the environment. While this approach works well when these maps are completely up-to-date, safe autonomous vehicles must be able to corroborate the map's information via a real time sensor-based system. Our goal in this work is to develop a model for road layout inference given imagery from on-board cameras, without any reliance on high-definition maps. However, no sufficient dataset for training such a model exists. Here, we leverage the availability of standard navigation maps and corresponding street view images to construct an automatically labeled, large-scale dataset for this complex scene understanding problem. By matching road vectors and metadata from navigation maps with Google Street View images, we can assign ground truth road layout attributes (e.g., distance to an intersection, one-way vs. two-way street) to the images. We then train deep convolutional networks to predict these road layout attributes given a single monocular RGB image. Experimental evaluation demonstrates that our model learns to correctly infer the road attributes using only panoramas captured by car-mounted cameras as input. Additionally, our results indicate that this method may be suitable to the novel application of recommending safety improvements to infrastructure (e.g., suggesting an alternative speed limit for a street). | In an early application of deep learning to autonomous driving @cite_8 , the small, off-road, remote-controlled truck DAVE learned to avoid obstacles. A ConvNet was trained to map stereo image frames directly to an ideal steering angle for DAVE. In a similar system also trained in an end-to-end fashion, a group at Nvidia recently demoed a real car with steering controlled by a ConvNet @cite_10 . In @cite_6 , a ConvNet was trained to predict several "affordance indicators" (e.g., distance to the preceding car) for driving in a race car simulator.
This is similar to our work in that we do not directly compute a driving action, but rather several road layout attributes that can later inform a driving controller. In @cite_0 , a ConvNet was trained for car and lane detection on highway driving videos. In contrast to our image dataset, theirs was manually annotated via Amazon Mechanical Turk. While OpenStreetMap is also developed through crowdsourcing, in our system the ground truth label transfer to Google Street View images is fully automated. | {
"cite_N": [
"@cite_0",
"@cite_10",
"@cite_6",
"@cite_8"
],
"mid": [
"1585377561",
"2342840547",
"2953248129",
"2133233905"
],
"abstract": [
"Numerous groups have applied a variety of deep learning techniques to computer vision problems in highway perception scenarios. In this paper, we presented a number of empirical evaluations of recent deep learning advances. Computer vision, combined with deep learning, has the potential to bring about a relatively inexpensive, robust solution to autonomous driving. To prepare deep learning for industry uptake and practical applications, neural networks will require large data sets that represent all possible driving environments and scenarios. We collect a large data set of highway data and apply deep learning and computer vision algorithms to problems such as car and lane detection. We show how existing convolutional neural networks (CNNs) can be used to perform lane and vehicle detection while running at frame rates required for a real-time system. Our results lend credence to the hypothesis that deep learning holds promise for autonomous driving.",
"We trained a convolutional neural network (CNN) to map raw pixels from a single front-facing camera directly to steering commands. This end-to-end approach proved surprisingly powerful. With minimum training data from humans the system learns to drive in traffic on local roads with or without lane markings and on highways. It also operates in areas with unclear visual guidance such as in parking lots and on unpaved roads. The system automatically learns internal representations of the necessary processing steps such as detecting useful road features with only the human steering angle as the training signal. We never explicitly trained it to detect, for example, the outline of roads. Compared to explicit decomposition of the problem, such as lane marking detection, path planning, and control, our end-to-end system optimizes all processing steps simultaneously. We argue that this will eventually lead to better performance and smaller systems. Better performance will result because the internal components self-optimize to maximize overall system performance, instead of optimizing human-selected intermediate criteria, e.g., lane detection. Such criteria understandably are selected for ease of human interpretation which doesn't automatically guarantee maximum system performance. Smaller networks are possible because the system learns to solve the problem with the minimal number of processing steps. We used an NVIDIA DevBox and Torch 7 for training and an NVIDIA DRIVE(TM) PX self-driving car computer also running Torch 7 for determining where to drive. The system operates at 30 frames per second (FPS).",
"Today, there are two major paradigms for vision-based autonomous driving systems: mediated perception approaches that parse an entire scene to make a driving decision, and behavior reflex approaches that directly map an input image to a driving action by a regressor. In this paper, we propose a third paradigm: a direct perception approach to estimate the affordance for driving. We propose to map an input image to a small number of key perception indicators that directly relate to the affordance of a road traffic state for driving. Our representation provides a set of compact yet complete descriptions of the scene to enable a simple controller to drive autonomously. Falling in between the two extremes of mediated perception and behavior reflex, we argue that our direct perception representation provides the right level of abstraction. To demonstrate this, we train a deep Convolutional Neural Network using recording from 12 hours of human driving in a video game and show that our model can work well to drive a car in a very diverse set of virtual environments. We also train a model for car distance estimation on the KITTI dataset. Results show that our direct perception approach can generalize well to real driving images. Source code and data are available on our project website.",
"We describe a vision-based obstacle avoidance system for off-road mobile robots. The system is trained from end to end to map raw input images to steering angles. It is trained in supervised mode to predict the steering angles provided by a human driver during training runs collected in a wide variety of terrains, weather conditions, lighting conditions, and obstacle types. The robot is a 50cm off-road truck, with two forward-pointing wireless color cameras. A remote computer processes the video and controls the robot via radio. The learning system is a large 6-layer convolutional network whose input is a single left right pair of unprocessed low-resolution images. The robot exhibits an excellent ability to detect obstacles and navigate around them in real time at speeds of 2 m s."
]
} |
1611.08583 | 2558232109 | Today's autonomous vehicles rely extensively on high-definition 3D maps to navigate the environment. While this approach works well when these maps are completely up-to-date, safe autonomous vehicles must be able to corroborate the map's information via a real time sensor-based system. Our goal in this work is to develop a model for road layout inference given imagery from on-board cameras, without any reliance on high-definition maps. However, no sufficient dataset for training such a model exists. Here, we leverage the availability of standard navigation maps and corresponding street view images to construct an automatically labeled, large-scale dataset for this complex scene understanding problem. By matching road vectors and metadata from navigation maps with Google Street View images, we can assign ground truth road layout attributes (e.g., distance to an intersection, one-way vs. two-way street) to the images. We then train deep convolutional networks to predict these road layout attributes given a single monocular RGB image. Experimental evaluation demonstrates that our model learns to correctly infer the road attributes using only panoramas captured by car-mounted cameras as input. Additionally, our results indicate that this method may be suitable to the novel application of recommending safety improvements to infrastructure (e.g., suggesting an alternative speed limit for a street). | Recently, researchers have exploited the OSM or GSV datasets for a few localization-related applications. @cite_3 paired visual odometry with OSM to alleviate visual odometry's tendency to drift. In @cite_12 , models estimating road attributes including intersection presence and road type (trained with OSM-extracted labels) were used to speed up self-localization on driving video. 
In @cite_11 , a model was developed to segment OSM roads in aerial images, thereby providing road widths and enhancing the map's accuracy and applicability to localization. @cite_13 trained a ConvNet-based model to match GSV images with their corresponding location in aerial images. While we construct and leverage a dataset based on OSM and GSV, our model does not attempt to localize the street view image and instead estimates road layout attributes given the surrounding visual scene. | {
"cite_N": [
"@cite_13",
"@cite_12",
"@cite_3",
"@cite_11"
],
"mid": [
"1946093182",
"2460102699",
"2151477068",
"2205800717"
],
"abstract": [
"The recent availability of geo-tagged images and rich geospatial data has inspired a number of algorithms for image based geolocalization. Most approaches predict the location of a query image by matching to ground-level images with known locations (e.g., street-view data). However, most of the Earth does not have ground-level reference photos available. Fortunately, more complete coverage is provided by oblique aerial or “bird's eye” imagery. In this work, we localize a ground-level query image by matching it to a reference database of aerial imagery. We use publicly available data to build a dataset of 78K aligned crossview image pairs. The primary challenge for this task is that traditional computer vision approaches cannot handle the wide baseline and appearance variation of these cross-view pairs. We use our dataset to learn a feature representation in which matching views are near one another and mismatched views are far apart. Our proposed approach, Where-CNN, is inspired by deep learning success in face verification and achieves significant improvements over traditional hand-crafted features and existing deep features learned from other large-scale databases. We show the effectiveness of Where-CNN in finding matches between street view and aerial view imagery and demonstrate the ability of our learned features to generalize to novel locations.",
"In this paper we present a robust, efficient and affordable approach to self-localization which does not require neither GPS nor knowledge about the appearance of the world. Towards this goal, we utilize freely available cartographic maps and derive a probabilistic model that exploits semantic cues in the form of sun direction, presence of an intersection, road type, speed limit as well as the ego-car trajectory in order to produce very reliable localization results. Our experimental evaluation shows that our approach can localize much faster (in terms of driving time) with less computation and more robustly than competing approaches, which ignore semantic information.",
"In this paper we propose an approach for global vehicle localization that combines visual odometry with map information from OpenStreetMaps to provide robust and accurate estimates for the vehicle's position. The main contribution of this work comes from the incorporation of the map data as an additional cue into the observation model of a Monte Carlo Localization framework. The resulting approach is able to compensate for the drift that visual odometry accumulates over time, significantly improving localization quality. As our results indicate, the proposed approach outperforms current state-of-the-art visual odometry approaches, indicating in parallel the potential that map data can bring to the global localization task.",
"In recent years, contextual models that exploit maps have been shown to be very effective for many recognition and localization tasks. In this paper we propose to exploit aerial images in order to enhance freely available world maps. Towards this goal, we make use of OpenStreetMap and formulate the problem as the one of inference in a Markov random field parameterized in terms of the location of the road-segment centerlines as well as their width. This parameterization enables very efficient inference and returns only topologically correct roads. In particular, we can segment all OSM roads in the whole world in a single day using a small cluster of 10 computers. Importantly, our approach generalizes very well, it can be trained using only 1.5 km2 aerial imagery and produce very accurate results in any location across the globe. We demonstrate the effectiveness of our approach outperforming the state-of-the-art in two new benchmarks that we collect. We then show how our enhanced maps are beneficial for semantic segmentation of ground images."
]
} |
1611.08258 | 2951001760 | Object detection is a challenging task in visual understanding domain, and even more so if the supervision is to be weak. Recently, few efforts to handle the task without expensive human annotations is established by promising deep neural network. A new architecture of cascaded networks is proposed to learn a convolutional neural network (CNN) under such conditions. We introduce two such architectures, with either two cascade stages or three which are trained in an end-to-end pipeline. The first stage of both architectures extracts best candidate of class specific region proposals by training a fully convolutional network. In the case of the three stage architecture, the middle stage provides object segmentation, using the output of the activation maps of first stage. The final stage of both architectures is a part of a convolutional neural network that performs multiple instance learning on proposals extracted in the previous stage(s). Our experiments on the PASCAL VOC 2007, 2010, 2012 and large scale object datasets, ILSVRC 2013, 2014 datasets show improvements in the areas of weakly-supervised object detection, classification and localization. | The methods of @cite_22 and @cite_26 extract dense regions of candidate proposals from an image using an initial bounding box. To handle the problem of not being able to generate enough candidate proposals because of fixed shape and size, object-saliency-based approaches @cite_34 @cite_38 @cite_24 were proposed to extract region proposals. Following this, a generic objectness measure @cite_25 was employed to extract region proposals. The selective search algorithm @cite_5 , a segmentation-based object proposal generation method, was then proposed and is currently among the most promising techniques for proposal generation. Recently, @cite_32 proposed an inverse cascade method using various CNN feature maps to localize object proposals in a coarse-to-fine manner. | {
"cite_N": [
"@cite_38",
"@cite_26",
"@cite_22",
"@cite_32",
"@cite_24",
"@cite_5",
"@cite_34",
"@cite_25"
],
"mid": [
"1575299770",
"2099528205",
"",
"2147347568",
"2020477327",
"",
"1952794764",
"2128715914"
],
"abstract": [
"We propose a novel approach to annotating weakly labelled data. In contrast to many existing approaches that perform annotation by seeking clusters of self-similar exemplars (minimising intra-class variance), we perform image annotation by selecting exemplars that have never occurred before in the much larger, and strongly annotated, negative training set (maximising inter-class variance). Compared to existing methods, our approach is fast, robust, and obtains state of the art results on two challenging data-sets --- voc2007 (all poses), and the msr2 action data-set, where we obtain a 10 increase. Moreover, this use of negative mining complements existing methods, that seek to minimize the intra-class variance, and can be readily integrated with many of them.",
"Weakly supervised discovery of common visual structure in highly variable, cluttered images is a key problem in recognition. We address this problem using deformable part-based models (DPM's) with latent SVM training [6]. These models have been introduced for fully supervised training of object detectors, but we demonstrate that they are also capable of more open-ended learning of latent structure for such tasks as scene recognition and weakly supervised object localization. For scene recognition, DPM's can capture recurring visual elements and salient objects; in combination with standard global image features, they obtain state-of-the-art results on the MIT 67-category indoor scene dataset. For weakly supervised object localization, optimization over latent DPM parameters can discover the spatial extent of objects in cluttered training images without ground-truth bounding boxes. The resulting method outperforms a recent state-of-the-art weakly supervised object localization approach on the PASCAL-07 dataset.",
"",
"In this paper we evaluate the quality of the activation layers of a convolutional neural network (CNN) for the generation of object proposals. We generate hypotheses in a sliding-window fashion over different activation layers and show that the final convolutional layers can find the object of interest with high recall but poor localization due to the coarseness of the feature maps. Instead, the first layers of the network can better localize the object of interest but with a reduced recall. Based on this observation we design a method for proposing object locations that is based on CNN features and that combines the best of both worlds. We build an inverse cascade that, going from the final to the initial convolutional layers of the CNN, selects the most promising object locations and refines their boxes in a coarse-to-fine manner. The method is efficient, because i) it uses the same features extracted for detection, ii) it aggregates features using integral images, and iii) it avoids a dense evaluation of the proposals due to the inverse coarse-to-fine cascade. The method is also accurate, it outperforms most of the previously proposed object proposals approaches and when plugged into a CNN-based detector produces state-of-the-art detection performance.",
"A conventional approach to learning object detectors uses fully supervised learning techniques which assumes that a training image set with manual annotation of object bounding boxes are provided. The manual annotation of objects in large image sets is tedious and unreliable. Therefore, a weakly supervised learning approach is desirable, where the training set needs only binary labels regarding whether an image contains the target object class. In the weakly supervised approach a detector is used to iteratively annotate the training set and learn the object model. We present a novel weakly supervised learning framework for learning an object detector. Our framework incorporates a new initial annotation model to start the iterative learning of a detector and a model drift detection method that is able to detect and stop the iterative learning when the detector starts to drift away from the objects of interest. We demonstrate the effectiveness of our approach on the challenging PASCAL 2007 dataset.",
"",
"Learning a new object class from cluttered training images is very challenging when the location of object instances is unknown. Previous works generally require objects covering a large portion of the images. We present a novel approach that can cope with extensive clutter as well as large scale and appearance variations between object instances. To make this possible we propose a conditional random field that starts from generic knowledge and then progressively adapts to the new class. Our approach simultaneously localizes object instances while learning an appearance model specific for the class. We demonstrate this on the challenging PASCAL VOC 2007 dataset. Furthermore, our method enables to train any state-of-the-art object detector in a weakly supervised fashion, although it would normally require object location annotations.",
"We present a generic objectness measure, quantifying how likely it is for an image window to contain an object of any class. We explicitly train it to distinguish objects with a well-defined boundary in space, such as cows and telephones, from amorphous background elements, such as grass and road. The measure combines in a Bayesian framework several image cues measuring characteristics of objects, such as appearing different from their surroundings and having a closed boundary. This includes an innovative cue measuring the closed boundary characteristic. In experiments on the challenging PASCAL VOC 07 dataset, we show this new cue to outperform a state-of-the-art saliency measure [17], and the combined measure to perform better than any cue alone. Finally, we show how to sample windows from an image according to their objectness distribution and give an algorithm to employ them as location priors for modern class-specific object detectors. In experiments on PASCAL VOC 07 we show this greatly reduces the number of windows evaluated by class-specific object detectors."
]
} |
1611.08258 | 2951001760 | Object detection is a challenging task in visual understanding domain, and even more so if the supervision is to be weak. Recently, few efforts to handle the task without expensive human annotations is established by promising deep neural network. A new architecture of cascaded networks is proposed to learn a convolutional neural network (CNN) under such conditions. We introduce two such architectures, with either two cascade stages or three which are trained in an end-to-end pipeline. The first stage of both architectures extracts best candidate of class specific region proposals by training a fully convolutional network. In the case of the three stage architecture, the middle stage provides object segmentation, using the output of the activation maps of first stage. The final stage of both architectures is a part of a convolutional neural network that performs multiple instance learning on proposals extracted in the previous stage(s). Our experiments on the PASCAL VOC 2007, 2010, 2012 and large scale object datasets, ILSVRC 2013, 2014 datasets show improvements in the areas of weakly-supervised object detection, classification and localization. | In view of the promising results of CNNs for visual recognition, some recent efforts in weakly supervised classification have been based on CNNs. In @cite_0 , feature discrimination was improved based on a pre-trained CNN. In @cite_3 , the same authors improved the performance further by incorporating both localization and classification in a new CNN architecture. In @cite_9 , a CNN-based convex optimization method was proposed to solve the problem while escaping from poor local minima. Their soft similarity between possible regions and clusters was helpful in improving the optimization. In @cite_16 , a class-specific object proposal generation scheme based on the mask-out strategy of @cite_11 was introduced in order to obtain a reliable initialization.
They also proposed a two-stage algorithm consisting of classification adaptation and detection adaptation. | {
"cite_N": [
"@cite_9",
"@cite_3",
"@cite_0",
"@cite_16",
"@cite_11"
],
"mid": [
"2186827065",
"1994488211",
"2161381512",
"2441255125",
"2951505120"
],
"abstract": [
"This paper focuses on the problem of object detection when the annotation at training time is restricted to presence or absence of object instances at image level. We present a method based on features extracted from a Convolutional Neural Network and latent SVM that can represent and exploit the presence of multiple object instances in an image. Moreover, the detection of the object instances in the image is improved by incorporating in the learning procedure additional constraints that represent domain-specific knowledge such as symmetry and mutual exclusion. We show that the proposed method outperforms the state-of-the-art in weakly-supervised object detection and object classification on the Pascal VOC 2007 dataset.",
"Successful methods for visual object recognition typically rely on training datasets containing lots of richly annotated images. Detailed image annotation, e.g. by object bounding boxes, however, is both expensive and often subjective. We describe a weakly supervised convolutional neural network (CNN) for object classification that relies only on image-level labels, yet can learn from cluttered scenes containing multiple objects. We quantify its object classification and object location prediction performance on the Pascal VOC 2012 (20 object classes) and the much larger Microsoft COCO (80 object classes) datasets. We find that the network (i) outputs accurate image-level labels, (ii) predicts approximate locations (but not extents) of objects, and (iii) performs comparably to its fully-supervised counterparts using object bounding box annotation for training.",
"Convolutional neural networks (CNN) have recently shown outstanding image classification performance in the large- scale visual recognition challenge (ILSVRC2012). The suc- cess of CNNs is attributed to their ability to learn rich mid- level image representations as opposed to hand-designed low-level features used in other image classification meth- ods. Learning CNNs, however, amounts to estimating mil- lions of parameters and requires a very large number of annotated image samples. This property currently prevents application of CNNs to problems with limited training data. In this work we show how image representations learned with CNNs on large-scale annotated datasets can be effi- ciently transferred to other visual recognition tasks with limited amount of training data. We design a method to reuse layers trained on the ImageNet dataset to compute mid-level image representation for images in the PASCAL VOC dataset. We show that despite differences in image statistics and tasks in the two datasets, the transferred rep- resentation leads to significantly improved results for object and action classification, outperforming the current state of the art on Pascal VOC 2007 and 2012 datasets. We also show promising results for object and action localization.",
"We address the problem of weakly supervised object localization where only image-level annotations are available for training. Many existing approaches tackle this problem through object proposal mining. However, a substantial amount of noise in object proposals causes ambiguities for learning discriminative object models. Such approaches are sensitive to model initialization and often converge to an undesirable local minimum. In this paper, we address this problem by progressive domain adaptation with two main steps: classification adaptation and detection adaptation. In classification adaptation, we transfer a pre-trained network to our multi-label classification task for recognizing the presence of a certain object in an image. In detection adaptation, we first use a mask-out strategy to collect class-specific object proposals and apply multiple instance learning to mine confident candidates. We then use these selected object proposals to fine-tune all the layers, resulting in a fully adapted detection network. We extensively evaluate the localization performance on the PASCAL VOC and ILSVRC datasets and demonstrate significant performance improvement over the state-of-the-art methods.",
"This paper introduces self-taught object localization, a novel approach that leverages deep convolutional networks trained for whole-image recognition to localize objects in images without additional human supervision, i.e., without using any ground-truth bounding boxes for training. The key idea is to analyze the change in the recognition scores when artificially masking out different regions of the image. The masking out of a region that includes the object typically causes a significant drop in recognition score. This idea is embedded into an agglomerative clustering technique that generates self-taught localization hypotheses. Our object localization scheme outperforms existing proposal methods in both precision and recall for small number of subwindow proposals (e.g., on ILSVRC-2012 it produces a relative gain of 23.4 over the state-of-the-art for top-1 hypothesis). Furthermore, our experiments show that the annotations automatically-generated by our method can be used to train object detectors yielding recognition results remarkably close to those obtained by training on manually-annotated bounding boxes."
]
} |
1611.08461 | 2781948176 | Short-term tracking is an open and challenging problem for which discriminative correlation filters (DCF) have shown excellent performance. We introduce the channel and spatial reliability concepts to DCF tracking and provide a learning algorithm for its efficient and seamless integration in the filter update and the tracking process. The spatial reliability map adjusts the filter support to the part of the object suitable for tracking. This both allows to enlarge the search region and improves tracking of non-rectangular objects. Reliability scores reflect channel-wise quality of the learned filters and are used as feature weighting coefficients in localization. Experimentally, with only two simple standard feature sets, HoGs and colornames, the novel CSR-DCF method--DCF with channel and spatial reliability--achieves state-of-the-art results on VOT 2016, VOT 2015 and OTB100. The CSR-DCF runs close to real-time on a CPU. | Discriminative correlation filters for object detection date back to the 1980s with the seminal work of Hester and Casasent @cite_37 . They have been popularized only recently in the tracking community, starting with the MOSSE tracker @cite_24 published in 2010. Using a gray-scale template, MOSSE achieved a state-of-the-art performance on a tracking benchmark @cite_9 at a remarkable processing speed. Significant improvements have been made since then, and in 2014 the top-performing trackers on a recent benchmark @cite_10 were all from this class of trackers. Improvements of DCFs fall into two categories: application of improved features and conceptual improvements in filter learning.
"cite_N": [
"@cite_24",
"@cite_9",
"@cite_37",
"@cite_10"
],
"mid": [
"1964846093",
"2089961441",
"2132065020",
"2186330282"
],
"abstract": [
"Although not commonly used, correlation filters can track complex objects through rotations, occlusions and other distractions at over 20 times the rate of current state-of-the-art techniques. The oldest and simplest correlation filters use simple templates and generally fail when applied to tracking. More modern approaches such as ASEF and UMACE perform better, but their training needs are poorly suited to tracking. Visual tracking requires robust filters to be trained from a single frame and dynamically adapted as the appearance of the target object changes. This paper presents a new type of correlation filter, a Minimum Output Sum of Squared Error (MOSSE) filter, which produces stable correlation filters when initialized using a single frame. A tracker based upon MOSSE filters is robust to variations in lighting, scale, pose, and nonrigid deformations while operating at 669 frames per second. Occlusion is detected based upon the peak-to-sidelobe ratio, which enables the tracker to pause and resume where it left off when the object reappears.",
"Object tracking is one of the most important components in numerous applications of computer vision. While much progress has been made in recent years with efforts on sharing code and datasets, it is of great importance to develop a library and benchmark to gauge the state of the art. After briefly reviewing recent advances of online object tracking, we carry out large scale experiments with various evaluation criteria to understand how these algorithms perform. The test image sequences are annotated with different attributes for performance evaluation and analysis. By analyzing quantitative results, we identify effective approaches for robust tracking and provide potential future research directions in this field.",
"A technique for multiclass optical pattern recognition of different perspective views of an object is described. Each multiclass representation of an object is described as an orthonormal basis function expansion, and a single averaged matched spatial filter is then produced from a weighted linear combination of these functions. The technique is demonstrated for a terminal missile guidance application using IR tank imagery.",
"The Visual Object Tracking challenge 2014, VOT2014, aims at comparing short-term single-object visual trackers that do not apply pre-learned models of object appearance. Results of 38 trackers are presented. The number of tested trackers makes VOT 2014 the largest benchmark on short-term tracking to date. For each participating tracker, a short description is provided in the appendix. Features of the VOT2014 challenge that go beyond its VOT2013 predecessor are introduced: (i) a new VOT2014 dataset with full annotation of targets by rotated bounding boxes and per-frame attribute, (ii) extensions of the VOT2013 evaluation methodology, (iii) a new unit for tracking speed assessment less dependent on the hardware and (iv) the VOT2014 evaluation toolkit that significantly speeds up execution of experiments. The dataset, the evaluation kit as well as the results are publicly available at the challenge website (http: votchallenge.net)."
]
} |
1611.08461 | 2781948176 | Short-term tracking is an open and challenging problem for which discriminative correlation filters (DCF) have shown excellent performance. We introduce the channel and spatial reliability concepts to DCF tracking and provide a learning algorithm for its efficient and seamless integration in the filter update and the tracking process. The spatial reliability map adjusts the filter support to the part of the object suitable for tracking. This both allows to enlarge the search region and improves tracking of non-rectangular objects. Reliability scores reflect channel-wise quality of the learned filters and are used as feature weighting coefficients in localization. Experimentally, with only two simple standard feature sets, HoGs and colornames, the novel CSR-DCF method--DCF with channel and spatial reliability--achieves state-of-the-art results on VOT 2016, VOT 2015 and OTB100. The CSR-DCF runs close to real-time on a CPU. | In the first group, @cite_47 replaced the grayscale templates by HoG @cite_17 , @cite_30 proposed multi-dimensional color attributes and Li and Zhu @cite_28 applied feature combination. Recently, the convolutional network features learned for object detection have been applied @cite_20 @cite_42 @cite_15 , leading to a performance boost, but at a cost of significant speed reduction. | {
"cite_N": [
"@cite_30",
"@cite_15",
"@cite_28",
"@cite_42",
"@cite_47",
"@cite_20",
"@cite_17"
],
"mid": [
"2044986361",
"",
"818325216",
"",
"2154889144",
"2214352687",
"2161969291"
],
"abstract": [
"Visual tracking is a challenging problem in computer vision. Most state-of-the-art visual trackers either rely on luminance information or use simple color representations for image description. Contrary to visual tracking, for object recognition and detection, sophisticated color features when combined with luminance have shown to provide excellent performance. Due to the complexity of the tracking problem, the desired color feature should be computationally efficient, and possess a certain amount of photometric invariance while maintaining high discriminative power. This paper investigates the contribution of color in a tracking-by-detection framework. Our results suggest that color attributes provides superior performance for visual tracking. We further propose an adaptive low-dimensional variant of color attributes. Both quantitative and attribute-based evaluations are performed on 41 challenging benchmark color sequences. The proposed approach improves the baseline intensity-based tracker by 24% in median distance precision. Furthermore, we show that our approach outperforms state-of-the-art tracking methods while running at more than 100 frames per second.",
"",
"Although the correlation filter-based trackers achieve the competitive results both on accuracy and robustness, there is still a need to improve the overall tracking capability. In this paper, we presented a very appealing tracker based on the correlation filter framework. To tackle the problem of the fixed template size in kernel correlation filter tracker, we suggest an effective scale adaptive scheme. Moreover, the powerful features including HoG and color-naming are integrated together to further boost the overall tracking performance. The extensive empirical evaluations on the benchmark videos and VOT 2014 dataset demonstrate that the proposed tracker is very promising for the various challenging scenarios. Our method successfully tracked the targets in about 72 videos and outperformed the state-of-the-art trackers on the benchmark dataset with 51 sequences.",
"",
"The core component of most modern trackers is a discriminative classifier, tasked with distinguishing between the target and the surrounding environment. To cope with natural image changes, this classifier is typically trained with translated and scaled sample patches. Such sets of samples are riddled with redundancies—any overlapping pixels are constrained to be the same. Based on this simple observation, we propose an analytic model for datasets of thousands of translated patches. By showing that the resulting data matrix is circulant, we can diagonalize it with the discrete Fourier transform, reducing both storage and computation by several orders of magnitude. Interestingly, for linear regression our formulation is equivalent to a correlation filter, used by some of the fastest competitive trackers. For kernel regression, however, we derive a new kernelized correlation filter (KCF), that unlike other kernel algorithms has the exact same complexity as its linear counterpart. Building on it, we also propose a fast multi-channel extension of linear correlation filters, via a linear kernel, which we call dual correlation filter (DCF). Both KCF and DCF outperform top-ranking trackers such as Struck or TLD on a 50 videos benchmark, despite running at hundreds of frames-per-second, and being implemented in a few lines of code (Algorithm 1). To encourage further developments, our tracking framework was made open-source.",
"Visual object tracking is challenging as target objects often undergo significant appearance changes caused by deformation, abrupt motion, background clutter and occlusion. In this paper, we exploit features extracted from deep convolutional neural networks trained on object recognition datasets to improve tracking accuracy and robustness. The outputs of the last convolutional layers encode the semantic information of targets and such representations are robust to significant appearance variations. However, their spatial resolution is too coarse to precisely localize targets. In contrast, earlier convolutional layers provide more precise localization but are less invariant to appearance changes. We interpret the hierarchies of convolutional layers as a nonlinear counterpart of an image pyramid representation and exploit these multiple levels of abstraction for visual tracking. Specifically, we adaptively learn correlation filters on each convolutional layer to encode the target appearance. We hierarchically infer the maximum response of each layer to locate targets. Extensive experimental results on a largescale benchmark dataset show that the proposed algorithm performs favorably against state-of-the-art methods.",
"We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds."
]
} |
1611.08319 | 2950837108 | The emergence of in-vehicle entertainment systems and self-driving vehicles, and the latter's need for high-resolution, up-to-date maps, will bring a further increase in the amount of data vehicles consume. Considering how difficult WiFi offloading in vehicular environments is, the bulk of this additional load will be served by cellular networks. Cellular networks, in turn, will resort to caching at the network edge in order to reduce the strain on their core network, an approach also known as mobile edge computing, or fog computing. In this work, we exploit a real-world, large-scale trace coming from the users of the We-Fi app in order to (i) understand how significant the contribution of vehicular users is to the global traffic demand; (ii) compare the performance of different caching architectures; and (iii) studying how such a performance is influenced by recommendation systems and content locality. We express the price of fog computing through a metric called price-of-fog, accounting for the extra caches to deploy compared to a traditional, centralized approach. We find that fog computing allows a very significant reduction of the load on the core network, and the price thereof is low in all cases and becomes negligible if content demand is location specific. We can therefore conclude that vehicular networks make an excellent case for the transition to mobile-edge caching: thanks to the peculiar features of vehicular demand, we can obtain all the benefits of fog computing, including a reduction of the load on the core network, reducing the disadvantages to a minimum. | An especially relevant application of caching is video streaming. As an example, @cite_8 @cite_15 account for layered video coding techniques, and address the problem of placing the right layers at the right cache -- with @cite_8 also accounting for cooperation between operators. 
Other works @cite_14 @cite_0 aim at foreseeing the content demand, in order to proactively populate caches @cite_14 or to serve users @cite_0 . | {
"cite_N": [
"@cite_0",
"@cite_15",
"@cite_14",
"@cite_8"
],
"mid": [
"1993518004",
"2080036431",
"2051773775",
"2478542382"
],
"abstract": [
"Online social networks (OSNs) play an increasingly important role today in informing users about content. At the same time, mobile devices provide ubiquitous access to this content through the cellular infrastructure. In this paper, we exploit the fact that the interest in content spreads over OSNs, which makes it, to a certain extent, predictable. We propose Proactive Seeding, a technique for minimizing the peak load of cellular networks, by proactively pushing (“seeding”) content to selected users before they actually request it. We develop a family of algorithms that take as input information primarily about (i) cascades on the OSN and possibly about (ii) the background traffic load in the cellular network and (iii) the local connectivity among mobiles; the algorithms then select which nodes to seed and when. We prove that Proactive Seeding is optimal when the prediction of information cascades is perfect. In realistic simulations, driven by traces from Twitter and cellular networks, we find that Proactive Seeding reduces the peak cellular load by 20%–50%. Finally, we combine Proactive Seeding with techniques that exploit local mobile-to-mobile connections to further reduce the peak load.",
"As mobile networks are witnessing huge growth in the volumes of data traffic from smartphone and tablet. Mobile operators start to look into CDN and caching in order to offload increasingly overloaded mobile networks, reduce network and peering cost, and improve mobile user's quality of experience. For CDN and cache deployment in mobile network, one practical issue is where to deploy CDN or cache function. As mobile network is designed as a tiered and hierarchical structure, the cache or CDN function could be placed at multiple potential points, e.g., GGSN, RNC and etc. In this paper, we propose an economic model that can be used to analyze the cost saving and benefit when cache function is place at different places of mobile network. A real mobile network is studied according to this model. We can see that the best place to store the cached content really depends on the network infrastructure and cost composition of the mobile operator. In addition, other issues, like technical complexity and performance, shall be considered together.",
"This article explores one of the key enablers of beyond 4G wireless networks leveraging small cell network deployments, proactive caching. Endowed with predictive capabilities and harnessing recent developments in storage, context awareness, and social networks, peak traffic demands can be substantially reduced by proactively serving predictable user demands via caching at base stations and users' devices. In order to show the effectiveness of proactive caching, we examine two case studies that exploit the spatial and social structure of the network, where proactive caching plays a crucial role. First, in order to alleviate backhaul congestion, we propose a mechanism whereby files are proactively cached during off-peak periods based on file popularity and correlations among user and file patterns. Second, leveraging social networks and D2D communications, we propose a procedure that exploits the social structure of the network by predicting the set of influential users to (proactively) cache strategic contents and disseminate them to their social ties via D2D communications. Exploiting this proactive caching paradigm, numerical results show that important gains can be obtained for each case study, with backhaul savings and a higher ratio of satisfied users of up to 22 and 26 percent, respectively. Higher gains can be further obtained by increasing the storage capability at the network edge.",
"Distributed caching architectures have been proposed for bringing content close to requesters and the key problem is to design caching algorithms for reducing content delivery delay. The problem obtains an interesting new twist with the advent of advanced layered-video encoding techniques such as Scalable Video Coding (SVC). We show that the problem of finding the caching configuration of video encoding layers that minimizes average delay for a network operator is NP-Hard, and we establish a pseudopolynomial-time optimal solution using a connection with the multiple-choice knapsack problem. We also design caching algorithms for multiple operators that cooperate by pooling together their co-located caches, in an effort to aid each other, so as to avoid large delays due to downloading content from distant servers. We derive an approximate solution to this cooperative caching problem using a technique that partitions the cache capacity into amounts dedicated to own and others' caching needs. Numerical results based on real traces of SVC-encoded videos demonstrate up to 25 reduction in delay over existing (layer-agnostic) caching schemes, with increasing gains as the video popularity distribution gets steeper, and cache capacity increases."
]
} |
1611.08319 | 2950837108 | The emergence of in-vehicle entertainment systems and self-driving vehicles, and the latter's need for high-resolution, up-to-date maps, will bring a further increase in the amount of data vehicles consume. Considering how difficult WiFi offloading in vehicular environments is, the bulk of this additional load will be served by cellular networks. Cellular networks, in turn, will resort to caching at the network edge in order to reduce the strain on their core network, an approach also known as mobile edge computing, or fog computing. In this work, we exploit a real-world, large-scale trace coming from the users of the We-Fi app in order to (i) understand how significant the contribution of vehicular users is to the global traffic demand; (ii) compare the performance of different caching architectures; and (iii) studying how such a performance is influenced by recommendation systems and content locality. We express the price of fog computing through a metric called price-of-fog, accounting for the extra caches to deploy compared to a traditional, centralized approach. We find that fog computing allows a very significant reduction of the load on the core network, and the price thereof is low in all cases and becomes negligible if content demand is location specific. We can therefore conclude that vehicular networks make an excellent case for the transition to mobile-edge caching: thanks to the peculiar features of vehicular demand, we can obtain all the benefits of fog computing, including a reduction of the load on the core network, reducing the disadvantages to a minimum. | Specific to vehicular networks, caching has long been studied in the traditional scenario where no infrastructure support is available @cite_16 @cite_17 , as well as in infrastructure-powered cases @cite_18 , where a sparse coverage by road-side units (RSUs) exists. 
More recent works deal with caching in the context of content discovery @cite_18 @cite_20 , content distribution @cite_23 and content downloading @cite_4 . A common theme of these works is deploying mobile caches at the vehicles, and exploiting past and future mobility in order to ensure content availability throughout the network. Our scenario features fixed caches; however, vehicular mobility and traffic patterns do play an important role in defining the caching needs at different parts of the topology. | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_23",
"@cite_16",
"@cite_20",
"@cite_17"
],
"mid": [
"2009235272",
"2110360261",
"1577824988",
"2017913988",
"2057661320",
"2060067830"
],
"abstract": [
"We address content discovery in wireless networks with infrastructure, where mobile nodes store, advertise, and consume content while Broker entities running on infrastructure devices let demand and offer meet. We refer to this paradigm as match-making, highlighting its features within the confines of the standard publish-and-subscribe paradigm. We study its performance in terms of success probability of a content query, a parameter that we strive to increase by acting as follows: 1) We design a credit-based scheme that makes it convenient for rational users to provide their content (thus discouraging free-riding behavior), and it guarantees them a fair treatment. 2) We increase the availability of either popular or rare content, through an efficient caching scheme. 3) We counter malicious nodes whose objective is to disrupt the system performance by not providing the content they advertise. To counter the latter as well as free riders, we introduce a feedback mechanism that enables a Broker to tell apart well- and misbehaving nodes in a very reliable manner, and to ban the latter. The properties of our match-making scheme are analyzed through game theory. Furthermore, via ns-3 simulations, we show its resilience to different attacks by malicious users and its good performance with respect to other existing solutions.",
"We consider a system where users aboard communication-enabled vehicles are interested in downloading different contents from Internet-based servers. This scenario captures many of the infotainment services that vehicular communication is envisioned to enable, including news reporting, navigation maps, and software updating, or multimedia file downloading. In this paper, we outline the performance limits of such a vehicular content downloading system by modeling the downloading process as an optimization problem, and maximizing the overall system throughput. Our approach allows us to investigate the impact of different factors, such as the roadside infrastructure deployment, the vehicle-to-vehicle relaying, and the penetration rate of the communication technology, even in presence of large instances of the problem. Results highlight the existence of two operational regimes at different penetration rates and the importance of an efficient, yet 2-hop constrained, vehicle-to-vehicle relaying.",
"ICN CCN advocates ubiquitous in-network caching to enhance content distribution. Non-safety application in vehicular communication is emerging beyond the initial safety application. However, it suffers from a typical issue of low delivery ratio in urban environments, where high buildings block and attenuate the radio propagation from RSU infrastructures as well as other technical issues. In this paper, LCE in-network caching strategy with LRU algorithm in vehicular networks is proposed according to traffic characteristics in metropolitan areas. We compare this scheme with the legacy TCP IP based scheme by simulation tools of OMNeT++ & Veins and SUMO. The simulation results validate that the proposed scheme could achieve stronger robustness against obstacles, higher file capture rate and less dependency on RSU infrastructure.",
"Future vehicular networks are expected to deploy short-range communication technology for inter-vehicle communication. In addition to vehicle-to-vehicle communication, users will be interested in accessing the multimedia-rich Internet from within the vehicular network. This motivates a compelling application of Co-operative Networking in the Vehicular Ad-Hoc network where the Ad Hoc network extends and complements the Internet. The broadcast nature of the wireless medium drives us to explore different design paradigms from the ones used in typical wired settings.A new paradigm in content delivery on the Internet using peer-peer swarming protocols is emerging [1,2]. We propose SPAWN, a simple cooperative strategy for content delivery in future vehicular networks. We study the issues involved in using such a strategy from the standpoint of Vehicular Ad-Hoc networks. Several enhancements to a popular swarming protocol (BitTorrent) are discussed including a gossip mechanism that leverages the inherent broadcast nature of the wireless medium, and a piece-selection strategy that uses proximity to exchange pieces quicker. Preliminary results show that SPAWN increases the perceived performance of the network, resulting in faster downloads for popular files.",
"Vehicular communications are becoming an emerging technology for safety control, traffic control, urban monitoring, pollution control, and many other road safety and traffic efficiency applications. All these applications generate a lot of data which should be distributed among communication parties such as vehicles and users in an efficient manner. On the other hand, the generated data cause a significant load on a network infrastructure, which aims at providing uninterrupted services to the communication parties in an urban scenario. To make a balance of load on the network for such situations in the urban scenario, frequently accessed contents should be cached at specified locations either in the vehicles or at some other sites on the infrastructure providing connectivity to the vehicles. However, due to the high mobility and sparse distribution of the vehicles on the road, sometimes, it is not feasible to place the contents on the existing infrastructure, and useful information generated from the vehicles may not be sent to its final destination. To address this issue, in this paper, we propose a new peer-to-peer (P2P) cooperative caching scheme. To minimize the load on the infrastructure, traffic information among vehicles is shared in a P2P manner using a Markov chain model with three states. The replacement of existing data to accommodate newly arrived data is achieved in a probabilistic manner. The probability is calculated using the time to stay in a waiting state and the frequency of access of a particular data item in a given time interval. The performance of the proposed scheme is evaluated in comparison to those of existing schemes with respect to the metrics such as network congestion, query delay, and hit ratio. Analysis results show that the proposed scheme has reduced the congestion and query delay by 30% with an increase in the hit ratio by 20%.",
"There has been increasing interest in the exploitation of advances in information technology in surface transportation systems. One trend is to exploit on-board sensing, computing and communication capabilities in vehicles, e.g., to augment and enhance existing intelligent transportation systems. A natural approach is to use vehicle-to-vehicle communications to disseminate information. In this paper, we propose MDDV, a mobility-centric approach for data dissemination in vehicular networks designed to operate efficiently and reliably despite the highly mobile, partitioned nature of these networks. MDDV is designed to exploit vehicle mobility for data dissemination, and combines the idea of opportunistic forwarding, trajectory based forwarding and geographical forwarding. We develop a generic mobile computing approach for designing localized algorithms in vehicular networks. Vehicles perform local operations based on their own knowledge while they collectively achieve a global behavior. We evaluate the performance of the MDDV algorithm using realistic simulation of the vehicle traffic in Atlanta area."
]
} |
1611.08480 | 2558151686 | Training of one-vs.-rest SVMs can be parallelized over the number of classes in a straight forward way. Given enough computational resources, one-vs.-rest SVMs can thus be trained on data involving a large number of classes. The same cannot be stated, however, for the so-called all-in-one SVMs, which require solving a quadratic program of size quadratically in the number of classes. We develop distributed algorithms for two all-in-one SVM formulations ( and Weston and Watkins) that parallelize the computation evenly over the number of classes. This allows us to compare these models to one-vs.-rest SVMs on unprecedented scale. The results indicate superior accuracy on text classification data. | Most approaches to parallelization of MCSVM training are based on OVO or OVR @cite_25 , including a number of approaches that attempt to learn a hierarchy of labels @cite_22 @cite_14 @cite_23 @cite_17 @cite_19 @cite_10 or train ensembles of SVMs on individual subsets of the data @cite_26 @cite_15 @cite_13 @cite_20 . | {
"cite_N": [
"@cite_13",
"@cite_14",
"@cite_26",
"@cite_22",
"@cite_19",
"@cite_23",
"@cite_15",
"@cite_10",
"@cite_25",
"@cite_20",
"@cite_17"
],
"mid": [
"1537017777",
"2157065343",
"2117988422",
"2155144632",
"2162657744",
"2025862220",
"2007573800",
"756166754",
"2517520041",
"2115046017",
"2106854428"
],
"abstract": [
"Support vector machines (SVM) is originally designed for binary classification. To extend it to multi-class scenario, a typical conventional way is to decompose an M-class problem into a series of two-class problems, for which one-against-all is the earliest and one of the most widely used implementations. However, certain theoretical analysis reveals a drawback, i.e., the competence of each classifier is totally neglected when the results of classification from the multiple classifiers are combined for the final decision. To overcome this limitation, this paper introduces reliability measures into the multi-class framework. Two measures are designed: static reliability measure (SRM) and dynamic reliability measure (DRM). SRM works on a collective basis and yields a constant value regardless of the location of the test sample. DRM, on the other hand, accounts for the spatial variation of the classifier's performance. Based on these two reliability measures, a new decision strategy for the one-against-all method is proposed, which is tested on benchmark data sets and demonstrates its effectiveness.",
"We present a novel approach to efficiently learn a label tree for large scale classification with many classes. The key contribution of the approach is a technique to simultaneously determine the structure of the tree and learn the classifiers for each node in the tree. This approach also allows fine grained control over the efficiency vs accuracy trade-off in designing a label tree, leading to more balanced trees. Experiments are performed on large scale image classification with 10184 classes and 9 million images. We demonstrate significant improvements in test accuracy and efficiency with less training time and more balanced trees compared to the previous state of the art by",
"Data mining algorithms are originally designed by assuming the data is available at one centralized site. These algorithms also assume that the whole data fits into main memory while running the algorithm. But in today's scenario the data to be handled is distributed, even geographically. Bringing the data into a centralized site is a bottleneck in terms of the bandwidth when compared with the size of the data. In this paper for multiclass SVM we propose an algorithm which builds a global SVM model by merging the local SVMs using a distributed approach (DSVM). And the global SVM will be communicated to each site and made available for further classification. The experimental analysis has shown promising results with better accuracy when compared with both the centralized and ensemble method. The time complexity is also reduced drastically because of the parallel construction of local SVMs. The experiments are conducted by considering the data sets of size 100s to hundred of 100s which also addresses the issue of scalability.",
"Multi-class classification becomes challenging at test time when the number of classes is very large and testing against every possible class can become computationally infeasible. This problem can be alleviated by imposing (or learning) a structure over the set of classes. We propose an algorithm for learning a tree-structure of classifiers which, by optimizing the overall tree loss, provides superior accuracy to existing tree labeling methods. We also propose a method that learns to embed labels in a low dimensional space that is faster than non-embedding approaches and has superior accuracy to existing embedding approaches. Finally we combine the two ideas resulting in the label embedding tree that outperforms alternative methods including One-vs-Rest while being orders of magnitude faster.",
"We consider multiclass classification problems where the set of labels are organized hierarchically as a category tree. We associate each node in the tree with a classifier and classify the examples recursively from the root to the leaves. We propose a hierarchical Support Vector Machine (SVM) that encourages the classifier at each node to be different from the classifiers at its ancestors. More specifically, we introduce regularizations that force the normal vector of the classifying hyperplane at each node to be orthogonal to those at its ancestors as much as possible. We establish conditions under which training such a hierarchical SVM is a convex optimization problem, and develop an efficient dual-averaging method for solving it.",
"In the real visual world, the number of categories a classifier needs to discriminate is on the order of hundreds or thousands. For example, the SUN dataset [24] contains 899 scene categories and ImageNet [6] has 15,589 synsets. Designing a multiclass classifier that is both accurate and fast at test time is an extremely important problem in both machine learning and computer vision communities. To achieve a good trade-off between accuracy and speed, we adopt the relaxed hierarchy structure from [15], where a set of binary classifiers are organized in a tree or DAG (directed acyclic graph) structure. At each node, classes are colored into positive and negative groups which are separated by a binary classifier while a subset of confusing classes is ignored. We color the classes and learn the induced binary classifier simultaneously using a unified and principled max-margin optimization. We provide an analysis on generalization error to justify our design. Our method has been tested on both Caltech-256 (object recognition) [9] and the SUN dataset (scene classification) [24], and shows significant improvement over existing methods.",
"With data sizes constantly expanding, and with classical machine learning algorithms that analyze such data requiring larger and larger amounts of computation time and storage space, the need to distribute computation and memory requirements among several computers has become apparent. Although substantial work has been done in developing distributed binary SVM algorithms and multi-class SVM algorithms individually, the field of multi-class distributed SVMs remains largely unexplored. This research proposes a novel algorithm that implements the Support Vector Machine over a multi-class dataset and is efficient in a distributed environment (here, Hadoop). The idea is to divide the dataset into half recursively and thus compute the optimal Support Vector Machine for this half during the training phase, much like a divide and conquer approach. While testing, this structure has been effectively exploited to significantly reduce the prediction time. Our algorithm has shown better computation time during the prediction phase than the traditional sequential SVM methods (One vs. One, One vs. Rest) and out-performs them as the size of the dataset grows. This approach also classifies the data with higher accuracy than the traditional multi-class algorithms.",
"LSHTC is a series of challenges which aims to assess the performance of classification systems in large-scale classification with a large number of classes (up to hundreds of thousands). This paper describes the datasets that have been released along with the LSHTC series. The paper details the construction of the datasets and the design of the tracks, as well as the evaluation measures that we implemented, and gives a quick overview of the results. All of these datasets are available online and runs may still be submitted on the online server of the challenges.",
"",
"We provide a novel interpretation of the dual of support vector machines (SVMs) in terms of scatter with respect to class prototypes and their mean. As a key contribution, we extend this framework to multiple classes, providing a new joint Scatter SVM algorithm, at the level of its binary counterpart in the number of optimization variables. This enables us to implement computationally efficient solvers based on sequential minimal and chunking optimization. As a further contribution, the primal problem formulation is developed in terms of regularized risk minimization and the hinge loss, revealing the score function to be used in the actual classification of test patterns. We investigate Scatter SVM properties related to generalization ability, computational efficiency, sparsity and sensitivity maps, and report promising results.",
"We study the problem of multiclass classification with an extremely large number of classes (k), with the goal of obtaining train and test time complexity logarithmic in the number of classes. We develop top-down tree construction approaches for constructing logarithmic depth trees. On the theoretical front, we formulate a new objective function, which is optimized at each node of the tree and creates dynamic partitions of the data which are both pure (in terms of class labels) and balanced. We demonstrate that under favorable conditions, we can construct logarithmic depth trees that have leaves with low label entropy. However, the objective function at the nodes is challenging to optimize computationally. We address the empirical problem with a new online decision tree construction procedure. Experiments demonstrate that this online algorithm quickly achieves improvement in test error compared to more common logarithmic training time approaches, which makes it a plausible method in computationally constrained large-k applications."
]
} |
1611.08480 | 2558151686 | Training of one-vs.-rest SVMs can be parallelized over the number of classes in a straightforward way. Given enough computational resources, one-vs.-rest SVMs can thus be trained on data involving a large number of classes. The same cannot be stated, however, for the so-called all-in-one SVMs, which require solving a quadratic program of size quadratic in the number of classes. We develop distributed algorithms for two all-in-one SVM formulations (Lee, Lin, and Wahba and Weston and Watkins) that parallelize the computation evenly over the number of classes. This allows us to compare these models to one-vs.-rest SVMs on an unprecedented scale. The results indicate superior accuracy on text classification data. | There is a line of research on parallelizing stochastic gradient descent (SGD) based training of MC-SVMs over multiple computers @cite_0 @cite_30 . SGD builds on iteratively approximating the loss term by one that is based on a subset of the data (mini-batch). In contrast, batch solvers (such as the ones proposed in the present paper) are based on the full sample. In this sense, our approach is completely different from SGD. While there is a long ongoing discussion about whether the batch or the SGD approach is superior, the common opinion is that SGD has its advantages in the early phase of the optimization, while classical batch solvers shine in the later phase. In this sense, the two approaches are complementary and could also be combined. | {
"cite_N": [
"@cite_0",
"@cite_30"
],
"mid": [
"2151166364",
"1975742283"
],
"abstract": [
"Classification problems with thousands or more classes often have a large range of class-confusabilities, and we show that the more-confusable classes add more noise to the empirical loss that is minimized during training. We propose an online solution that reduces the effect of highly confusable classes in training the classifier parameters, and focuses the training on pairs of classes that are easier to differentiate at any given time in the training. We also show that the adagrad method, recently proposed for automatically decreasing step sizes for convex stochastic gradient descent, can also be profitably applied to the nonconvex joint training of supervised dimensionality reduction and linear classifiers as done in Wsabie. Experiments on ImageNet benchmark data sets and proprietary image recognition problems with 15,000 to 97,000 classes show substantial gains in classification accuracy compared to one-vs-all linear SVMs and Wsabie.",
"The new parallel multiclass stochastic gradient descent algorithms aim at classifying a million images with very-high-dimensional signatures into thousands of classes. We extend stochastic gradient descent (SGD) for support vector machines (SVM-SGD) in several ways to develop the new multiclass SVM-SGD for efficiently classifying large image datasets into many classes. We propose (1) a balanced training algorithm for learning binary SVM-SGD classifiers, and (2) a parallel training process of classifiers on a grid of several multi-core computers. The evaluation on 1000 classes of ImageNet, ILSVRC 2010 shows that our algorithm is 270 times faster than the state-of-the-art linear classifier LIBLINEAR."
]
} |
1611.08480 | 2558151686 | Training of one-vs.-rest SVMs can be parallelized over the number of classes in a straightforward way. Given enough computational resources, one-vs.-rest SVMs can thus be trained on data involving a large number of classes. The same cannot be stated, however, for the so-called all-in-one SVMs, which require solving a quadratic program of size quadratic in the number of classes. We develop distributed algorithms for two all-in-one SVM formulations (Lee, Lin, and Wahba and Weston and Watkins) that parallelize the computation evenly over the number of classes. This allows us to compare these models to one-vs.-rest SVMs on an unprecedented scale. The results indicate superior accuracy on text classification data. | The related work closest to the present work is by @cite_3 . They build on the alternating direction method of multipliers (ADMM) @cite_16 to break the Crammer and Singer optimization problem into smaller parts, which can be solved individually on different computers. In contrast to our approach, the optimization problem is parallelized over the samples, not the optimization variables. In our problem setting, high-dimensional sparse data, the model size is very large. Because each node holds the whole model in memory, this algorithm hardly scales with large label spaces. E.g. consider Table ; the model for LSHTC-2011 contains @math parameters. Note also that it is unclear at this point whether the approach of @cite_3 could be adapted to LLW and WW, which are the objects of study in the present paper. | {
"cite_N": [
"@cite_16",
"@cite_3"
],
"mid": [
"2164278908",
"1999428432"
],
"abstract": [
"Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for l1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.",
"We present an algorithm and implementation for distributed parallel training of single-machine multiclass SVMs. While there is ongoing and healthy debate about the best strategy for multiclass classification, there are some features of the single-machine approach that are not available when training alternatives such as one-vs-all, and that are quite complex for tree based methods. One obstacle to exploring single-machine approaches on large datasets is that they are usually limited to running on a single machine! We build on a framework borrowed from parallel convex optimization — the alternating direction method of multipliers (ADMM) — to develop a new consensus based algorithm for distributed training of single-machine approaches. This is demonstrated with an implementation of our novel sequential dual algorithm (DCMSVM) which allows distributed parallel training with small communication requirements. Benchmark results show significant reduction in wall clock time compared to current state of the art multiclass SVM implementation (Liblinear) on a single node. Experiments are performed on large scale image classification including results with modern high-dimensional features."
]
} |
1611.08480 | 2558151686 | Training of one-vs.-rest SVMs can be parallelized over the number of classes in a straightforward way. Given enough computational resources, one-vs.-rest SVMs can thus be trained on data involving a large number of classes. The same cannot be stated, however, for the so-called all-in-one SVMs, which require solving a quadratic program of size quadratic in the number of classes. We develop distributed algorithms for two all-in-one SVM formulations (Lee, Lin, and Wahba and Weston and Watkins) that parallelize the computation evenly over the number of classes. This allows us to compare these models to one-vs.-rest SVMs on an unprecedented scale. The results indicate superior accuracy on text classification data. | Note that beyond SVMs there is a large body of work on distributed multi-class [e.g., vw, gopal2013distributed] and multi-label learning algorithms @cite_21 , which is outside the scope of the present paper. | {
"cite_N": [
"@cite_21"
],
"mid": [
"2068074736"
],
"abstract": [
"The objective in extreme multi-label classification is to learn a classifier that can automatically tag a data point with the most relevant subset of labels from a large label set. Extreme multi-label classification is an important research problem since not only does it enable the tackling of applications with many labels but it also allows the reformulation of ranking problems with certain advantages over existing formulations. Our objective, in this paper, is to develop an extreme multi-label classifier that is faster to train and more accurate at prediction than the state-of-the-art Multi-label Random Forest (MLRF) algorithm [2] and the Label Partitioning for Sub-linear Ranking (LPSR) algorithm [35]. MLRF and LPSR learn a hierarchy to deal with the large number of labels but optimize task independent measures, such as the Gini index or clustering error, in order to learn the hierarchy. Our proposed FastXML algorithm achieves significantly higher accuracies by directly optimizing an nDCG based ranking loss function. We also develop an alternating minimization algorithm for efficiently optimizing the proposed formulation. Experiments reveal that FastXML can be trained on problems with more than a million labels on a standard desktop in eight hours using a single core and in an hour using multiple cores."
]
} |
1611.08107 | 2558999206 | Training data are critical in face recognition systems. However, labeling large scale face data for a particular domain is very tedious. In this paper, we propose a method to automatically and incrementally construct datasets from massive weakly labeled data of the target domain, which are readily available on the Internet, with the help of a pretrained face model. More specifically, given a large scale weakly labeled dataset in which each face image is associated with a label, i.e. the name of an identity, we create a graph for each identity with edges linking matched faces verified by the existing model under a tight threshold. Then we use the maximal subgraph as the cleaned data for that identity. With the cleaned dataset, we update the existing face model and use the new model to filter the original dataset to get a larger cleaned dataset. We collect a large weakly labeled dataset containing 530,560 Asian face images of 7,962 identities from the Internet, which will be published for the study of face recognition. By running the filtering process, we obtain a cleaned dataset (99.7%+ purity) of size 223,767 (recall 70.9%). On our testing dataset of Asian faces, the model trained by the cleaned dataset achieves a recognition rate of 93.1%, which obviously outperforms the model trained by the public dataset CASIA, whose recognition rate is 85.9%. | Face datasets play a critical role in face recognition. In the early years, face datasets were relatively small and obtained in controlled environments, e.g. PIE @cite_6 , FERET @cite_8 , which were designed to study the effect of particular parameters. In order to reflect the real-world challenges of face recognition, Huang et al. built a dataset named LFW, i.e. Labeled Faces in the Wild @cite_18 , which contains 13,233 images of 5,749 subjects, collected from the Internet with large variance in pose, light and view condition. This dataset has greatly advanced the progress of the face community. Using the name list of LFW, Wolf et al. constructed a larger dataset, called YTF @cite_19 , from YouTube videos. As the videos are highly compressed, YTF provides an image set of lower quality for performance evaluation. In order to study the problem of face recognition across ages, researchers also constructed a dataset, called CACD @cite_17 . It includes 163,446 images of 2,000 subjects. However, only a small part of this dataset was manually checked. | {
"cite_N": [
"@cite_18",
"@cite_8",
"@cite_6",
"@cite_19",
"@cite_17"
],
"mid": [
"1782590233",
"2033419168",
"2155759509",
"2019464758",
""
],
"abstract": [
"Most face databases have been created under controlled conditions to facilitate the study of specific parameters on the face recognition problem. These parameters include such variables as position, pose, lighting, background, camera quality, and gender. While there are many applications for face recognition technology in which one can control the parameters of image acquisition, there are also many applications in which the practitioner has little or no control over such parameters. This database, Labeled Faces in the Wild, is provided as an aid in studying the latter, unconstrained, recognition problem. The database contains labeled face photographs spanning the range of conditions typically encountered in everyday life. The database exhibits “natural” variability in factors such as pose, lighting, race, accessories, occlusions, and background. In addition to describing the details of the database, we provide specific experimental paradigms for which the database is suitable. This is done in an effort to make research performed with the database as consistent and comparable as possible. We provide baseline results, including results of a state of the art face recognition system combined with a face alignment system. To facilitate experimentation on the database, we provide several parallel databases, including an aligned version.",
"Two of the most critical requirements in support of producing reliable face-recognition systems are a large database of facial images and a testing procedure to evaluate systems. The Face Recognition Technology (FERET) program has addressed both issues through the FERET database of facial images and the establishment of the FERET tests. To date, 14,126 images from 1,199 individuals are included in the FERET database, which is divided into development and sequestered portions of the database. In September 1996, the FERET program administered the third in a series of FERET face-recognition tests. The primary objectives of the third test were to 1) assess the state of the art, 2) identify future areas of research, and 3) measure algorithm performance.",
"Between October 2000 and December 2000, we collected a database of over 40,000 facial images of 68 people. Using the CMU (Carnegie Mellon University) 3D Room, we imaged each person across 13 different poses, under 43 different illumination conditions, and with four different expressions. We call this database the CMU Pose, Illumination and Expression (PIE) database. In this paper, we describe the imaging hardware, the collection procedure, the organization of the database, several potential uses of the database, and how to obtain the database.",
"Recognizing faces in unconstrained videos is a task of mounting importance. While obviously related to face recognition in still images, it has its own unique characteristics and algorithmic requirements. Over the years several methods have been suggested for this problem, and a few benchmark data sets have been assembled to facilitate its study. However, there is a sizable gap between the actual application needs and the current state of the art. In this paper we make the following contributions. (a) We present a comprehensive database of labeled videos of faces in challenging, uncontrolled conditions (i.e., ‘in the wild’), the ‘YouTube Faces’ database, along with benchmark, pair-matching tests. (b) We employ our benchmark to survey and compare the performance of a large variety of existing video face recognition techniques. Finally, (c) we describe a novel set-to-set similarity measure, the Matched Background Similarity (MBGS). This similarity is shown to considerably improve performance on the benchmark tests.",
""
]
} |
1611.08107 | 2558999206 | Training data are critical in face recognition systems. However, labeling large scale face data for a particular domain is very tedious. In this paper, we propose a method to automatically and incrementally construct datasets from massive weakly labeled data of the target domain, which are readily available on the Internet, with the help of a pretrained face model. More specifically, given a large scale weakly labeled dataset in which each face image is associated with a label, i.e. the name of an identity, we create a graph for each identity with edges linking matched faces verified by the existing model under a tight threshold. Then we use the maximal subgraph as the cleaned data for that identity. With the cleaned dataset, we update the existing face model and use the new model to filter the original dataset to get a larger cleaned dataset. We collect a large weakly labeled dataset containing 530,560 Asian face images of 7,962 identities from the Internet, which will be published for the study of face recognition. By running the filtering process, we obtain a cleaned dataset (99.7%+ purity) of size 223,767 (recall 70.9%). On our testing dataset of Asian faces, the model trained by the cleaned dataset achieves a recognition rate of 93.1%, which obviously outperforms the model trained by the public dataset CASIA, whose recognition rate is 85.9%. | Recently, with the success of deep models, the community has begun to use large scale datasets to train their networks. Typical datasets include CelebFace of CUHK @cite_11 , SFC of Facebook @cite_16 and WDRef of Microsoft @cite_13 . However, none of these datasets is public, which makes fair comparison of different models very difficult. | {
"cite_N": [
"@cite_16",
"@cite_13",
"@cite_11"
],
"mid": [
"2145287260",
"170472577",
""
],
"abstract": [
"In modern face recognition, the conventional pipeline consists of four stages: detect => align => represent => classify. We revisit both the alignment step and the representation step by employing explicit 3D face modeling in order to apply a piecewise affine transformation, and derive a face representation from a nine-layer deep neural network. This deep network involves more than 120 million parameters using several locally connected layers without weight sharing, rather than the standard convolutional layers. Thus we trained it on the largest facial dataset to-date, an identity labeled dataset of four million facial images belonging to more than 4,000 identities. The learned representations coupling the accurate model-based alignment with the large facial database generalize remarkably well to faces in unconstrained environments, even with a simple classifier. Our method reaches an accuracy of 97.35% on the Labeled Faces in the Wild (LFW) dataset, reducing the error of the current state of the art by more than 27%, closely approaching human-level performance.",
"In this paper, we revisit the classical Bayesian face recognition method by Baback Moghaddam et al. and propose a new joint formulation. The classical Bayesian method models the appearance difference between two faces. We observe that this \"difference\" formulation may reduce the separability between classes. Instead, we model two faces jointly with an appropriate prior on the face representation. Our joint formulation leads to an EM-like model learning at training time and an efficient, closed-form computation at test time. On extensive experimental evaluations, our method is superior to the classical Bayesian face method and many other supervised approaches. Our method achieved 92.4% test accuracy on the challenging Labeled Faces in the Wild (LFW) dataset. Compared with the current best commercial system, we reduced the error rate by 10%.",
""
]
} |
1611.08107 | 2558999206 | Training data are critical in face recognition systems. However, labeling large scale face data for a particular domain is very tedious. In this paper, we propose a method to automatically and incrementally construct datasets from massive weakly labeled data of the target domain, which are readily available on the Internet, with the help of a pretrained face model. More specifically, given a large scale weakly labeled dataset in which each face image is associated with a label, i.e. the name of an identity, we create a graph for each identity with edges linking matched faces verified by the existing model under a tight threshold. Then we use the maximal subgraph as the cleaned data for that identity. With the cleaned dataset, we update the existing face model and use the new model to filter the original dataset to get a larger cleaned dataset. We collect a large weakly labeled dataset containing 530,560 Asian face images of 7,962 identities from the Internet, which will be published for the study of face recognition. By running the filtering process, we obtain a cleaned dataset (99.7%+ purity) of size 223,767 (recall 70.9%). On our testing dataset of Asian faces, the model trained by the cleaned dataset achieves a recognition rate of 93.1%, which obviously outperforms the model trained by the public dataset CASIA, whose recognition rate is 85.9%. | In order to fill this gap, a large scale public dataset, CASIA @cite_12 , was provided by Yi et al. This dataset contains 500,000 images of 10,000 celebrities collected from the IMDb website. Similarly, an even larger dataset, called MS-Celeb-1M, has been proposed to advance the community @cite_15 . A common property of these datasets is that the face images are usually from western celebrities, and models trained by these datasets are less optimal on eastern faces. | {
"cite_N": [
"@cite_15",
"@cite_12"
],
"mid": [
"2952419167",
"1509966554"
],
"abstract": [
"In this paper, we design a benchmark task and provide the associated datasets for recognizing face images and link them to corresponding entity keys in a knowledge base. More specifically, we propose a benchmark task to recognize one million celebrities from their face images, by using all the possibly collected face images of this individual on the web as training data. The rich information provided by the knowledge base helps to conduct disambiguation and improve the recognition accuracy, and contributes to various real-world applications, such as image captioning and news video analysis. Associated with this task, we design and provide concrete measurement set, evaluation protocol, as well as training data. We also present in details our experiment setup and report promising baseline results. Our benchmark task could lead to one of the largest classification problems in computer vision. To the best of our knowledge, our training dataset, which contains 10M images in version 1, is the largest publicly available one in the world.",
"Pushed by big data and deep convolutional neural networks (CNN), the performance of face recognition is becoming comparable to human. Using private large scale training datasets, several groups achieve very high performance on LFW, i.e., 97% to 99%. While there are many open source implementations of CNN, no large scale face dataset is publicly available. The current situation in the field of face recognition is that data is more important than algorithms. To solve this problem, this paper proposes a semi-automatic way to collect face images from the Internet and builds a large scale dataset containing about 10,000 subjects and 500,000 images, called CASIA-WebFace. Based on the database, we use an 11-layer CNN to learn discriminative representations and obtain state-of-the-art accuracy on LFW and YTF. The publication of CASIA-WebFace will attract more research groups to this field and accelerate the development of face recognition in the wild."
]
} |
1611.08107 | 2558999206 | Training data are critical in face recognition systems. However, labeling large scale face data for a particular domain is very tedious. In this paper, we propose a method to automatically and incrementally construct datasets from massive weakly labeled data of the target domain, which are readily available on the Internet, with the help of a pretrained face model. More specifically, given a large scale weakly labeled dataset in which each face image is associated with a label, i.e. the name of an identity, we create a graph for each identity with edges linking matched faces verified by the existing model under a tight threshold. Then we use the maximal subgraph as the cleaned data for that identity. With the cleaned dataset, we update the existing face model and use the new model to filter the original dataset to get a larger cleaned dataset. We collect a large weakly labeled dataset containing 530,560 Asian face images of 7,962 identities from the Internet, which will be published for the study of face recognition. By running the filtering process, we obtain a cleaned dataset (99.7%+ purity) of size 223,767 (recall 70.9%). On our testing dataset of Asian faces, the model trained by the cleaned dataset achieves a recognition rate of 93.1%, which obviously outperforms the model trained by the public dataset CASIA, whose recognition rate is 85.9%. | Compared with datasets, face recognition methods have gained much more attention, evolving from shallow models to deep models. The shallow models, e.g. Eigenface @cite_2 , Fisherface @cite_1 , Gabor based LDA @cite_14 and LBP based LDA @cite_0 , usually rely on raw pixels or hand-crafted features and were evaluated on early datasets collected in controlled environments. Recently, a set of deep face models have been proposed and greatly advanced the progress @cite_16 @cite_3 @cite_12 @cite_5 . DeepFace @cite_16 applies 3D alignment to warp faces to frontal views and learns deep face representations with 4,000 subjects. DeepID @cite_3 uses a set of small networks, with each network observing a patch of the face region for recognition. FaceNet @cite_5 is another deep face model proposed recently, which is trained with relative distance constraints using one large network. Using a huge dataset, FaceNet achieves 99.6% | {
"cite_N": [
"@cite_14",
"@cite_1",
"@cite_3",
"@cite_0",
"@cite_2",
"@cite_5",
"@cite_16",
"@cite_12"
],
"mid": [
"",
"2121647436",
"",
"",
"2138451337",
"2096733369",
"2145287260",
"1509966554"
],
"abstract": [
"",
"We develop a face recognition algorithm which is insensitive to large variation in lighting direction and facial expression. Taking a pattern classification approach, we consider each pixel in an image as a coordinate in a high-dimensional space. We take advantage of the observation that the images of a particular face, under varying illumination but fixed pose, lie in a 3D linear subspace of the high dimensional image space-if the face is a Lambertian surface without shadowing. However, since faces are not truly Lambertian surfaces and do indeed produce self-shadowing, images will deviate from this linear subspace. Rather than explicitly modeling this deviation, we linearly project the image into a subspace in a manner which discounts those regions of the face with large deviation. Our projection method is based on Fisher's linear discriminant and produces well separated classes in a low-dimensional subspace, even under severe variation in lighting and facial expressions. The eigenface technique, another method based on linearly projecting the image space to a low dimensional subspace, has similar computational requirements. Yet, extensive experimental results demonstrate that the proposed \"Fisherface\" method has error rates that are lower than those of the eigenface technique for tests on the Harvard and Yale face databases.",
"",
"",
"We have developed a near-real-time computer system that can locate and track a subject's head, and then recognize the person by comparing characteristics of the face to those of known individuals. The computational approach taken in this system is motivated by both physiology and information theory, as well as by the practical requirements of near-real-time performance and accuracy. Our approach treats the face recognition problem as an intrinsically two-dimensional (2-D) recognition problem rather than requiring recovery of three-dimensional geometry, taking advantage of the fact that faces are normally upright and thus may be described by a small set of 2-D characteristic views. The system functions by projecting face images onto a feature space that spans the significant variations among known face images. The significant features are known as \"eigenfaces,\" because they are the eigenvectors (principal components) of the set of faces; they do not necessarily correspond to features such as eyes, ears, and noses. The projection operation characterizes an individual face by a weighted sum of the eigenface features, and so to recognize a particular face it is necessary only to compare these weights to those of known individuals. Some particular advantages of our approach are that it provides for the ability to learn and later recognize new faces in an unsupervised manner, and that it is easy to implement using a neural network architecture.",
"Despite significant recent advances in the field of face recognition [10, 14, 15, 17], implementing face verification and recognition efficiently at scale presents serious challenges to current approaches. In this paper we present a system, called FaceNet, that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity. Once this space has been produced, tasks such as face recognition, verification and clustering can be easily implemented using standard techniques with FaceNet embeddings as feature vectors.",
"In modern face recognition, the conventional pipeline consists of four stages: detect => align => represent => classify. We revisit both the alignment step and the representation step by employing explicit 3D face modeling in order to apply a piecewise affine transformation, and derive a face representation from a nine-layer deep neural network. This deep network involves more than 120 million parameters using several locally connected layers without weight sharing, rather than the standard convolutional layers. Thus we trained it on the largest facial dataset to-date, an identity labeled dataset of four million facial images belonging to more than 4,000 identities. The learned representations coupling the accurate model-based alignment with the large facial database generalize remarkably well to faces in unconstrained environments, even with a simple classifier. Our method reaches an accuracy of 97.35% on the Labeled Faces in the Wild (LFW) dataset, reducing the error of the current state of the art by more than 27%, closely approaching human-level performance.",
"Pushed by big data and deep convolutional neural networks (CNNs), the performance of face recognition is becoming comparable to that of humans. Using private large scale training datasets, several groups achieve very high performance on LFW, i.e., 97% to 99%. While there are many open source implementations of CNNs, no large scale face dataset is publicly available. The current situation in the field of face recognition is that data is more important than algorithms. To solve this problem, this paper proposes a semi-automatic way to collect face images from the Internet and builds a large scale dataset containing about 10,000 subjects and 500,000 images, called CASIA-WebFace. Based on this database, we use an 11-layer CNN to learn discriminative representations and obtain state-of-the-art accuracy on LFW and YTF. The publication of CASIA-WebFace will attract more research groups to this field and accelerate the development of face recognition in the wild."
]
} |
1611.08107 | 2558999206 | Training data are critical in face recognition systems. However, labeling large scale face data for a particular domain is very tedious. In this paper, we propose a method to automatically and incrementally construct datasets from massive weakly labeled data of the target domain which are readily available on the Internet, with the help of a pretrained face model. More specifically, given a large scale weakly labeled dataset in which each face image is associated with a label, i.e. the name of an identity, we create a graph for each identity with edges linking matched faces verified by the existing model under a tight threshold. Then we use the maximal subgraph as the cleaned data for that identity. With the cleaned dataset, we update the existing face model and use the new model to filter the original dataset to get a larger cleaned dataset. We collect a large weakly labeled dataset containing 530,560 Asian face images of 7,962 identities from the Internet, which will be published for the study of face recognition. By running the filtering process, we obtain a cleaned dataset (99.7%+ purity) of size 223,767 (recall 70.9%). On our testing dataset of Asian faces, the model trained on the cleaned dataset achieves a recognition rate of 93.1%, which clearly outperforms the model trained on the public dataset CASIA, whose recognition rate is 85.9%. | Transfer learning has long been studied due to its importance in practice @cite_4 . Recently, several approaches have also been proposed for face verification. Xudong proposed to use a Joint Bayesian model with KL regularization for the case where only a limited number of training examples of the target domain are available @cite_9 . The authors of @cite_7 proposed an information-theoretic approach to narrow the representation gap between photos and sketches.
Though the direct output of our method is a dataset, we can obtain a new model immediately by applying this dataset to train or finetune a model, which serves the same goal as transfer learning. | {
"cite_N": [
"@cite_9",
"@cite_4",
"@cite_7"
],
"mid": [
"",
"2103519186",
"2034136097"
],
"abstract": [
"",
"To learn a new visual category from few examples, prior knowledge from unlabeled data as well as previous related categories may be useful. We develop a new method for transfer learning which exploits available unlabeled data and an arbitrary kernel function; we form a representation based on kernel distances to a large set of unlabeled data points. To transfer knowledge from previous related problems we observe that a category might be learnable using only a small subset of reference prototypes. Related problems may share a significant number of relevant prototypes; we find such a concise representation by performing a joint loss minimization over the training sets of related problems with a shared regularization penalty that minimizes the total number of prototypes involved in the approximation. This optimization problem can be formulated as a linear program that can be solved efficiently. We conduct experiments on a news-topic prediction task where the goal is to predict whether an image belongs to a particular news topic. Our results show that when only few examples are available for training a target topic, leveraging knowledge learnt from other topics can significantly improve performance.",
"Automatic face photo-sketch recognition has important applications for law enforcement. Recent research has focused on transforming photos and sketches into the same modality for matching or developing advanced classification algorithms to reduce the modality gap between features extracted from photos and sketches. In this paper, we propose a new inter-modality face recognition approach by reducing the modality gap at the feature extraction stage. A new face descriptor based on coupled information-theoretic encoding is used to capture discriminative local face structures and to effectively match photos and sketches. Guided by maximizing the mutual information between photos and sketches in the quantized feature spaces, the coupled encoding is achieved by the proposed coupled information-theoretic projection tree, which is extended to the randomized forest to further boost the performance. We create the largest face sketch database including sketches of 1, 194 people from the FERET database. Experiments on this large scale dataset show that our approach significantly outperforms the state-of-the-art methods."
]
} |
1611.08307 | 2557805692 | To enhance developer productivity, all modern integrated development environments (IDEs) include code suggestion functionality that proposes likely next tokens at the cursor. While current IDEs work well for statically-typed languages, their reliance on type annotations means that they do not provide the same level of support for dynamic programming languages as for statically-typed languages. Moreover, suggestion engines in modern IDEs do not propose expressions or multi-statement idiomatic code. Recent work has shown that language models can improve code suggestion systems by learning from software repositories. This paper introduces a neural language model with a sparse pointer network aimed at capturing very long range dependencies. We release a large-scale code suggestion corpus of 41M lines of Python code crawled from GitHub. On this corpus, we found standard neural language models to perform well at suggesting local phenomena, but struggle to refer to identifiers that are introduced many tokens in the past. By augmenting a neural language model with a pointer network specialized in referring to predefined classes of identifiers, we obtain a much lower perplexity and a 5 percentage points increase in accuracy for code suggestion compared to an LSTM baseline. In fact, this increase in code suggestion accuracy is due to a 13 times more accurate prediction of identifiers. Furthermore, a qualitative analysis shows this model indeed captures interesting long-range dependencies, like referring to a class member defined over 60 tokens in the past. | Previous code suggestion work using methods from statistical NLP has mostly focused on @math -gram models. Much of this work is inspired by @cite_6 who argued that real programs fall into a much smaller space than the flexibility of programming languages allows. They were able to capture the repetitiveness and predictable statistical properties of real programs using language models. 
Subsequently, @cite_9 improved upon hindle-barr's work by adding a cache mechanism that allowed them to exploit locality stemming from the specialisation and decoupling of program modules. tu-su-localness's idea of adding a cache mechanism to the language model is specifically designed to exploit the properties of source code, and thus follows the same aim as the sparse attention mechanism introduced in this paper. | {
"cite_N": [
"@cite_9",
"@cite_6"
],
"mid": [
"2165747537",
"2142403498"
],
"abstract": [
"The n-gram language model, which has its roots in statistical natural language processing, has been shown to successfully capture the repetitive and predictable regularities (“naturalness\") of source code, and help with tasks such as code suggestion, porting, and designing assistive coding devices. However, we show in this paper that this natural-language-based model fails to exploit a special property of source code: localness. We find that human-written programs are localized: they have useful local regularities that can be captured and exploited. We introduce a novel cache language model that consists of both an n-gram and an added “cache\" component to exploit localness. We show empirically that the additional cache component greatly improves the n-gram approach by capturing the localness of software, as measured by both cross-entropy and suggestion accuracy. Our model’s suggestion accuracy is actually comparable to a state-of-the-art, semantically augmented language model; but it is simpler and easier to implement. Our cache language model requires nothing beyond lexicalization, and thus is applicable to all programming languages.",
"Natural languages like English are rich, complex, and powerful. The highly creative and graceful use of languages like English and Tamil, by masters like Shakespeare and Avvaiyar, can certainly delight and inspire. But in practice, given cognitive constraints and the exigencies of daily life, most human utterances are far simpler and much more repetitive and predictable. In fact, these utterances can be very usefully modeled using modern statistical methods. This fact has led to the phenomenal success of statistical approaches to speech recognition, natural language translation, question-answering, and text mining and comprehension. We begin with the conjecture that most software is also natural, in the sense that it is created by humans at work, with all the attendant constraints and limitations — and thus, like natural language, it is also likely to be repetitive and predictable. We then proceed to ask whether a) code can be usefully modeled by statistical language models and b) such models can be leveraged to support software engineers. Using the widely adopted n-gram model, we provide empirical evidence supportive of a positive answer to both these questions. We show that code is also very repetitive, and in fact even more so than natural languages. As an example use of the model, we have developed a simple code completion engine for Java that, despite its simplicity, already improves Eclipse's built-in completion capability. We conclude the paper by laying out a vision for future research in this area."
]
} |
1611.08307 | 2557805692 | To enhance developer productivity, all modern integrated development environments (IDEs) include code suggestion functionality that proposes likely next tokens at the cursor. While current IDEs work well for statically-typed languages, their reliance on type annotations means that they do not provide the same level of support for dynamic programming languages as for statically-typed languages. Moreover, suggestion engines in modern IDEs do not propose expressions or multi-statement idiomatic code. Recent work has shown that language models can improve code suggestion systems by learning from software repositories. This paper introduces a neural language model with a sparse pointer network aimed at capturing very long range dependencies. We release a large-scale code suggestion corpus of 41M lines of Python code crawled from GitHub. On this corpus, we found standard neural language models to perform well at suggesting local phenomena, but struggle to refer to identifiers that are introduced many tokens in the past. By augmenting a neural language model with a pointer network specialized in referring to predefined classes of identifiers, we obtain a much lower perplexity and a 5 percentage points increase in accuracy for code suggestion compared to an LSTM baseline. In fact, this increase in code suggestion accuracy is due to a 13 times more accurate prediction of identifiers. Furthermore, a qualitative analysis shows this model indeed captures interesting long-range dependencies, like referring to a class member defined over 60 tokens in the past. | While the majority of preceding work trained on small corpora, @cite_2 created a corpus of @math M lines of Java code which they analysed with @math -gram language models. The size of the corpus allowed them to train a single language model that was effective across multiple different project domains. 
@cite_8 later demonstrated that neural language models outperform @math -gram models for code suggestion. They compared various @math -gram models (up to nine grams), including tu-su-localness's cache model, with a basic RNN neural language model. @cite_7 compared white-toward-deep-learning's basic RNN with LSTMs and found that the latter are better at code suggestion due to their improved ability to learn long-range dependencies found in source code. Our paper extends this line of work by introducing a sparse attention model that captures even longer dependencies. | {
"cite_N": [
"@cite_8",
"@cite_7",
"@cite_2"
],
"mid": [
"2018389835",
"",
"2148190602"
],
"abstract": [
"Deep learning subsumes algorithms that automatically learn compositional representations. The ability of these models to generalize well has ushered in tremendous advances in many fields such as natural language processing (NLP). Recent research in the software engineering (SE) community has demonstrated the usefulness of applying NLP techniques to software corpora. Hence, we motivate deep learning for software language modeling, highlighting fundamental differences between state-of-the-practice software language models and connectionist models. Our deep learning models are applicable to source code files (since they only require lexically analyzed source code written in any programming language) and other types of artifacts. We show how a particular deep learning model can remember its state to effectively model sequential data, e.g., streaming software tokens, and the state is shown to be much more expressive than discrete tokens in a prefix. Then we instantiate deep learning models and show that deep learning induces high-quality models compared to n-grams and cache-based n-grams on a corpus of Java projects. We experiment with two of the models' hyperparameters, which govern their capacity and the amount of context they use to inform predictions, before building several committees of software language models to aid generalization. Then we apply the deep learning models to code suggestion and demonstrate their effectiveness at a real SE task compared to state-of-the-practice models. Finally, we propose avenues for future work, where deep learning can be brought to bear to support model-based testing, improve software lexicons, and conceptualize software artifacts. Thus, our work serves as the first step toward deep learning software repositories.",
"",
"The tens of thousands of high-quality open source software projects on the Internet raise the exciting possibility of studying software development by finding patterns across truly large source code repositories. This could enable new tools for developing code, encouraging reuse, and navigating large projects. In this paper, we build the first giga-token probabilistic language model of source code, based on 352 million lines of Java. This is 100 times the scale of the pioneering work that preceded it. The giga-token model is significantly better at the code suggestion task than previous models. More broadly, our approach provides a new “lens” for analyzing software projects, enabling new complexity metrics based on statistical analysis of large corpora. We call these metrics data-driven complexity metrics. We propose new metrics that measure the complexity of a code module and the topical centrality of a module to a software project. In particular, it is possible to distinguish reusable utility classes from classes that are part of a program's core logic based solely on general information theoretic criteria."
]
} |
1611.08307 | 2557805692 | To enhance developer productivity, all modern integrated development environments (IDEs) include code suggestion functionality that proposes likely next tokens at the cursor. While current IDEs work well for statically-typed languages, their reliance on type annotations means that they do not provide the same level of support for dynamic programming languages as for statically-typed languages. Moreover, suggestion engines in modern IDEs do not propose expressions or multi-statement idiomatic code. Recent work has shown that language models can improve code suggestion systems by learning from software repositories. This paper introduces a neural language model with a sparse pointer network aimed at capturing very long range dependencies. We release a large-scale code suggestion corpus of 41M lines of Python code crawled from GitHub. On this corpus, we found standard neural language models to perform well at suggesting local phenomena, but struggle to refer to identifiers that are introduced many tokens in the past. By augmenting a neural language model with a pointer network specialized in referring to predefined classes of identifiers, we obtain a much lower perplexity and a 5 percentage points increase in accuracy for code suggestion compared to an LSTM baseline. In fact, this increase in code suggestion accuracy is due to a 13 times more accurate prediction of identifiers. Furthermore, a qualitative analysis shows this model indeed captures interesting long-range dependencies, like referring to a class member defined over 60 tokens in the past. | An alternative to our purely lexical approach to code suggestion involves the use of probabilistic context-free grammars (PCFGs) which exploit the formal grammar specifications and well-defined, deterministic parsers available for source code. These were used by @cite_1 to extract idiomatic patterns from source code. 
A weakness of PCFGs is their inability to model context-dependent rules of programming languages such as that variables need to be declared before being used. @cite_3 added context-aware variables to their PCFG model in order to capture such rules. | {
"cite_N": [
"@cite_1",
"@cite_3"
],
"mid": [
"2116272605",
"2962725091"
],
"abstract": [
"We present the first method for automatically mining code idioms from a corpus of previously written, idiomatic software projects. We take the view that a code idiom is a syntactic fragment that recurs across projects and has a single semantic purpose. Idioms may have metavariables, such as the body of a for loop. Modern IDEs commonly provide facilities for manually defining idioms and inserting them on demand, but this does not help programmers to write idiomatic code in languages or using libraries with which they are unfamiliar. We present Haggis, a system for mining code idioms that builds on recent advanced techniques from statistical natural language processing, namely, nonparametric Bayesian probabilistic tree substitution grammars. We apply Haggis to several of the most popular open source projects from GitHub. We present a wide range of evidence that the resulting idioms are semantically meaningful, demonstrating that they do indeed recur across software projects and that they occur more frequently in illustrative code examples collected from a Q&A site. Manual examination of the most common idioms indicate that they describe important program concepts, including object creation, exception handling, and resource management.",
"We study the problem of building generative models of natural source code (NSC); that is, source code written by humans and meant to be understood by humans. Our primary contribution is to describe new generative models that are tailored to NSC. The models are based on probabilistic context free grammars (PCFGs) and neuro-probabilistic language models (Mnih & Teh, 2012), which are extended to incorporate additional source code-specific structure. These models can be efficiently trained on a corpus of source code and outperform a variety of less structured baselines in terms of predictive log likelihoods on held-out data."
]
} |
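The cleaning step described in the 1611.08107 rows above (a graph per identity whose edges link face pairs the pretrained model verifies under a tight threshold, keeping the maximal linked subgraph as cleaned data) can be sketched as a connected-components computation. The sketch below is an illustrative assumption rather than the paper's implementation: it interprets "maximal subgraph" as the largest connected component, and the function name `largest_component` and its union-find representation are hypothetical.

```python
from collections import defaultdict

def largest_component(num_faces, verified_pairs):
    """Return the largest set of mutually linked face indices.

    Faces are graph nodes; `verified_pairs` are edges for pairs that a
    pretrained face model accepted under a tight similarity threshold.
    The biggest connected component is kept as the cleaned data for
    one identity.
    """
    parent = list(range(num_faces))

    def find(x):
        # Find the root of x with path halving for near-constant time.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for a, b in verified_pairs:
        union(a, b)

    # Group faces by their component root and keep the largest group.
    groups = defaultdict(set)
    for i in range(num_faces):
        groups[find(i)].add(i)
    return max(groups.values(), key=len)
```

In a real pipeline, `verified_pairs` would come from thresholding the pretrained model's pairwise similarity scores, and the whole procedure would be iterated: retrain on the cleaned data, then re-filter the weakly labeled set with the updated model.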