aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1703.00366 | 2952005775 | Information about people's movements and the locations they visit enables an increasing number of mobility analytics applications, e.g., in the context of urban and transportation planning. In this setting, rather than collecting or sharing raw data, entities often use aggregation as a privacy protection mechanism, aiming to hide individual users' location traces. Furthermore, to bound information leakage from the aggregates, they can perturb the input of the aggregation or its output to ensure that these are differentially private. In this paper, we set out to evaluate the impact of releasing aggregate location time-series on the privacy of individuals contributing to the aggregation. We introduce a framework allowing us to reason about privacy against an adversary attempting to predict users' locations or recover their mobility patterns. We formalize these attacks as inference problems, and discuss a few strategies to model the adversary's prior knowledge based on the information she may have access to. We then use the framework to quantify the privacy loss stemming from aggregate location data, with and without the protection of differential privacy, using two real-world mobility datasets. We find that aggregates do leak information about individuals' punctual locations and mobility profiles. The density of the observations, as well as their timing, plays an important role, e.g., regular patterns during peak hours are better protected than sporadic movements. Finally, our evaluation shows that both output and input perturbation offer little additional protection, unless they introduce large amounts of noise, ultimately destroying the utility of the data. | Attacks on Location Privacy. Prior work presenting attacks on location privacy mostly focuses on inferring users' whereabouts from access to individuals' location data, whether obfuscated or not. Some show that both anonymization and k-anonymity-based mechanisms are ineffective at protecting privacy @cite_29 @cite_35 @cite_6 @cite_38 @cite_5. (Also see the surveys by Krumm @cite_7 and Ghinita @cite_21.) More recently, researchers analyzed the protection provided by location proximity schemes adopted by social networks @cite_14 @cite_15 @cite_13, confirming that mechanisms like cloaking or naive perturbation are also unsuccessful. | {
"cite_N": [
"@cite_35",
"@cite_38",
"@cite_14",
"@cite_7",
"@cite_29",
"@cite_21",
"@cite_6",
"@cite_5",
"@cite_15",
"@cite_13"
],
"mid": [
"",
"2120911939",
"2026553318",
"2170166043",
"1536564267",
"1996587544",
"2126729912",
"",
"",
""
],
"abstract": [
"",
"It is a well-known fact that the progress of personal communication devices leads to serious concerns about privacy in general, and location privacy in particular. As a response to these issues, a number of Location-Privacy Protection Mechanisms (LPPMs) have been proposed during the last decade. However, their assessment and comparison remains problematic because of the absence of a systematic method to quantify them. In particular, the assumptions about the attacker's model tend to be incomplete, with the risk of a possibly wrong estimation of the users' location privacy. In this paper, we address these issues by providing a formal framework for the analysis of LPPMs, it captures, in particular, the prior information that might be available to the attacker, and various attacks that he can perform. The privacy of users and the success of the adversary in his location-inference attacks are two sides of the same coin. We revise location privacy by giving a simple, yet comprehensive, model to formulate all types of location-information disclosure attacks. Thus, by formalizing the adversary's performance, we propose and justify the right metric to quantify location privacy. We clarify the difference between three aspects of the adversary's inference attacks, namely their accuracy, certainty, and correctness. We show that correctness determines the privacy of users. In other words, the expected estimation error of the adversary is the metric of users' location privacy. We rely on well-established statistical methods to formalize and implement the attacks in a tool: the Location-Privacy Meter that measures the location privacy of mobile users, given various LPPMs. In addition to evaluating some example LPPMs, by using our tool, we assess the appropriateness of some popular metrics for location privacy: entropy and k-anonymity. The results show a lack of satisfactory correlation between these two metrics and the success of the adversary in inferring the users' actual locations.",
"Location proximity schemes have been adopted by social networks and other smartphone apps as a means of balancing user privacy with utility. However, misconceptions about the privacy offered by proximity services have rendered users vulnerable to trilateration attacks that can expose their location. Such attacks have received major publicity. and, as a result, popular service providers have deployed countermeasures for preventing user discovery attacks. In this paper, we systematically assess the effectiveness of the defenses that proximity services have deployed against adversaries attempting to identify a user's location. We provide the theoretical foundation for formalizing the problem under different proximity models, design practical attacks for each case, and prove tight bounds on the number of queries required for carrying out the attacks. To evaluate the completeness of our approach, we conduct extensive experiments against popular services. While we identify a diverse set of defense techniques that prevent trilateration attacks, we demonstrate their inefficiency against more elaborate attacks. In fact, we pinpoint Facebook users within 5 meters of their exact location, and 90 of Foursquare users within 15 meters. Our attacks are extremely efficient and complete within 3-7 seconds. The severity of our attacks was acknowledged by Facebook and Foursquare, both of which have followed our recommendations and adopted spatial cloaking to protect their users. Furthermore, our findings have wide implications as numerous popular apps with a massive user base remain vulnerable to this significant threat.",
"This is a literature survey of computational location privacy, meaning computation-based privacy mechanisms that treat location data as geometric information. This definition includes privacy-preserving algorithms like anonymity and obfuscation as well as privacy-breaking algorithms that exploit the geometric nature of the data. The survey omits non-computational techniques like manually inspecting geotagged photos, and it omits techniques like encryption or access control that treat location data as general symbols. The paper reviews studies of peoples' attitudes about location privacy, computational threats on leaked location data, and computational countermeasures for mitigating these threats.",
"Many applications benefit from user location data, but location data raises privacy concerns. Anonymization can protect privacy, but identities can sometimes be inferred from supposedly anonymous data. This paper studies a new attack on the anonymity of location data. We show that if the approximate locations of an individual's home and workplace can both be deduced from a location trace, then the median size of the individual's anonymity set in the U.S. working population is 1, 21 and 34,980, for locations known at the granularity of a census block, census track and county respectively. The location data of people who live and work in different regions can be re-identified even more easily. Our results show that the threat of re-identification for location data is much greater when the individual's home and work locations can both be deduced from the data. To preserve anonymity, we offer guidance for obfuscating location traces before they are disclosed.",
"Sharing of location data enables numerous exciting applications, such as location-based queries, location-based social recommendations, monitoring of traffic and air pollution levels, etc. Disclosing exact user locations raises serious privacy concerns, as locations may give away sensitive information about individuals' health status, alternative lifestyles, political and religious affiliations, etc. Preserving location privacy is an essential requirement towards the successful deployment of location-based applications. These lecture notes provide an overview of the state-of-the-art in location privacy protection. A diverse body of solutions is reviewed, including methods that use location generalization, cryptographic techniques or differential privacy. The most prominent results are discussed, and promising directions for future work are identified. Table of Contents: Introduction Privacy-Preserving Spatial Transformations Cryptographic Approaches Hybrid Approaches Private Matching of Spatial Datasets Trajectory Anonymization Differentially Private Publication of Spatial Datasets Conclusions",
"There is a rich collection of literature that aims at protecting the privacy of users querying location-based services. One of the most popular location privacy techniques consists in cloaking users' locations such that k users appear as potential senders of a query, thus achieving k-anonymity. This paper analyzes the effectiveness of k-anonymity approaches for protecting location privacy in the presence of various types of adversaries. The unraveling of the scheme unfolds the inconsistency between its components, mainly the cloaking mechanism and the k-anonymity metric. We show that constructing cloaking regions based on the users' locations does not reliably relate to location privacy, and argue that this technique may even be detrimental to users' location privacy. The uncovered flaws imply that existing k-anonymity scheme is a tattered cloak for protecting location privacy.",
"",
"",
""
]
} |
1703.00366 | 2952005775 | Information about people's movements and the locations they visit enables an increasing number of mobility analytics applications, e.g., in the context of urban and transportation planning. In this setting, rather than collecting or sharing raw data, entities often use aggregation as a privacy protection mechanism, aiming to hide individual users' location traces. Furthermore, to bound information leakage from the aggregates, they can perturb the input of the aggregation or its output to ensure that these are differentially private. In this paper, we set out to evaluate the impact of releasing aggregate location time-series on the privacy of individuals contributing to the aggregation. We introduce a framework allowing us to reason about privacy against an adversary attempting to predict users' locations or recover their mobility patterns. We formalize these attacks as inference problems, and discuss a few strategies to model the adversary's prior knowledge based on the information she may have access to. We then use the framework to quantify the privacy loss stemming from aggregate location data, with and without the protection of differential privacy, using two real-world mobility datasets. We find that aggregates do leak information about individuals' punctual locations and mobility profiles. The density of the observations, as well as their timing, plays an important role, e.g., regular patterns during peak hours are better protected than sporadic movements. Finally, our evaluation shows that both output and input perturbation offer little additional protection, unless they introduce large amounts of noise, ultimately destroying the utility of the data. | Privacy-Preserving Aggregation. There are two main privacy-enhancing strategies to collect location data and compute aggregate time-series. (1) Cryptographic protocols for private aggregation can let a server obtain aggregates without learning users' individual records @cite_24 @cite_20 @cite_18, but they do not consider the privacy loss from learning and/or releasing exact statistics. We have evaluated this scenario in . (2) Perturbation techniques can be used to hide individual inputs rather than encrypting them. @cite_16 use quadtree spatial decomposition and density-based clustering for privately mining location databases, while @cite_17's framework enables the collection of quantitative visit information for sets of locations following a distributed approach. @cite_25 focus on spatial data aggregation in the local setting and propose a framework that allows an untrusted server to learn the user distribution over a spatial domain, relying on a personalized count estimation protocol and clustering. As discussed earlier, SpotMe @cite_36 uses an algorithm based on Randomized Response @cite_42 to estimate the number of people in geographic locations. We have evaluated this kind of solution, specifically SpotMe @cite_36, in . | {
"cite_N": [
"@cite_18",
"@cite_36",
"@cite_42",
"@cite_24",
"@cite_16",
"@cite_25",
"@cite_20",
"@cite_17"
],
"mid": [
"",
"2129897549",
"",
"2149921703",
"2076675165",
"2440056311",
"2523313074",
"2134750640"
],
"abstract": [
"",
"Nowadays companies increasingly aggregate location data from different sources on the Internet to offer location-based services such as estimating current road traffic conditions, and finding the best nightlife locations in a city. However, these services have also caused outcries over privacy issues. As the volume of location data being aggregated expands, the comfort of sharing one's whereabouts with the public at large will unavoidably decrease. Existing ways of aggregating location data in the privacy literature are largely centralized in that they rely on a trusted location-based service. Instead, we propose a piece of software (Spot Me) that can run on a mobile phone and is able to estimate the number of people in geographic locations in a privacy-preserving way: accurate estimations are made possible in the presence of privacy-conscious users who report, in addition to their actual locations, a very large number of erroneous locations. The erroneous locations are selected by a randomized response algorithm. We evaluate the accuracy of Spot Me in estimating the number of people upon two very different realistic mobility traces: the mobility of vehicles in urban, suburban and rural areas, and the mobility of subway train passengers in Greater London. We find that erroneous locations have little effect on the estimations (in both traces, the error is below 18 for a situation in which more than 99 of the locations are erroneous), yet they guarantee that users cannot be localized with high probability. Also, the computational and storage overheads for a mobile phone running Spot Me are negligible, and the communication overhead is limited.",
"",
"A significant and growing class of location-based mobile applications aggregate position data from individual devices at a server and compute aggregate statistics over these position streams. Because these devices can be linked to the movement of individuals, there is significant danger that the aggregate computation will violate the location privacy of individuals. This paper develops and evaluates PrivStats, a system for computing aggregate statistics over location data that simultaneously achieves two properties: first, provable guarantees on location privacy even in the face of any side information about users known to the server, and second, privacy-preserving accountability (i.e., protection against abusive clients uploading large amounts of spurious data). PrivStats achieves these properties using a new protocol for uploading and aggregating data anonymously as well as an efficient zero-knowledge proof of knowledge protocol we developed from scratch for accountability. We implemented our system on Nexus One smartphones and commodity servers. Our experimental results demonstrate that PrivStats is a practical system: computing a common aggregate (e.g., count) over the data of 10,000 clients takes less than 0.46 s at the server and the protocol has modest latency (0.6 s) to upload data from a Nexus phone. We also validated our protocols on real driver traces from the CarTel project.",
"One main concern for individuals to participate in the data collection of personal location history records is the disclosure of their location and related information when a user queries for statistical or pattern mining results derived from these records. In this paper, we investigate how the privacy goal that the inclusion of one's location history in a statistical database with location pattern mining capabilities does not substantially increase one's privacy risk. In particular, we propose a differentially private pattern mining algorithm for interesting geographic location discovery using a region quadtree spatial decomposition to preprocess the location points followed by applying a density-based clustering algorithm. A differentially private region quadtree is used for both de-noising the spatial domain and identifying the likely geographic regions containing the interesting locations. Then, a differential privacy mechanism is applied to the algorithm outputs, namely: the interesting regions and their corresponding stay point counts. The quadtree spatial decomposition enables one to obtain a localized reduced sensitivity to achieve the differential privacy goal and accurate outputs. Experimental results on synthetic datasets are used to show the feasibility of the proposed privacy preserving location pattern mining algorithm.",
"With the deep penetration of the Internet and mobile devices, privacy preservation in the local setting has become increasingly relevant. The local setting refers to the scenario where a user is willing to share his her information only if it has been properly sanitized before leaving his her own device. Moreover, a user may hold only a single data element to share, instead of a database. Despite its ubiquitousness, the above constraints make the local setting substantially more challenging than the traditional centralized or distributed settings. In this paper, we initiate the study of private spatial data aggregation in the local setting, which finds its way in many real-world applications, such as Waze and Google Maps. In response to users' varied privacy requirements that are natural in the local setting, we propose a new privacy model called personalized local differential privacy (PLDP) that allows to achieve desirable utility while still providing rigorous privacy guarantees. We design an efficient personalized count estimation protocol as a building block for achieving PLDP and give theoretical analysis of its utility, privacy and complexity. We then present a novel framework that allows an untrusted server to accurately learn the user distribution over a spatial domain while satisfying PLDP for each user. This is mainly achieved by designing a novel user group clustering algorithm tailored to our problem. We confirm the effectiveness and efficiency of our framework through extensive experiments on multiple real benchmark datasets.",
"Location data can be extremely useful to study commuting patterns and disruptions, as well as to predict real-time traffic volumes. At the same time, however, the fine-grained collection of user locations raises serious privacy concerns, as this can reveal sensitive information about the users, such as, life style, political and religious inclinations, or even identities. In this paper, we study the feasibility of crowd-sourced mobility analytics over aggregate location information: users periodically report their location, using a privacy-preserving aggregation protocol, so that the server can only recover aggregates - i.e., how many, but not which, users are in a region at a given time. We experiment with real-world mobility datasets obtained from the Transport For London authority and the San Francisco Cabs network, and present a novel methodology based on time series modeling that is geared to forecast traffic volumes in regions of interest and to detect mobility anomalies in them. In the presence of anomalies, we also make enhanced traffic volume predictions by feeding our model with additional information from correlated regions. Finally, we present and evaluate a mobile app prototype, called Mobility Data Donors (MDD), in terms of computation, communication, and energy overhead, demonstrating the real-world deployability of our techniques.",
"The organization and planning of services (e.g. shopping facilities, infrastructure) requires quantitative information about the number of customers and their frequency of visiting. In this paper we present a framework which enables the collection of quantitative visit information for arbitrary sets of locations in a distributed and privacy-preserving way. While trajectory analysis is typically performed on a central database requiring the transmission of sensitive personal movement information, the main principle of our approach is the local processing of movement data. Only aggregated statistics are transmitted anonymously to a central coordinator, which generates the global statistics. In this paper we present our approach including the methodical background that enables distributed data processing as well as the architecture of the framework."
]
} |
1703.00366 | 2952005775 | Information about people's movements and the locations they visit enables an increasing number of mobility analytics applications, e.g., in the context of urban and transportation planning. In this setting, rather than collecting or sharing raw data, entities often use aggregation as a privacy protection mechanism, aiming to hide individual users' location traces. Furthermore, to bound information leakage from the aggregates, they can perturb the input of the aggregation or its output to ensure that these are differentially private. In this paper, we set out to evaluate the impact of releasing aggregate location time-series on the privacy of individuals contributing to the aggregation. We introduce a framework allowing us to reason about privacy against an adversary attempting to predict users' locations or recover their mobility patterns. We formalize these attacks as inference problems, and discuss a few strategies to model the adversary's prior knowledge based on the information she may have access to. We then use the framework to quantify the privacy loss stemming from aggregate location data, with and without the protection of differential privacy, using two real-world mobility datasets. We find that aggregates do leak information about individuals' punctual locations and mobility profiles. The density of the observations, as well as their timing, plays an important role, e.g., regular patterns during peak hours are better protected than sporadic movements. Finally, our evaluation shows that both output and input perturbation offer little additional protection, unless they introduce large amounts of noise, ultimately destroying the utility of the data. | Private Location Data Publishing. @cite_52 use synthetic data generation techniques to publish commuting patterns in a differentially private way, while Acs and Castelluccia @cite_27 describe a differentially private scheme to release the spatio-temporal density of Paris regions using records provided by a telco operator. @cite_53 focus on releasing location entropy for ROIs under differential privacy guarantees: they study the bounds of location entropy and show that @math -differential privacy requires an excessive amount of noise, so they use weaker notions that achieve better utility. Besides location-specific private publishing, differential privacy has been proposed as a solution for releasing generic time-series of aggregate statistics. Examples are the various differentially private counting mechanisms by @cite_47, or 's adaptive system @cite_51 that uses a combination of filtering and sampling to increase the utility of differentially private aggregates. Rastogi and Nath @cite_11 use an algorithm based on the Discrete Fourier Transform to privately release aggregate time-series, while @cite_54 combine encryption with data randomization to achieve differential privacy for time-series data. We have evaluated the privacy provided by this approach in , using the schemes in @cite_47 @cite_11. | {
"cite_N": [
"@cite_53",
"@cite_54",
"@cite_52",
"@cite_27",
"@cite_47",
"@cite_51",
"@cite_11"
],
"mid": [
"",
"2146673169",
"2080044359",
"1993599520",
"2012992615",
"2171283104",
"2104803737"
],
"abstract": [
"",
"A private stream aggregation (PSA) system contributes a user's data to a data aggregator without compromising the user's privacy. The system can begin by determining a private key for a local user in a set of users, wherein the sum of the private keys associated with the set of users and the data aggregator is equal to zero. The system also selects a set of data values associated with the local user. Then, the system encrypts individual data values in the set based in part on the private key to produce a set of encrypted data values, thereby allowing the data aggregator to decrypt an aggregate value across the set of users without decrypting individual data values associated with the set of users, and without interacting with the set of users while decrypting the aggregate value. The system also sends the set of encrypted data values to the data aggregator.",
"In this paper, we propose the first formal privacy analysis of a data anonymization process known as the synthetic data generation, a technique becoming popular in the statistics community. The target application for this work is a mapping program that shows the commuting patterns of the population of the United States. The source data for this application were collected by the U.S. Census Bureau, but due to privacy constraints, they cannot be used directly by the mapping program. Instead, we generate synthetic data that statistically mimic the original data while providing privacy guarantees. We use these synthetic data as a surrogate for the original data. We find that while some existing definitions of privacy are inapplicable to our target application, others are too conservative and render the synthetic data useless since they guard against privacy breaches that are very unlikely. Moreover, the data in our target application is sparse, and none of the existing solutions are tailored to anonymize sparse data. In this paper, we propose solutions to address the above issues.",
"With billions of handsets in use worldwide, the quantity of mobility data is gigantic. When aggregated they can help understand complex processes, such as the spread viruses, and built better transportation systems, prevent traffic congestion. While the benefits provided by these datasets are indisputable, they unfortunately pose a considerable threat to location privacy. In this paper, we present a new anonymization scheme to release the spatio-temporal density of Paris, in France, i.e., the number of individuals in 989 different areas of the city released every hour over a whole week. The density is computed from a call-data-record (CDR) dataset, provided by the French Telecom operator Orange, containing the CDR of roughly 2 million users over one week. Our scheme is differential private, and hence, provides provable privacy guarantee to each individual in the dataset. Our main goal with this case study is to show that, even with large dimensional sensitive data, differential privacy can provide practical utility with meaningful privacy guarantee, if the anonymization scheme is carefully designed. This work is part of the national project XData (http: xdata.fr) that aims at combining large (anonymized) datasets provided by different service providers (telecom, electricity, water management, postal service, etc.).",
"We ask the question: how can Web sites and data aggregators continually release updated statistics, and meanwhile preserve each individual user’s privacy? Suppose we are given a stream of 0’s and 1’s. We propose a differentially private continual counter that outputs at every time step the approximate number of 1’s seen thus far. Our counter construction has error that is only poly-log in the number of time steps. We can extend the basic counter construction to allow Web sites to continually give top-k and hot items suggestions while preserving users’ privacy.",
"Sharing real-time aggregate statistics of private data has given much benefit to the public to perform data mining for understanding important phenomena, such as Influenza outbreaks and traffic congestion. However, releasing time-series data with standard differential privacy mechanism has limited utility due to high correlation between data values. We propose FAST, an adaptive system to release real-time aggregate statistics under differential privacy with improved utility. To minimize overall privacy cost, FAST adaptively samples long time-series according to detected data dynamics. To improve the accuracy of data release per time stamp, filtering is used to predict data values at non-sampling points and to estimate true values from noisy observations at sampling points. Our experiments with three real data sets confirm that FAST improves the accuracy of time-series release and has excellent performance even under very small privacy cost.",
"We propose the first differentially private aggregation algorithm for distributed time-series data that offers good practical utility without any trusted server. This addresses two important challenges in participatory data-mining applications where (i) individual users collect temporally correlated time-series data (such as location traces, web history, personal health data), and (ii) an untrusted third-party aggregator wishes to run aggregate queries on the data. To ensure differential privacy for time-series data despite the presence of temporal correlation, we propose the Fourier Perturbation Algorithm (FPAk). Standard differential privacy techniques perform poorly for time-series data. To answer n queries, such techniques can result in a noise of Θ(n) to each query answer, making the answers practically useless if n is large. Our FPAk algorithm perturbs the Discrete Fourier Transform of the query answers. For answering n queries, FPAk improves the expected error from Θ(n) to roughly Θ(k) where k is the number of Fourier coefficients that can (approximately) reconstruct all the n query answers. Our experiments show that k To deal with the absence of a trusted central server, we propose the Distributed Laplace Perturbation Algorithm (DLPA) to add noise in a distributed way in order to guarantee differential privacy. To the best of our knowledge, DLPA is the first distributed differentially private algorithm that can scale with a large number of users: DLPA outperforms the only other distributed solution for differential privacy proposed so far, by reducing the computational load per user from O(U) to O(1) where U is the number of users."
]
} |
1703.00366 | 2952005775 | Information about people's movements and the locations they visit enables an increasing number of mobility analytics applications, e.g., in the context of urban and transportation planning. In this setting, rather than collecting or sharing raw data, entities often use aggregation as a privacy protection mechanism, aiming to hide individual users' location traces. Furthermore, to bound information leakage from the aggregates, they can perturb the input of the aggregation or its output to ensure that these are differentially private. In this paper, we set out to evaluate the impact of releasing aggregate location time-series on the privacy of individuals contributing to the aggregation. We introduce a framework allowing us to reason about privacy against an adversary attempting to predict users' locations or recover their mobility patterns. We formalize these attacks as inference problems, and discuss a few strategies to model the adversary's prior knowledge based on the information she may have access to. We then use the framework to quantify the privacy loss stemming from aggregate location data, with and without the protection of differential privacy, using two real-world mobility datasets. We find that aggregates do leak information about individuals' punctual locations and mobility profiles. The density of the observations, as well as their timing, plays an important role, e.g., regular patterns during peak hours are better protected than sporadic movements. Finally, our evaluation shows that both output and input perturbation offer little additional protection, unless they introduce large amounts of noise, ultimately destroying the utility of the data. | Quantifying Location Privacy. Previous work on privacy quantification has studied the privacy loss incurred when disclosing obfuscated traces of individual users, e.g., when using location-based services. The main work in this area is the quantification framework by @cite_31 @cite_38, which considers a strategic adversary that has prior information about users' mobility patterns, knows the location privacy-protection mechanism they use, and deploys inference attacks based on this information and the observation of the obfuscated traces. | {
"cite_N": [
"@cite_38",
"@cite_31"
],
"mid": [
"2120911939",
"1565026843"
],
"abstract": [
"It is a well-known fact that the progress of personal communication devices leads to serious concerns about privacy in general, and location privacy in particular. As a response to these issues, a number of Location-Privacy Protection Mechanisms (LPPMs) have been proposed during the last decade. However, their assessment and comparison remains problematic because of the absence of a systematic method to quantify them. In particular, the assumptions about the attacker's model tend to be incomplete, with the risk of a possibly wrong estimation of the users' location privacy. In this paper, we address these issues by providing a formal framework for the analysis of LPPMs, it captures, in particular, the prior information that might be available to the attacker, and various attacks that he can perform. The privacy of users and the success of the adversary in his location-inference attacks are two sides of the same coin. We revise location privacy by giving a simple, yet comprehensive, model to formulate all types of location-information disclosure attacks. Thus, by formalizing the adversary's performance, we propose and justify the right metric to quantify location privacy. We clarify the difference between three aspects of the adversary's inference attacks, namely their accuracy, certainty, and correctness. We show that correctness determines the privacy of users. In other words, the expected estimation error of the adversary is the metric of users' location privacy. We rely on well-established statistical methods to formalize and implement the attacks in a tool: the Location-Privacy Meter that measures the location privacy of mobile users, given various LPPMs. In addition to evaluating some example LPPMs, by using our tool, we assess the appropriateness of some popular metrics for location privacy: entropy and k-anonymity. The results show a lack of satisfactory correlation between these two metrics and the success of the adversary in inferring the users' actual locations.",
"Mobile users expose their location to potentially untrusted entities by using location-based services. Based on the frequency of location exposure in these applications, we divide them into two main types: Continuous and Sporadic. These two location exposure types lead to different threats. For example, in the continuous case, the adversary can track users over time and space, whereas in the sporadic case, his focus is more on localizing users at certain points in time. We propose a systematic way to quantify users' location privacy by modeling both the location-based applications and the location-privacy preserving mechanisms (LPPMs), and by considering a well-defined adversary model. This framework enables us to customize the LPPMs to the employed location-based application, in order to provide higher location privacy for the users. In this paper, we formalize localization attacks for the case of sporadic location exposure, using Bayesian inference for Hidden Markov Processes. We also quantify user location privacy with respect to the adversaries with two different forms of background knowledge: Those who only know the geographical distribution of users over the considered regions, and those who also know how users move between the regions (i.e., their mobility pattern). Using the Location-Privacy Meter tool, we examine the effectiveness of the following techniques in increasing the expected error of the adversary in the localization attack: Location obfuscation and fake location injection mechanisms for anonymous traces."
]
} |
1703.00366 | 2952005775 | Information about people's movements and the locations they visit enables an increasing number of mobility analytics applications, e.g., in the context of urban and transportation planning. In this setting, rather than collecting or sharing raw data, entities often use aggregation as a privacy protection mechanism, aiming to hide individual users' location traces. Furthermore, to bound information leakage from the aggregates, they can perturb the input of the aggregation or its output to ensure that these are differentially private. In this paper, we set out to evaluate the impact of releasing aggregate location time-series on the privacy of individuals contributing to the aggregation. We introduce a framework allowing us to reason about privacy against an adversary attempting to predict users' locations or recover their mobility patterns. We formalize these attacks as inference problems, and discuss a few strategies to model the adversary's prior knowledge based on the information she may have access to. We then use the framework to quantify the privacy loss stemming from aggregate location data, with and without the protection of differential privacy, using two real-world mobility datasets. We find that aggregates do leak information about individuals' punctual locations and mobility profiles. The density of the observations, as well as their timing, plays an important role, e.g., regular patterns during peak hours are better protected than sporadic movements. Finally, our evaluation shows that both output and input perturbation offer little additional protection, unless they introduce large amounts of noise, ultimately destroying the utility of the data. | This framework is conceived for evaluating privacy-preserving mechanisms applied to individuals' traces; therefore, the techniques used in their work are not applicable in the context of location privacy-preserving mechanisms based on aggregation. Nonetheless, if we were to compare our framework to 's, we would observe that it not only differs in the modeling of the adversary's prior knowledge, observation, and goal, but is also driven by the definition of new metrics to model the adversary's error in this scenario. Moreover, we introduce new inference attacks tailored to the aggregate scenario and evaluate the impact on privacy of: (i) priors of different nature -- specifically, both assignment and probabilistic priors, whereas only probabilistic ones are considered in @cite_31 @cite_38, (ii) priors based on more or less complete information, and (iii) the sparsity of the location data that should be protected. | {
"cite_N": [
"@cite_38",
"@cite_31"
],
"mid": [
"2120911939",
"1565026843"
],
"abstract": [
"It is a well-known fact that the progress of personal communication devices leads to serious concerns about privacy in general, and location privacy in particular. As a response to these issues, a number of Location-Privacy Protection Mechanisms (LPPMs) have been proposed during the last decade. However, their assessment and comparison remains problematic because of the absence of a systematic method to quantify them. In particular, the assumptions about the attacker's model tend to be incomplete, with the risk of a possibly wrong estimation of the users' location privacy. In this paper, we address these issues by providing a formal framework for the analysis of LPPMs, it captures, in particular, the prior information that might be available to the attacker, and various attacks that he can perform. The privacy of users and the success of the adversary in his location-inference attacks are two sides of the same coin. We revise location privacy by giving a simple, yet comprehensive, model to formulate all types of location-information disclosure attacks. Thus, by formalizing the adversary's performance, we propose and justify the right metric to quantify location privacy. We clarify the difference between three aspects of the adversary's inference attacks, namely their accuracy, certainty, and correctness. We show that correctness determines the privacy of users. In other words, the expected estimation error of the adversary is the metric of users' location privacy. We rely on well-established statistical methods to formalize and implement the attacks in a tool: the Location-Privacy Meter that measures the location privacy of mobile users, given various LPPMs. In addition to evaluating some example LPPMs, by using our tool, we assess the appropriateness of some popular metrics for location privacy: entropy and k-anonymity. The results show a lack of satisfactory correlation between these two metrics and the success of the adversary in inferring the users' actual locations.",
"Mobile users expose their location to potentially untrusted entities by using location-based services. Based on the frequency of location exposure in these applications, we divide them into two main types: Continuous and Sporadic. These two location exposure types lead to different threats. For example, in the continuous case, the adversary can track users over time and space, whereas in the sporadic case, his focus is more on localizing users at certain points in time. We propose a systematic way to quantify users' location privacy by modeling both the location-based applications and the location-privacy preserving mechanisms (LPPMs), and by considering a well-defined adversary model. This framework enables us to customize the LPPMs to the employed location-based application, in order to provide higher location privacy for the users. In this paper, we formalize localization attacks for the case of sporadic location exposure, using Bayesian inference for Hidden Markov Processes. We also quantify user location privacy with respect to the adversaries with two different forms of background knowledge: Those who only know the geographical distribution of users over the considered regions, and those who also know how users move between the regions (i.e., their mobility pattern). Using the Location-Privacy Meter tool, we examine the effectiveness of the following techniques in increasing the expected error of the adversary in the localization attack: Location obfuscation and fake location injection mechanisms for anonymous traces."
]
} |
1702.08681 | 2952474943 | Multi-instance multi-label (MIML) learning has many interesting applications in computer vision, including multi-object recognition and automatic image tagging. In these applications, additional information such as bounding-boxes, image captions and descriptions is often available during the training phase, which is referred to as privileged information (PI). However, as existing works on learning using PI only consider instance-level PI (privileged instances), they fail to make use of bag-level PI (privileged bags) available in MIML learning. Therefore, in this paper, we propose a two-stream fully convolutional network, named MIML-FCN+, unified by a novel PI loss to solve the problem of MIML learning with privileged bags. Compared to the previous works on PI, the proposed MIML-FCN+ utilizes the readily available privileged bags, instead of hard-to-obtain privileged instances, making the system more general and practical in real-world applications. As the proposed PI loss is convex and SGD-compatible and the framework itself is a fully convolutional network, MIML-FCN+ can be easily integrated with state-of-the-art deep learning networks. Moreover, the flexibility of convolutional layers allows us to exploit structured correlations among instances to facilitate more effective training and testing. Experimental results on three benchmark datasets demonstrate the effectiveness of the proposed MIML-FCN+, outperforming state-of-the-art methods in the application of multi-object recognition. | During the past decade, many MIML algorithms have been proposed @cite_3 @cite_35 @cite_4 @cite_25 @cite_21. For example, MIMLSVM @cite_35 degenerates the MIML problem into solving the single-instance multi-label problem, while MIMLBoost @cite_35 degenerates MIML into multi-instance single-label learning, which suggests that MIML is closely related to both multi-instance learning and multi-label learning. Ranking loss had been shown to be effective in multi-label learning, and thus @cite_9 proposed to optimize ranking loss for MIML instance annotation. In terms of generative methods, @cite_1 proposed a Dirichlet-Bernoulli alignment-based model for the MIML learning problem. In contrast, in this work we consider using privileged information to help MIML learning under the deep learning paradigm, which has not been explored before. | {
"cite_N": [
"@cite_35",
"@cite_4",
"@cite_9",
"@cite_21",
"@cite_1",
"@cite_3",
"@cite_25"
],
"mid": [
"2135533176",
"",
"2137917285",
"2293549417",
"2130747910",
"2111478152",
"2154840533"
],
"abstract": [
"In this paper, we formalize multi-instance multi-label learning, where each training example is associated with not only multiple instances but also multiple class labels. Such a problem can occur in many real-world tasks, e.g. an image usually contains multiple patches each of which can be described by a feature vector, and the image can belong to multiple categories since its semantics can be recognized in different ways. We analyze the relationship between multi-instance multi-label learning and the learning frameworks of traditional supervised learning, multi-instance learning and multi-label learning. Then, we propose the MIMLBOOST and MIMLSVM algorithms which achieve good performance in an application to scene classification.",
"",
"Multi-instance multi-label learning (MIML) is a framework for supervised classification where the objects to be classified are bags of instances associated with multiple labels. For example, an image can be represented as a bag of segments and associated with a list of objects it contains. Prior work on MIML has focused on predicting label sets for previously unseen bags. We instead consider the problem of predicting instance labels while learning from data labeled only at the bag level. We propose Rank-Loss Support Instance Machines, which optimize a regularized rank-loss objective and can be instantiated with different aggregation models connecting instance-level predictions with bag-level predictions. The aggregation models that we consider are equivalent to defining a \"support instance\" for each bag, which allows efficient optimization of the rank-loss objective using primal sub-gradient descent. Experiments on artificial and real-world datasets show that the proposed methods achieve higher accuracy than other loss functions used in prior work, e.g., Hamming loss, and recent work in ambiguous label classification.",
"This paper studies the problem of image annotation in a multi-modal setting where both visual and textual information are available. We propose Multimodal Multi-instance Multi-label Latent Dirichlet Allocation (M3LDA), where the model consists of a visual-label part, a textual-label part and a label-topic part. The basic idea is that the topic decided by the visual information and the topic decided by the textual information should be consistent, leading to the correct label assignment. Particularly, M3LDA is able to annotate image regions, thus provides a promising way to understand the relation between input patterns and output semantics. Experiments on Corel5K and ImageCLEF validate the effectiveness of the proposed method.",
"We propose Dirichlet-Bernoulli Alignment (DBA), a generative model for corpora in which each pattern (e.g., a document) contains a set of instances (e.g., paragraphs in the document) and belongs to multiple classes. By casting predefined classes as latent Dirichlet variables (i.e., instance level labels), and modeling the multi-label of each pattern as Bernoulli variables conditioned on the weighted empirical average of topic assignments, DBA automatically aligns the latent topics discovered from data to human-defined classes. DBA is useful for both pattern classification and instance disambiguation, which are tested on text classification and named entity disambiguation in web search queries respectively.",
"In many real world applications we do not have access to fully-labeled training data, but only to a list of possible labels. This is the case, e.g., when learning visual classifiers from images downloaded from the web, using just their text captions or tags as learning oracles. In general, these problems can be very difficult. However most of the time there exist different implicit sources of information, coming from the relations between instances and labels, which are usually dismissed. In this paper, we propose a semi-supervised framework to model this kind of problems. Each training sample is a bag containing multi-instances, associated with a set of candidate labeling vectors. Each labeling vector encodes the possible labels for the instances in the bag, with only one being fully correct. The use of the labeling vectors provides a principled way not to exclude any information. We propose a large margin discriminative formulation, and an efficient algorithm to solve it. Experiments conducted on artificial datasets and a real-world images and captions dataset show that our approach achieves performance comparable to an SVM trained with the ground-truth labels, and outperforms other baselines.",
"In this paper, we address the problem of multi-instance multi-label learning (MIML) where each example is associated with not only multiple instances but also multiple class labels. In our novel approach, given an MIML example, each instance in the example is only associated with a single label and the label set of the example is the aggregation of all instance labels. Many real-world tasks such as scene classification, text categorization and gene sequence encoding can be properly formalized under our proposed approach. We formulate our MIML problem as a combination of two optimizations: (1) a quadratic programming (QP) that minimizes the empirical risk with L2-norm regularization, and (2) an integer programing (IP) assigning each instance to a single label. We also present an efficient method combining the stochastic gradient decent and alternating optimization approaches to solve our QP and IP optimizations. In our experiments with both an artificially generated data set and real-world applications, i.e. scene classification and text categorization, our proposed method achieves superior performance over existing state-of-the-art MIML methods such as MIMLBOOST, MIMLSVM, M @math MIML and MIMLRBF."
]
} |
1702.08681 | 2952474943 | Multi-instance multi-label (MIML) learning has many interesting applications in computer vision, including multi-object recognition and automatic image tagging. In these applications, additional information such as bounding-boxes, image captions and descriptions is often available during the training phase, which is referred to as privileged information (PI). However, as existing works on learning using PI only consider instance-level PI (privileged instances), they fail to make use of bag-level PI (privileged bags) available in MIML learning. Therefore, in this paper, we propose a two-stream fully convolutional network, named MIML-FCN+, unified by a novel PI loss to solve the problem of MIML learning with privileged bags. Compared to the previous works on PI, the proposed MIML-FCN+ utilizes the readily available privileged bags, instead of hard-to-obtain privileged instances, making the system more general and practical in real-world applications. As the proposed PI loss is convex and SGD-compatible and the framework itself is a fully convolutional network, MIML-FCN+ can be easily integrated with state-of-the-art deep learning networks. Moreover, the flexibility of convolutional layers allows us to exploit structured correlations among instances to facilitate more effective training and testing. Experimental results on three benchmark datasets demonstrate the effectiveness of the proposed MIML-FCN+, outperforming state-of-the-art methods in the application of multi-object recognition. | Many computer vision applications, such as scene classification, multi-object recognition, image tagging, and action recognition, can be formulated as MIML problems. For instance, @cite_34 proposed a hidden conditional random field model for MIML image annotation. @cite_35 applied MIML learning to scene classification. Several works @cite_29 @cite_14 @cite_13 also implicitly exploited the MIML nature of the multi-object recognition problem. | {
"cite_N": [
"@cite_35",
"@cite_14",
"@cite_29",
"@cite_34",
"@cite_13"
],
"mid": [
"2135533176",
"2410641892",
"1828658979",
"",
"2101611867"
],
"abstract": [
"In this paper, we formalize multi-instance multi-label learning, where each training example is associated with not only multiple instances but also multiple class labels. Such a problem can occur in many real-world tasks, e.g. an image usually contains multiple patches each of which can be described by a feature vector, and the image can belong to multiple categories since its semantics can be recognized in different ways. We analyze the relationship between multi-instance multi-label learning and the learning frameworks of traditional supervised learning, multi-instance learning and multi-label learning. Then, we propose the MIMLBOOST and MIMLSVM algorithms which achieve good performance in an application to scene classification.",
"Convolutional neural networks (CNNs) have shown great performance as general feature representations for object recognition applications. However, for multi-label images that contain multiple objects from different categories, scales and locations, global CNN features are not optimal. In this paper, we incorporate local information to enhance the feature discriminative power. In particular, we first extract object proposals from each image. With each image treated as a bag and object proposals extracted from it treated as instances, we transform the multi-label recognition problem into a multi-class multi-instance learning problem. Then, in addition to extracting the typical CNN feature representation from each proposal, we propose to make use of ground-truth bounding box annotations (strong labels) to add another level of local information by using nearest-neighbor relationships of local regions to form a multi-view pipeline. The proposed multi-view multiinstance framework utilizes both weak and strong labels effectively, and more importantly it has the generalization ability to even boost the performance of unseen categories by partial strong labels from other categories. Our framework is extensively compared with state-of-the-art handcrafted feature based methods and CNN based methods on two multi-label benchmark datasets. The experimental results validate the discriminative power and the generalization ability of the proposed framework. With strong labels, our framework is able to achieve state-of-the-art results in both datasets.",
"Successful visual object recognition methods typically rely on training datasets containing lots of richly annotated images. Annotating object bounding boxes is both expensive and subjective. We describe a weakly supervised convolutional neural network (CNN) for object recognition that does not rely on detailed object annotation and yet returns 86.3 mAP on the Pascal VOC classification task, outperforming previous fully-supervised systems by a sizable margin. Despite the lack of bounding box supervision, the network produces maps that clearly localize the objects in cluttered scenes. We also show that adding fully supervised object examples to our weakly supervised setup does not increase the classification performance.",
"",
"Weakly supervised learning of object detection is an important problem in image understanding that still does not have a satisfactory solution. In this paper, we address this problem by exploiting the power of deep convolutional neural networks pre-trained on large-scale image-level classification tasks. We propose a weakly supervised deep detection architecture that modifies one such network to operate at the level of image regions, performing simultaneously region selection and classification. Trained as an image classifier, the architecture implicitly learns object detectors that are better than alternative weakly supervised detection systems on the PASCAL VOC data. The model, which is a simple and elegant end-to-end architecture, outperforms standard data augmentation and fine-tuning techniques for the task of image-level classification as well."
]
} |
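To make the MIML setting in the row above concrete (each training example is a bag of instance feature vectors, e.g. object proposals from one image, and the bag carries several binary labels), here is a minimal sketch of the standard max-pooling aggregation from instance scores to bag scores. It is an illustrative toy with random features and a linear scorer, not the MIML-FCN+ model; every name and size in it is invented for the example.

```python
import numpy as np

# Toy MIML bag: a set of instance feature vectors with multiple bag-level labels.
rng = np.random.default_rng(0)
n_instances, n_features, n_classes = 5, 16, 4

bag = rng.normal(size=(n_instances, n_features))    # one bag of instances
W = 0.1 * rng.normal(size=(n_features, n_classes))  # linear instance scorer

instance_scores = bag @ W                           # (n_instances, n_classes)
# Standard MIL assumption: a bag is positive for a class if at least one of
# its instances is, so the bag score is the max over instances per class.
bag_scores = instance_scores.max(axis=0)            # (n_classes,)
bag_labels = (bag_scores > 0).astype(int)
print(bag_scores, bag_labels)
```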
1702.08681 | 2952474943 | Multi-instance multi-label (MIML) learning has many interesting applications in computer vision, including multi-object recognition and automatic image tagging. In these applications, additional information such as bounding-boxes, image captions and descriptions is often available during the training phase, which is referred to as privileged information (PI). However, as existing works on learning using PI only consider instance-level PI (privileged instances), they fail to make use of bag-level PI (privileged bags) available in MIML learning. Therefore, in this paper, we propose a two-stream fully convolutional network, named MIML-FCN+, unified by a novel PI loss to solve the problem of MIML learning with privileged bags. Compared to the previous works on PI, the proposed MIML-FCN+ utilizes the readily available privileged bags, instead of hard-to-obtain privileged instances, making the system more general and practical in real-world applications. As the proposed PI loss is convex and SGD compatible and the framework itself is a fully convolutional network, MIML-FCN+ can be easily integrated with state-of-the-art deep learning networks. Moreover, the flexibility of convolutional layers allows us to exploit structured correlations among instances to facilitate more effective training and testing. Experimental results on three benchmark datasets demonstrate the effectiveness of the proposed MIML-FCN+, outperforming state-of-the-art methods in the application of multi-object recognition. | LUPI assumes there are additional data available during training, i.e., privileged information (PI), which are not available at test time. Vapnik and Vashist @cite_7 proposed an SVM+ formulation that exploits PI as slack variables during training to "teach" students to learn a better classification model. The idea was later developed into two schemes: similarity control and knowledge transfer @cite_17. LUPI has also been utilized in metric learning @cite_16, learning to rank @cite_32 and multi-instance learning @cite_22. A few works have applied PI to computer vision applications. For example, @cite_22 applied PI for web image recognition. @cite_32 applied PI for image ranking and retrieval. However, most of the existing PI works consider only instance-level PI and are still based on the SVM+ formulation, which is hard to incorporate into a deep learning framework in an end-to-end fashion. In this work, we address all these limitations with a two-stream fully convolutional network and a new PI loss. | {
"cite_N": [
"@cite_22",
"@cite_7",
"@cite_32",
"@cite_16",
"@cite_17"
],
"mid": [
"385190221",
"2506404347",
"2126942721",
"2136331731",
"2173379916"
],
"abstract": [
"Relevant and irrelevant web images collected by tag-based image retrieval have been employed as loosely labeled training data for learning SVM classifiers for image categorization by only using the visual features. In this work, we propose a new image categorization method by incorporating the textual features extracted from the surrounding textual descriptions (tags, captions, categories, etc.) as privileged information and simultaneously coping with noise in the loose labels of training web images. When the training and test samples come from different datasets, our proposed method can be further extended to reduce the data distribution mismatch by adding a regularizer based on the Maximum Mean Discrepancy (MMD) criterion. Our comprehensive experiments on three benchmark datasets demonstrate the effectiveness of our proposed methods for image categorization and image retrieval by exploiting privileged information from web data.",
"In the Afterword to the second edition of the book \"Estimation of Dependences Based on Empirical Data\" by V. Vapnik, an advanced learning paradigm called Learning Using Hidden Information (LUHI) was introduced. This Afterword also suggested an extension of the SVM method (the so called SVM γ + method) to implement algorithms which address the LUHI paradigm (Vapnik, 1982-2006, Sections 2.4.2 and 2.5.3 of the Afterword). See also (Vapnik, Vashist, & Pavlovitch, 2008, 2009) for further development of the algorithms. In contrast to the existing machine learning paradigm where a teacher does not play an important role, the advanced learning paradigm considers some elements of human teaching. In the new paradigm along with examples, a teacher can provide students with hidden information that exists in explanations, comments, comparisons, and so on. This paper discusses details of the new paradigm 1 and corresponding algorithms, introduces some new algorithms, considers several specific forms of privileged information, demonstrates superiority of the new learning paradigm over the classical learning paradigm when solving practical problems, and discusses general questions related to the new ideas.",
"Many computer vision problems have an asymmetric distribution of information between training and test time. In this work, we study the case where we are given additional information about the training data, which however will not be available at test time. This situation is called learning using privileged information (LUPI). We introduce two maximum-margin techniques that are able to make use of this additional source of information, and we show that the framework is applicable to several scenarios that have been studied in computer vision before. Experiments with attributes, bounding boxes, image tags and rationales as additional information in object classification show promising results.",
"In some pattern analysis problems, there exists expert knowledge, in addition to the original data involved in the classification process. The vast majority of existing approaches simply ignore such auxiliary (privileged) knowledge. Recently a new paradigm-learning using privileged information-was introduced in the framework of SVM+. This approach is formulated for binary classification and, as typical for many kernel-based methods, can scale unfavorably with the number of training examples. While speeding up training methods and extensions of SVM+ to multiclass problems are possible, in this paper we present a more direct novel methodology for incorporating valuable privileged knowledge in the model construction phase, primarily formulated in the framework of generalized matrix learning vector quantization. This is done by changing the global metric in the input space, based on distance relations revealed by the privileged information. Hence, unlike in SVM+, any convenient classifier can be used after such metric modification, bringing more flexibility to the problem of incorporating privileged information during the training. Experiments demonstrate that the manipulation of an input space metric based on privileged data improves classification accuracy. Moreover, our methods can achieve competitive performance against the SVM+ formulations.",
"This paper describes a new paradigm of machine learning, in which Intelligent Teacher is involved. During training stage, Intelligent Teacher provides Student with information that contains, along with classification of each example, additional privileged information (for example, explanation) of this example. The paper describes two mechanisms that can be used for significantly accelerating the speed of Student's learning using privileged information: (1) correction of Student's concepts of similarity between examples, and (2) direct Teacher-Student knowledge transfer."
]
} |
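The LUPI discussion in the row above is easier to follow with a loss in hand. Below is a hedged sketch of one convex, SGD-compatible way to pair a task loss with a privileged-information term in the teacher-student spirit described there: the privileged stream's scores pull the main stream's scores toward them. This is not the exact PI loss of the paper; the scores, labels, and the trade-off weight `lam` are all made up for illustration.

```python
import numpy as np

def squared_hinge(scores, labels):
    # Multi-label squared hinge loss; labels take values in {-1, +1}.
    return np.maximum(0.0, 1.0 - labels * scores) ** 2

# Hypothetical per-label scores for one bag from the two streams.
main_scores = np.array([0.7, -0.2, 0.1, -0.9])  # from regular features
priv_scores = np.array([0.9, -0.5, 0.4, -1.1])  # from privileged features
labels = np.array([1, -1, 1, -1])

lam = 0.5  # assumed trade-off hyperparameter
task_loss = squared_hinge(main_scores, labels).sum()
# PI term: the privileged ("teacher") stream regularizes the main stream by
# penalizing disagreement between the two score vectors.
pi_loss = lam * np.sum((main_scores - priv_scores) ** 2)
print(task_loss + pi_loss)
```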
1702.08634 | 2951493246 | We introduce a novel semi-supervised video segmentation approach based on an efficient video representation, called "super-trajectory". Each super-trajectory corresponds to a group of compact trajectories that exhibit consistent motion patterns, similar appearance and close spatiotemporal relationships. We generate trajectories using a probabilistic model, which handles occlusions and drifts in a robust and natural way. To reliably group trajectories, we adopt a modified version of the density peaks based clustering algorithm that allows capturing rich spatiotemporal relations among trajectories in the clustering process. The presented video representation is discriminative enough to accurately propagate the initial annotations in the first frame onto the remaining video frames. Extensive experimental analysis on challenging benchmarks demonstrates that our method is capable of distinguishing the target objects from complex backgrounds and even re-identifying them after long-term occlusions. | Unsupervised algorithms @cite_41 @cite_47 @cite_50 @cite_30 @cite_31 do not require manual annotations but often rely on certain limiting assumptions about the application scenario. Some techniques @cite_5 @cite_19 @cite_24 emphasize the importance of motion information. More specifically, @cite_5 @cite_19 analyze long-term motion information via trajectories, then solve the segmentation as a trajectory clustering problem. The works @cite_25 @cite_4 @cite_38 introduce saliency information @cite_29 as prior knowledge to infer the object. Recently, @cite_45 @cite_21 @cite_17 @cite_22 @cite_14 generate object segments via ranking several object candidates. | {
"cite_N": [
"@cite_30",
"@cite_38",
"@cite_31",
"@cite_14",
"@cite_4",
"@cite_22",
"@cite_41",
"@cite_29",
"@cite_21",
"@cite_24",
"@cite_19",
"@cite_45",
"@cite_50",
"@cite_5",
"@cite_47",
"@cite_25",
"@cite_17"
],
"mid": [
"",
"2585592883",
"1586994488",
"2467181293",
"1954128991",
"2950249112",
"",
"2520274358",
"2156252543",
"2113708607",
"2167331599",
"1989348325",
"",
"1496571393",
"159595522",
"2402395722",
"2155598147"
],
"abstract": [
"",
"Video saliency, aiming for estimation of a single dominant object in a sequence, offers strong object-level cues for unsupervised video object segmentation. In this paper, we present a geodesic distance based technique that provides reliable and temporally consistent saliency measurement of superpixels as a prior for pixel-wise labeling. Using undirected intra-frame and inter-frame graphs constructed from spatiotemporal edges or appearance and motion, and a skeleton abstraction step to further enhance saliency estimates, our method formulates the pixel-wise segmentation task as an energy minimization problem on a function that consists of unary terms of global foreground and background models, dynamic location models, and pairwise terms of label smoothness potentials. We perform extensive quantitative and qualitative experiments on benchmark datasets. Our method achieves superior performance in comparison to the current state-of-the-art in terms of accuracy and speed.",
"With ever-increasing volumes of video data, automatic extraction of salient object regions became even more significant for visual analytic solutions. This surge has also opened up opportunities for taking advantage of collective cues encapsulated in multiple videos in a cooperative manner. However, it also brings up major challenges, such as handling of drastic appearance, motion pattern, and pose variations, of foreground objects as well as indiscriminate backgrounds. Here, we present a cosegmentation framework to discover and segment out common object regions across multiple frames and multiple videos in a joint fashion. We incorporate three types of cues, i.e., intraframe saliency, interframe consistency, and across-video similarity into an energy optimization framework that does not make restrictive assumptions on foreground appearance and motion model, and does not require objects to be visible in all frames. We also introduce a spatio-temporal scale-invariant feature transform (SIFT) flow descriptor to integrate across-video correspondence from the conventional SIFT-flow into interframe motion flow from optical flow. This novel spatio-temporal SIFT flow generates reliable estimations of common foregrounds over the entire video data set. Experimental results show that our method outperforms the state-of-the-art on a new extensive data set (ViCoSeg).",
"We present an unsupervised approach that generates a diverse, ranked set of bounding box and segmentation video object proposals—spatio-temporal tubes that localize the foreground objects—in an unannotated video. In contrast to previous unsupervised methods that either track regions initialized in an arbitrary frame or train a fixed model over a cluster of regions, we instead discover a set of easy-togroup instances of an object and then iteratively update its appearance model to gradually detect harder instances in temporally-adjacent frames. Our method first generates a set of spatio-temporal bounding box proposals, and then refines them to obtain pixel-wise segmentation proposals. We demonstrate state-of-the-art segmentation results on the SegTrack v2 dataset, and bounding box tracking results that perform competitively to state-of-the-art supervised tracking methods.",
"We introduce an unsupervised, geodesic distance based, salient video object segmentation method. Unlike traditional methods, our method incorporates saliency as prior for object via the computation of robust geodesic measurement. We consider two discriminative visual features: spatial edges and temporal motion boundaries as indicators of foreground object locations. We first generate framewise spatiotemporal saliency maps using geodesic distance from these indicators. Building on the observation that foreground areas are surrounded by the regions with high spatiotemporal edge values, geodesic distance provides an initial estimation for foreground and background. Then, high-quality saliency results are produced via the geodesic distances to background regions in the subsequent frames. Through the resulting saliency maps, we build global appearance models for foreground and background. By imposing motion continuity, we establish a dynamic location model for each frame. Finally, the spatiotemporal saliency maps, appearance models and dynamic location models are combined into an energy minimization framework to attain both spatially and temporally coherent object segmentation. Extensive quantitative and qualitative experiments on benchmark video dataset demonstrate the superiority of the proposed method over the state-of-the-art algorithms.",
"We segment moving objects in videos by ranking spatio-temporal segment proposals according to \"moving objectness\": how likely they are to contain a moving object. In each video frame, we compute segment proposals using multiple figure-ground segmentations on per frame motion boundaries. We rank them with a Moving Objectness Detector trained on image and motion fields to detect moving objects and discard over under segmentations or background parts of the scene. We extend the top ranked segments into spatio-temporal tubes using random walkers on motion affinities of dense point trajectories. Our final tube ranking consistently outperforms previous segmentation methods in the two largest video segmentation benchmarks currently available, for any number of proposals. Further, our per frame moving object proposals increase the detection rate up to 7 over previous state-of-the-art static proposal methods.",
"",
"In this paper, we show that large annotated data sets have great potential to provide strong priors for saliency estimation rather than merely serving for benchmark evaluations. To this end, we present a novel image saliency detection method called saliency transfer. Given an input image, we first retrieve a support set of best matches from the large database of saliency annotated images. Then, we assign the transitional saliency scores by warping the support set annotations onto the input image according to computed dense correspondences. To incorporate context, we employ two complementary correspondence strategies: a global matching scheme based on scene-level analysis and a local matching scheme based on patch-level inference. We then introduce two refinement measures to further refine the saliency maps and apply the random-walk-with-restart by exploring the global saliency structure to estimate the affinity between foreground and background assignments. Extensive experimental results on four publicly available benchmark data sets demonstrate that the proposed saliency algorithm consistently outperforms the current state-of-the-art methods.",
"In this paper, we address the problem of video object segmentation, which is to automatically identify the primary object and segment the object out in every frame. We propose a novel formulation of selecting object region candidates simultaneously in all frames as finding a maximum weight clique in a weighted region graph. The selected regions are expected to have high objectness score (unary potential) as well as share similar appearance (binary potential). Since both unary and binary potentials are unreliable, we introduce two types of mutex (mutual exclusion) constraints on regions in the same clique: intra-frame and inter-frame constraints. Both types of constraints are expressed in a single quadratic form. We propose a novel algorithm to compute the maximal weight cliques that satisfy the constraints. We apply our method to challenging benchmark videos and obtain very competitive results that outperform state-of-the-art methods.",
"We present a technique for separating foreground objects from the background in a video. Our method is fast, fully automatic, and makes minimal assumptions about the video. This enables handling essentially unconstrained settings, including rapidly moving background, arbitrary object motion and appearance, and non-rigid deformations and articulations. In experiments on two datasets containing over 1400 video shots, our method outperforms a state-of-the-art background subtraction technique [4] as well as methods based on clustering point tracks [6, 18, 19]. Moreover, it performs comparably to recent video object segmentation methods based on object proposals [14, 16, 27], while being orders of magnitude faster.",
"Our goal is to segment a video sequence into moving objects and the world scene. In recent work, spectral embedding of point trajectories based on 2D motion cues accumulated from their lifespans, has shown to outperform factorization and per frame segmentation methods for video segmentation. The scale and kinematic nature of the moving objects and the background scene determine how close or far apart trajectories are placed in the spectral embedding. Such density variations may confuse clustering algorithms, causing over-fragmentation of object interiors. Therefore, instead of clustering in the spectral embedding, we propose detecting discontinuities of embedding density between spatially neighboring trajectories. Detected discontinuities are strong indicators of object boundaries and thus valuable for video segmentation. We propose a novel embedding discretization process that recovers from over-fragmentations by merging clusters according to discontinuity evidence along inter-cluster boundaries. For segmenting articulated objects, we combine motion grouping cues with a center-surround saliency operation, resulting in “context-aware”, spatially coherent, saliency maps. Figure-ground segmentation obtained from saliency thresholding, provides object connectedness constraints that alter motion based trajectory affinities, by keeping articulated parts together and separating disconnected in time objects. Finally, we introduce Gabriel graphs as effective per frame superpixel maps for converting trajectory clustering to dense image segmentation. Gabriel edges bridge large contour gaps via geometric reasoning without over-segmenting coherent image regions. We present experimental results of our method that outperform the state-of-the-art in challenging motion segmentation datasets.",
"We present an approach to discover and segment foreground object(s) in video. Given an unannotated video sequence, the method first identifies object-like regions in any frame according to both static and dynamic cues. We then compute a series of binary partitions among those candidate “key-segments” to discover hypothesis groups with persistent appearance and motion. Finally, using each ranked hypothesis in turn, we estimate a pixel-level object labeling across all frames, where (a) the foreground likelihood depends on both the hypothesis's appearance as well as a novel localization prior based on partial shape matching, and (b) the background likelihood depends on cues pulled from the key-segments' (possibly diverse) surroundings observed across the sequence. Compared to existing methods, our approach automatically focuses on the persistent foreground regions of interest while resisting oversegmentation. We apply our method to challenging benchmark videos, and show competitive or better results than the state-of-the-art.",
"",
"Unsupervised learning requires a grouping step that defines which data belong together. A natural way of grouping in images is the segmentation of objects or parts of objects. While pure bottom-up segmentation from static cues is well known to be ambiguous at the object level, the story changes as soon as objects move. In this paper, we present a method that uses long term point trajectories based on dense optical flow. Defining pair-wise distances between these trajectories allows to cluster them, which results in temporally consistent segmentations of moving objects in a video shot. In contrast to multi-body factorization, points and even whole objects may appear or disappear during the shot. We provide a benchmark dataset and an evaluation method for this so far uncovered setting.",
"The use of video segmentation as an early processing step in video analysis lags behind the use of image segmentation for image analysis, despite many available video segmentation methods. A major reason for this lag is simply that videos are an order of magnitude bigger than images; yet most methods require all voxels in the video to be loaded into memory, which is clearly prohibitive for even medium length videos. We address this limitation by proposing an approximation framework for streaming hierarchical video segmentation motivated by data stream algorithms: each video frame is processed only once and does not change the segmentation of previous frames. We implement the graph-based hierarchical segmentation method within our streaming framework; our method is the first streaming hierarchical video segmentation method proposed. We perform thorough experimental analysis on a benchmark video data set and longer videos. Our results indicate the graph-based streaming hierarchical method outperforms other streaming video segmentation methods and performs nearly as well as the full-video hierarchical graph-based method.",
"We address the problem of Foreground Background segmentation of “unconstrained” video. By “unconstrained” we mean that the moving objects and the background scene may be highly non-rigid (e.g., waves in the sea); the camera may undergo a complex motion with 3D parallax; moving objects may suffer from motion blur, large scale and illumination changes, etc. Most existing segmentation methods fail on such unconstrained videos, especially in the presence of highly non-rigid motion and low resolution. We propose a computationally efficient algorithm which is able to produce accurate results on a large variety of unconstrained videos. This is obtained by casting the video segmentation problem as a voting scheme on the graph of similar (‘re-occurring’) regions in the video sequence. We start from crude saliency votes at each pixel, and iteratively correct those votes by ‘consensus voting’ of re-occurring regions across the video sequence. The power of our consensus voting comes from the non-locality of the region re-occurrence, both in space and in time – enabling propagation of diverse and rich information across the entire video sequence. Qualitative and quantitative experiments indicate that our approach outperforms current state-of-the-art methods.",
"In this paper, we propose a novel approach to extract primary object segments in videos in the object proposal' domain. The extracted primary object regions are then used to build object models for optimized video segmentation. The proposed approach has several contributions: First, a novel layered Directed Acyclic Graph (DAG) based framework is presented for detection and segmentation of the primary object in video. We exploit the fact that, in general, objects are spatially cohesive and characterized by locally smooth motion trajectories, to extract the primary object from the set of all available proposals based on motion, appearance and predicted-shape similarity across frames. Second, the DAG is initialized with an enhanced object proposal set where motion based proposal predictions (from adjacent frames) are used to expand the set of object proposals for a particular frame. Last, the paper presents a motion scoring function for selection of object proposals that emphasizes high optical flow gradients at proposal boundaries to discriminate between moving objects and the background. The proposed approach is evaluated using several challenging benchmark videos and it outperforms both unsupervised and supervised state-of-the-art methods."
]
} |
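The row above groups trajectories with a modified density peaks clustering. For reference, here is a compact sketch of the original algorithm (Rodriguez and Laio, 2014) on 2-D points standing in for trajectory descriptors: each point's local density `rho` and its distance `delta` to the nearest denser point are computed, points where both are large become cluster centers, and the rest inherit the label of their nearest denser neighbor. The cutoff `d_c` and the toy data are assumptions, and the paper's variant additionally encodes spatiotemporal relations that this sketch omits.

```python
import numpy as np

def density_peaks(X, d_c, n_clusters):
    """Minimal density peaks clustering (Rodriguez & Laio, 2014)."""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    rho = (D < d_c).sum(axis=1).astype(float) - 1.0  # local density (cutoff kernel)
    rho += 1e-9 * np.random.rand(n)                  # break ties: strict ordering
    delta = np.full(n, D.max())                      # distance to nearest denser point
    parent = np.arange(n)
    for i in range(n):
        denser = np.where(rho > rho[i])[0]
        if len(denser):
            j = denser[np.argmin(D[i, denser])]
            delta[i], parent[i] = D[i, j], j
    # Centers combine high density with high separation from denser points.
    centers = np.argsort(rho * delta)[-n_clusters:]
    labels = np.full(n, -1)
    labels[centers] = np.arange(n_clusters)
    # Assign remaining points, in decreasing density, to their parent's cluster.
    for i in np.argsort(-rho):
        if labels[i] < 0:
            labels[i] = labels[parent[i]]
    return labels

X = np.vstack([np.random.randn(40, 2), np.random.randn(40, 2) + 6.0])
print(density_peaks(X, d_c=1.0, n_clusters=2))
```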
1702.08634 | 2951493246 | We introduce a novel semi-supervised video segmentation approach based on an efficient video representation, called "super-trajectory". Each super-trajectory corresponds to a group of compact trajectories that exhibit consistent motion patterns, similar appearance and close spatiotemporal relationships. We generate trajectories using a probabilistic model, which handles occlusions and drifts in a robust and natural way. To reliably group trajectories, we adopt a modified version of the density peaks based clustering algorithm that allows capturing rich spatiotemporal relations among trajectories in the clustering process. The presented video representation is discriminative enough to accurately propagate the initial annotations in the first frame onto the remaining video frames. Extensive experimental analysis on challenging benchmarks demonstrates that our method is capable of distinguishing the target objects from complex backgrounds and even re-identifying them after long-term occlusions. | Semi-supervised video segmentation, also referred to as label propagation, is usually achieved via propagating human annotation specified on one or a few key-frames onto the entire video sequence @cite_33 @cite_16 @cite_46 @cite_15 @cite_3 @cite_2 @cite_48 @cite_26. These methods mainly use flow-based random field propagation models @cite_23, patch-seams based propagation strategies @cite_49, energy optimizations over graph models @cite_28, joint segmentation and detection frameworks @cite_43, or pixel segmentation in bilateral space @cite_18. | {
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_33",
"@cite_15",
"@cite_28",
"@cite_48",
"@cite_3",
"@cite_43",
"@cite_23",
"@cite_2",
"@cite_49",
"@cite_46",
"@cite_16"
],
"mid": [
"2463175074",
"",
"2163747463",
"",
"2212077366",
"2200599981",
"",
"1904248166",
"1560354729",
"1948751323",
"2009874829",
"2034740917",
"2011953904"
],
"abstract": [
"In this work, we propose a novel approach to video segmentation that operates in bilateral space. We design a new energy on the vertices of a regularly sampled spatiotemporal bilateral grid, which can be solved efficiently using a standard graph cut label assignment. Using a bilateral formulation, the energy that we minimize implicitly approximates long-range, spatio-temporal connections between pixels while still containing only a small number of variables and only local graph edges. We compare to a number of recent methods, and show that our approach achieves state-of-the-art results on multiple benchmarks in a fraction of the runtime. Furthermore, our method scales linearly with image size, allowing for interactive feedback on real-world high resolution video.",
"",
"This paper presents an approach to unsupervised segmentation of moving and static objects occurring in a video. Objects are, in general, spatially cohesive and characterized by locally smooth motion trajectories. Therefore, they occupy regions within each frame, while the shape and location of these regions vary slowly from frame to frame. Thus, video segmentation can be done by tracking regions across the frames such that the resulting tracks are locally smooth. To this end, we use a low-level segmentation to extract regions in all frames, and then we transitively match and cluster the similar regions across the video. The similarity is defined with respect to the region photometric, geometric, and motion properties. We formulate a new circular dynamic-time warping (CDTW) algorithm that generalizes DTW to match closed boundaries of two regions, without compromising DTW's guarantees of achieving the optimal solution with linear complexity. Our quantitative evaluation and comparison with the state of the art suggest that the proposed approach is a competitive alternative to currently prevailing point-based methods.",
"",
"We present a novel approach to video segmentation using multiple object proposals. The problem is formulated as a minimization of a novel energy function defined over a fully connected graph of object proposals. Our model combines appearance with long-range point tracks, which is key to ensure robustness with respect to fast motion and occlusions over longer video sequences. As opposed to previous approaches based on object proposals, we do not seek the best per-frame object hypotheses to perform the segmentation. Instead, we combine multiple, potentially imperfect proposals to improve overall segmentation accuracy and ensure robustness to outliers. Overall, the basic algorithm consists of three steps. First, we generate a very large number of object proposals for each video frame using existing techniques. Next, we perform an SVM-based pruning step to retain only high quality proposals with sufficiently discriminative power. Finally, we determine the fore-and background classification by solving for the maximum a posteriori of a fully connected conditional random field, defined using our novel energy function. Experimental results on a well established dataset demonstrate that our method compares favorably to several recent state-of-the-art approaches.",
"As the use of videos is becoming more popular in computer vision, the need for annotated video datasets increases. Such datasets are required either as training data or simply as ground truth for benchmark datasets. A particular challenge in video segmentation is due to disocclusions, which hamper frame-to-frame propagation, in conjunction with non-moving objects. We show that a combination of motion from point trajectories, as known from motion segmentation, along with minimal supervision can largely help solve this problem. Moreover, we integrate a new constraint that enforces consistency of the color distribution in successive frames. We quantify user interaction effort with respect to segmentation quality on challenging ego motion videos. We compare our approach to a diverse set of algorithms in terms of user effort and in terms of performance on common video segmentation benchmarks.",
"",
"We present a novel Joint Online Tracking and Segmentation (JOTS) algorithm which integrates the multi-part tracking and segmentation into a unified energy optimization framework to handle the video segmentation task. The multi-part segmentation is posed as a pixel-level label assignment task with regularization according to the estimated part models, and tracking is formulated as estimating the part models based on the pixel labels, which in turn is used to refine the model. The multi-part tracking and segmentation are carried out iteratively to minimize the proposed objective function by a RANSAC-style approach. Extensive experiments on the SegTrack and SegTrack v2 databases demonstrate that the proposed algorithm performs favorably against the state-of-the-art methods.",
"Manually segmenting and labeling objects in video sequences is quite tedious, yet such annotations are valuable for learning-based approaches to object and activity recognition. While automatic label propagation can help, existing methods simply propagate annotations from arbitrarily selected frames (e.g., the first one) and so may fail to best leverage the human effort invested. We define an active frame selection problem: select k frames for manual labeling, such that automatic pixel-level label propagation can proceed with minimal expected error. We propose a solution that directly ties a joint frame selection criterion to the predicted errors of a flow-based random field propagation model. It selects the set of k frames that together minimize the total mislabeling risk over the entire sequence. We derive an efficient dynamic programming solution to optimize the criterion. Further, we show how to automatically determine how many total frames k should be labeled in order to minimize the total manual effort spent labeling and correcting propagation errors. We demonstrate our method's clear advantages over several baselines, saving hours of human effort per video.",
"Recognition algorithms based on convolutional networks (CNNs) typically use the output of the last layer as a feature representation. However, the information in this layer may be too coarse spatially to allow precise localization. On the contrary, earlier layers may be precise in localization but will not capture semantics. To get the best of both worlds, we define the hypercolumn at a pixel as the vector of activations of all CNN units above that pixel. Using hypercolumns as pixel descriptors, we show results on three fine-grained localization tasks: simultaneous detection and segmentation [22], where we improve state-of-the-art from 49.7 mean APr [22] to 60.0, keypoint localization, where we get a 3.3 point boost over [20], and part labeling, where we show a 6.6 point gain over a strong baseline.",
"In this paper, we propose a technique for video object segmentation using patch seams across frames. Typically, seams, which are connected paths of low energy, are utilised for retargeting, where the primary aim is to reduce the image size while preserving the salient image contents. Here, we adapt the formulation of seams for temporal label propagation. The energy function associated with the proposed video seams provides temporal linking of patches across frames, to accurately segment the object. The proposed energy function takes into account the similarity of patches along the seam, temporal consistency of motion and spatial coherency of seams. Label propagation is achieved with high fidelity in the critical boundary regions, utilising the proposed patch seams. To achieve this without additional overheads, we curtail the error propagation by formulating boundary regions as rough-sets. The proposed approach out-perform state-of-the-art supervised and unsupervised algorithms, on benchmark datasets.",
"",
"This paper proposes a probabilistic graphical model for the problem of propagating labels in video sequences, also termed the label propagation problem. Given a limited amount of hand labelled pixels, typically the start and end frames of a chunk of video, an EM based algorithm propagates labels through the rest of the frames of the video sequence. As a result, the user obtains pixelwise labelled video sequences along with the class probabilities at each pixel. Our novel algorithm provides an essential tool to reduce tedious hand labelling of video sequences, thus producing copious amounts of useable ground truth data. A novel application of this algorithm is in semi-supervised learning of discriminative classifiers for video segmentation and scene parsing. The label propagation scheme can be based on pixel-wise correspondences obtained from motion estimation, image patch based similarities as seen in epitomic models or even the more recent, semantically consistent hierarchical regions. We compare the abilities of each of these variants, both via quantitative and qualitative studies against ground truth data. We then report studies on a state of the art Random forest classifier based video segmentation scheme, trained using fully ground truth data and with data obtained from label propagation. The results of this study strongly support and encourage the use of the proposed label propagation algorithm."
]
} |
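Most propagation pipelines cited in the row above share the same first step: carry the previous frame's mask along a dense flow field before any random-field or graph-cut refinement. The sketch below shows only that warping step, using nearest-neighbor sampling and a hand-made constant flow; real systems estimate the flow and then clean up the propagated labels, as the abstracts above describe.

```python
import numpy as np

def propagate_mask(prev_mask, flow):
    """Warp a binary mask to the next frame with a dense backward flow field.

    flow[y, x] = (dx, dy) maps pixel (x, y) of the *next* frame back to its
    source location in the previous frame, so every output pixel is filled
    (no holes, unlike forward splatting).
    """
    h, w = prev_mask.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return prev_mask[src_y, src_x]

mask = np.zeros((6, 8), dtype=np.uint8)
mask[2:4, 2:5] = 1                      # a small foreground blob
flow = np.full((6, 8, 2), -1.0)         # object moved one pixel right and down
print(propagate_mask(mask, flow))       # blob reappears shifted by (1, 1)
```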
1702.08634 | 2951493246 | We introduce a novel semi-supervised video segmentation approach based on an efficient video representation, called "super-trajectory". Each super-trajectory corresponds to a group of compact trajectories that exhibit consistent motion patterns, similar appearance and close spatiotemporal relationships. We generate trajectories using a probabilistic model, which handles occlusions and drifts in a robust and natural way. To reliably group trajectories, we adopt a modified version of the density peaks based clustering algorithm that allows capturing rich spatiotemporal relations among trajectories in the clustering process. The presented video representation is discriminative enough to accurately propagate the initial annotations in the first frame onto the remaining video frames. Extensive experimental analysis on challenging benchmarks demonstrates that our method is capable of distinguishing the target objects from complex backgrounds and even re-identifying them after long-term occlusions. | Point trajectories are generated by tracking points over multiple frames and have the advantage of representing long-term motion information. Kanade-Lucas-Tomasi (KLT) @cite_39 is among the most popular methods that track a small number of feature points. Inspiring several follow-up studies in video segmentation and action recognition, optical flow based dense trajectories @cite_34 improve over sparse interest point tracking. In particular, @cite_12 @cite_35 @cite_9 introduce dense trajectories for action recognition. Other methods @cite_5 @cite_36 @cite_20 @cite_27 @cite_19 @cite_44 @cite_11 @cite_42 @cite_37 address the problem of unsupervised video segmentation, in which case the problem can also be described as motion segmentation. These methods usually track points via dense optical flow and perform segmentation by clustering trajectories. | {
"cite_N": [
"@cite_35",
"@cite_37",
"@cite_36",
"@cite_9",
"@cite_42",
"@cite_39",
"@cite_44",
"@cite_19",
"@cite_27",
"@cite_5",
"@cite_34",
"@cite_20",
"@cite_12",
"@cite_11"
],
"mid": [
"2105101328",
"2187051054",
"2172093828",
"1944615693",
"2197046994",
"2130103520",
"2012184117",
"2167331599",
"",
"1496571393",
"1530781137",
"2068994826",
"",
"2076756823"
],
"abstract": [
"Recently dense trajectories were shown to be an efficient video representation for action recognition and achieved state-of-the-art results on a variety of datasets. This paper improves their performance by taking into account camera motion to correct them. To estimate camera motion, we match feature points between frames using SURF descriptors and dense optical flow, which are shown to be complementary. These matches are, then, used to robustly estimate a homography with RANSAC. Human motion is in general different from camera motion and generates inconsistent matches. To improve the estimation, a human detector is employed to remove these matches. Given the estimated camera motion, we remove trajectories consistent with it. We also use this estimation to cancel out camera motion from the optical flow. This significantly improves motion-based descriptors, such as HOF and MBH. Experimental results on four challenging action datasets (i.e., Hollywood2, HMDB51, Olympic Sports and UCF50) significantly outperform the current state of the art.",
"In this paper, we propose a novel approach to segment moving object in video by utilizing improved point trajectories . First, point trajectories are densely sampled from video and tracked through optical flow, which provides information of long-term temporal interactions among objects in the video sequence . Second, a novel affinity measurement method considering both global and local information of point trajectories is proposed to cluster trajectories into groups. Finally, we propose a new graph-based segmentation method which adopts both local and global motion information encoded by the tracked dense point trajectories. The proposed approach achieves good performance on trajectory clustering, and it also obtains accurate video object segmentation results on both the Moseg dataset and our new dataset containing more challenging videos.",
"We propose a detection-free system for segmenting multiple interacting and deforming people in a video. People detectors often fail under close agent interaction, limiting the performance of detection based tracking methods. Motion information often fails to separate similarly moving agents or to group distinctly moving articulated body parts. We formulate video segmentation as graph partitioning in the trajectory domain. We classify trajectories as foreground or background based on trajectory saliencies, and use foreground trajectories as graph nodes. We incorporate object connectedness constraints into our trajectory weight matrix based on topology of foreground: we set repulsive weights between trajectories that belong to different connected components in any frame of their time intersection. Attractive weights are set between similarly moving trajectories. Information from foreground topology complements motion information and our spatiotemporal segments can be interpreted as connected moving entities rather than just trajectory groups of similar motion. All our cues are computed on trajectories and naturally encode large temporal context, which is crucial for resolving local in time ambiguities. We present results of our approach on challenging datasets outperforming by far the state of the art.",
"Visual features are of vital importance for human action understanding in videos. This paper presents a new video representation, called trajectory-pooled deep-convolutional descriptor (TDD), which shares the merits of both hand-crafted features [31] and deep-learned features [24]. Specifically, we utilize deep architectures to learn discriminative convolutional feature maps, and conduct trajectory-constrained pooling to aggregate these convolutional features into effective descriptors. To enhance the robustness of TDDs, we design two normalization methods to transform convolutional feature maps, namely spatiotemporal normalization and channel normalization. The advantages of our features come from (i) TDDs are automatically learned and contain high discriminative capacity compared with those hand-crafted features; (ii) TDDs take account of the intrinsic characteristics of temporal dimension and introduce the strategies of trajectory-constrained sampling and pooling for aggregating deep-learned features. We conduct experiments on two challenging datasets: HMD-B51 and UCF101. Experimental results show that TDDs outperform previous hand-crafted features [31] and deep-learned features [24]. Our method also achieves superior performance to the state of the art on these datasets.",
"For the segmentation of moving objects in videos, the analysis of long-term point trajectories has been very popular recently. In this paper, we formulate the segmentation of a video sequence based on point trajectories as a minimum cost multicut problem. Unlike the commonly used spectral clustering formulation, the minimum cost multicut formulation gives natural rise to optimize not only for a cluster assignment but also for the number of clusters while allowing for varying cluster sizes. In this setup, we provide a method to create a long-term point trajectory graph with attractive and repulsive binary terms and outperform state-of-the-art methods based on spectral clustering on the FBMS-59 dataset and on the motion subtask of the VSB100 dataset.",
"No feature-based vision system can work unless good features can be identified and tracked from frame to frame. Although tracking itself is by and large a solved problem, selecting features that can be tracked well and correspond to physical points in the world is still hard. We propose a feature selection criterion that is optimal by construction because it is based on how the tracker works, and a feature monitoring method that can detect occlusions, disocclusions, and features that do not correspond to points in the world. These methods are based on a new tracking algorithm that extends previous Newton-Raphson style search methods to work under affine image transformations. We test performance with several simulations and experiments. >",
"Motion segmentation based on point trajectories can integrate information of a whole video shot to detect and separate moving objects. Commonly, similarities are defined between pairs of trajectories. However, pairwise similarities restrict the motion model to translations. Non-translational motion, such as rotation or scaling, is penalized in such an approach. We propose to define similarities on higher order tuples rather than pairs, which leads to hypergraphs. To apply spectral clustering, the hypergraph is transferred to an ordinary graph, an operation that can be interpreted as a projection. We propose a specific nonlinear projection via a regularized maximum operator, and show that it yields significant improvements both compared to pairwise similarities and alternative hypergraph projections.",
"Our goal is to segment a video sequence into moving objects and the world scene. In recent work, spectral embedding of point trajectories based on 2D motion cues accumulated from their lifespans, has shown to outperform factorization and per frame segmentation methods for video segmentation. The scale and kinematic nature of the moving objects and the background scene determine how close or far apart trajectories are placed in the spectral embedding. Such density variations may confuse clustering algorithms, causing over-fragmentation of object interiors. Therefore, instead of clustering in the spectral embedding, we propose detecting discontinuities of embedding density between spatially neighboring trajectories. Detected discontinuities are strong indicators of object boundaries and thus valuable for video segmentation. We propose a novel embedding discretization process that recovers from over-fragmentations by merging clusters according to discontinuity evidence along inter-cluster boundaries. For segmenting articulated objects, we combine motion grouping cues with a center-surround saliency operation, resulting in “context-aware”, spatially coherent, saliency maps. Figure-ground segmentation obtained from saliency thresholding, provides object connectedness constraints that alter motion based trajectory affinities, by keeping articulated parts together and separating disconnected in time objects. Finally, we introduce Gabriel graphs as effective per frame superpixel maps for converting trajectory clustering to dense image segmentation. Gabriel edges bridge large contour gaps via geometric reasoning without over-segmenting coherent image regions. We present experimental results of our method that outperform the state-of-the-art in challenging motion segmentation datasets.",
"",
"Unsupervised learning requires a grouping step that defines which data belong together. A natural way of grouping in images is the segmentation of objects or parts of objects. While pure bottom-up segmentation from static cues is well known to be ambiguous at the object level, the story changes as soon as objects move. In this paper, we present a method that uses long term point trajectories based on dense optical flow. Defining pair-wise distances between these trajectories allows to cluster them, which results in temporally consistent segmentations of moving objects in a video shot. In contrast to multi-body factorization, points and even whole objects may appear or disappear during the shot. We provide a benchmark dataset and an evaluation method for this so far uncovered setting.",
"Dense and accurate motion tracking is an important requirement for many video feature extraction algorithms. In this paper we provide a method for computing point trajectories based on a fast parallel implementation of a recent optical flow algorithm that tolerates fast motion. The parallel implementation of large displacement optical flow runs about 78× faster than the serial C++ version. This makes it practical to use in a variety of applications, among them point tracking. In the course of obtaining the fast implementation, we also proved that the fixed point matrix obtained in the optical flow technique is positive semi-definite. We compare the point tracking to the most commonly used motion tracker - the KLT tracker - on a number of sequences with ground truth motion. Our resulting technique tracks up to three orders of magnitude more points and is 46 more accurate than the KLT tracker. It also provides a tracking density of 48 and has an occlusion error of 3 compared to a density of 0.1 and occlusion error of 8 for the KLT tracker. Compared to the Particle Video tracker, we achieve 66 better accuracy while retaining the ability to handle large displacements while running an order of magnitude faster.",
"Video provides not only rich visual cues such as motion and appearance, but also much less explored long-range temporal interactions among objects. We aim to capture such interactions and to construct a powerful intermediate-level video representation for subsequent recognition. Motivated by this goal, we seek to obtain spatio-temporal oversegmentation of a video into regions that respect object boundaries and, at the same time, associate object pixels over many video frames. The contributions of this paper are two-fold. First, we develop an efficient spatiotemporal video segmentation algorithm, which naturally incorporates long-range motion cues from the past and future frames in the form of clusters of point tracks with coherent motion. Second, we devise a new track clustering cost function that includes occlusion reasoning, in the form of depth ordering constraints, as well as motion similarity along the tracks. We evaluate the proposed approach on a challenging set of video sequences of office scenes from feature length movies.",
"",
"Motion is a strong cue for unsupervised object-level grouping. In this paper, we demonstrate that motion will be exploited most effectively, if it is regarded over larger time windows. Opposed to classical two-frame optical flow, point trajectories that span hundreds of frames are less susceptible to short-term variations that hinder separating different objects. As a positive side effect, the resulting groupings are temporally consistent over a whole video shot, a property that requires tedious post-processing in the vast majority of existing approaches. We suggest working with a paradigm that starts with semi-dense motion cues first and that fills up textureless areas afterwards based on color. This paper also contributes the Freiburg-Berkeley motion segmentation (FBMS) dataset, a large, heterogeneous benchmark with 59 sequences and pixel-accurate ground truth annotation of moving objects."
]
} |
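As a runnable reference for the KLT-style tracking discussed above, the sketch below follows a sparse set of corners with OpenCV's pyramidal Lucas-Kanade tracker. The input path "video.mp4" is a placeholder and the video is assumed to open and contain trackable texture; dense-trajectory methods differ mainly by seeding points densely from optical flow rather than at corners.

```python
import cv2

cap = cv2.VideoCapture("video.mp4")      # placeholder input path
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Seed the tracker with sparse corners (KLT tracks a small set of points).
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                              qualityLevel=0.01, minDistance=7)
tracks = [[tuple(p.ravel())] for p in pts]

while True:
    ok, frame = cap.read()
    if not ok or pts is None or len(pts) == 0:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Pyramidal Lucas-Kanade: follow each point into the new frame.
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    keep = status.ravel() == 1           # drop points lost to occlusion or drift
    tracks = [t for t, k in zip(tracks, keep) if k]
    pts = nxt[keep].reshape(-1, 1, 2)
    for t, p in zip(tracks, pts):
        t.append(tuple(p.ravel()))
    prev_gray = gray

print(len(tracks), "surviving point trajectories")
```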
1702.08640 | 2749835604 | Conventional video segmentation approaches rely heavily on appearance models. Such methods often use appearance descriptors that have limited discriminative power under complex scenarios. To improve the segmentation performance, this paper presents a pyramid histogram-based confidence map that incorporates structure information into appearance statistics. It also combines geodesic distance-based dynamic models. Then, it employs an efficient measure of uncertainty propagation using local classifiers to determine the image regions where the object labels might be ambiguous. The final foreground cutout is obtained by refining on the uncertain regions. Additionally, to reduce manual labeling, our method determines the frames to be labeled by the human operator in a principled manner, which further boosts the segmentation performance and minimizes the labeling effort. Our extensive experimental analyses on two big benchmarks demonstrate that our solution achieves superior performance, favorable computational efficiency, and reduced manual labeling in comparison to the state of the art. | Compared with unsupervised video segmentation methods, interactive video segmentation extracts the foreground object from video sequences under the guidance of human interaction. Typical interactive methods propagate the given annotations to the entire video sequence by tracking them using spatiotemporal Markov random field based probabilistic models @cite_3 @cite_2 @cite_12 @cite_19 @cite_24 or frame-matching based propagation @cite_42 @cite_38 @cite_13 @cite_27, employing various features such as color, shape, and motion. | {
"cite_N": [
"@cite_38",
"@cite_42",
"@cite_3",
"@cite_24",
"@cite_19",
"@cite_27",
"@cite_2",
"@cite_13",
"@cite_12"
],
"mid": [
"",
"1975862670",
"2011953904",
"1560354729",
"1972702506",
"",
"2102840968",
"",
""
],
"abstract": [
"",
"Although tremendous success has been achieved for interactive object cutout in still images, accurately extracting dynamic objects in video remains a very challenging problem. Previous video cutout systems present two major limitations: (1) reliance on global statistics, thus lacking the ability to deal with complex and diverse scenes; and (2) treating segmentation as a global optimization, thus lacking a practical workflow that can guarantee the convergence of the systems to the desired results. We present Video SnapCut, a robust video object cutout system that significantly advances the state-of-the-art. In our system segmentation is achieved by the collaboration of a set of local classifiers, each adaptively integrating multiple local image features. We show how this segmentation paradigm naturally supports local user editing and propagates them across time. The object cutout system is completed with a novel coherent video matting technique. A comprehensive evaluation and comparison is presented, demonstrating the effectiveness of the proposed system at achieving high quality results, as well as the robustness of the system against various types of inputs.",
"This paper proposes a probabilistic graphical model for the problem of propagating labels in video sequences, also termed the label propagation problem. Given a limited amount of hand labelled pixels, typically the start and end frames of a chunk of video, an EM based algorithm propagates labels through the rest of the frames of the video sequence. As a result, the user obtains pixelwise labelled video sequences along with the class probabilities at each pixel. Our novel algorithm provides an essential tool to reduce tedious hand labelling of video sequences, thus producing copious amounts of useable ground truth data. A novel application of this algorithm is in semi-supervised learning of discriminative classifiers for video segmentation and scene parsing. The label propagation scheme can be based on pixel-wise correspondences obtained from motion estimation, image patch based similarities as seen in epitomic models or even the more recent, semantically consistent hierarchical regions. We compare the abilities of each of these variants, both via quantitative and qualitative studies against ground truth data. We then report studies on a state of the art Random forest classifier based video segmentation scheme, trained using fully ground truth data and with data obtained from label propagation. The results of this study strongly support and encourage the use of the proposed label propagation algorithm.",
"Manually segmenting and labeling objects in video sequences is quite tedious, yet such annotations are valuable for learning-based approaches to object and activity recognition. While automatic label propagation can help, existing methods simply propagate annotations from arbitrarily selected frames (e.g., the first one) and so may fail to best leverage the human effort invested. We define an active frame selection problem: select k frames for manual labeling, such that automatic pixel-level label propagation can proceed with minimal expected error. We propose a solution that directly ties a joint frame selection criterion to the predicted errors of a flow-based random field propagation model. It selects the set of k frames that together minimize the total mislabeling risk over the entire sequence. We derive an efficient dynamic programming solution to optimize the criterion. Further, we show how to automatically determine how many total frames k should be labeled in order to minimize the total manual effort spent labeling and correcting propagation errors. We demonstrate our method's clear advantages over several baselines, saving hours of human effort per video.",
"We present a novel, implementation friendly and occlusion aware semi-supervised video segmentation algorithm using tree structured graphical models, which delivers pixel labels alongwith their uncertainty estimates. Our motivation to employ supervision is to tackle a task-specific segmentation problem where the semantic objects are pre-defined by the user. The video model we propose for this problem is based on a tree structured approximation of a patch based undirected mixture model, which includes a novel time-series and a soft label Random Forest classifier participating in a feedback mechanism. We demonstrate the efficacy of our model in cutting out foreground objects and multi-class segmentation problems in lengthy and complex road scene sequences. Our results have wide applicability, including harvesting labelled video data for training discriminative models, shape pose articulation learning and large scale statistical analysis to develop priors for video segmentation.",
"",
"Video sequences contain many cues that may be used to segment objects in them, such as color, gradient, color adjacency, shape, temporal coherence, camera and object motion, and easily-trackable points. This paper introduces LIVEcut, a novel method for interactively selecting objects in video sequences by extracting and leveraging as much of this information as possible. Using a graph-cut optimization framework, LIVEcut propagates the selection forward frame by frame, allowing the user to correct any mistakes along the way if needed. Enhanced methods of extracting many of the features are provided. In order to use the most accurate information from the various potentially-conflicting features, each feature is automatically weighted locally based on its estimated accuracy using the previous implicitly-validated frame. Feature weights are further updated by learning from the user corrections required in the previous frame. The effectiveness of LIVEcut is shown through timing comparisons to other interactive methods, accuracy comparisons to unsupervised methods, and qualitatively through selections on various video sequences.",
"",
""
]
} |
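As a rough illustration of the propagation idea described in the row above, the following sketch warps the previous frame's mask into the current frame with dense optical flow; pixels whose warped value sits near 0.5 are natural candidates for user correction. This is a hedged sketch, not any cited system: the Farneback parameters and the negated-forward-flow approximation of the backward map are assumptions.

```python
# Minimal flow-based mask propagation sketch (assumed parameters).
import cv2
import numpy as np

def propagate_mask(prev_gray, curr_gray, prev_mask):
    """Warp the previous frame's binary mask into the current frame using
    dense Farneback optical flow; forward flow is negated as a cheap
    approximation of the backward map needed by cv2.remap."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev_gray.shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (xs - flow[..., 0]).astype(np.float32)
    map_y = (ys - flow[..., 1]).astype(np.float32)
    warped = cv2.remap(prev_mask.astype(np.float32), map_x, map_y,
                       cv2.INTER_LINEAR)
    # Soft values near 0.5 can be flagged as uncertain and shown to the user.
    return (warped > 0.5).astype(np.uint8), warped
```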
1702.08640 | 2749835604 | Conventional video segmentation approaches rely heavily on appearance models. Such methods often use appearance descriptors that have limited discriminative power under complex scenarios. To improve the segmentation performance, this paper presents a pyramid histogram-based confidence map that incorporates structure information into appearance statistics. It also combines geodesic distance-based dynamic models. Then, it employs an efficient measure of uncertainty propagation using local classifiers to determine the image regions where the object labels might be ambiguous. The final foreground cutout is obtained by refining the uncertain regions. Additionally, to reduce manual labeling, our method determines the frames to be labeled by the human operator in a principled manner, which further boosts the segmentation performance and minimizes the labeling effort. Our extensive experimental analyses on two large benchmarks demonstrate that our solution achieves superior performance, favorable computational efficiency, and reduced manual labeling in comparison to the state of the art. | In the graphics community, many supervised video segmentation approaches @cite_48 @cite_42 @cite_29 @cite_14 have also been proposed, often requiring intensive human interaction. An early work, Video SnapCut @cite_42 , incorporates a set of local classifiers using multiple local image features and integrates a propagation-based shape model into color models (a local-classifier sketch follows this row). Zhong @cite_29 introduces directional classifiers to handle temporal discontinuities while remaining robust to inseparable color statistics. More recently, @cite_14 obtains the segmentation at each frame by transferring the foreground mask using nearest-neighbor fields. | {
"cite_N": [
"@cite_14",
"@cite_48",
"@cite_29",
"@cite_42"
],
"mid": [
"2016163842",
"2117435890",
"2088780408",
"1975862670"
],
"abstract": [
"We introduce JumpCut, a new mask transfer and interpolation method for interactive video cutout. Given a source frame for which a foreground mask is already available, we compute an estimate of the foreground mask at another, typically non-successive, target frame. Observing that the background and foreground regions typically exhibit different motions, we leverage these differences by computing two separate nearest-neighbor fields (split-NNF) from the target to the source frame. These NNFs are then used to jointly predict a coherent labeling of the pixels in the target frame. The same split-NNF is also used to aid a novel edge classifier in detecting silhouette edges (S-edges) that separate the foreground from the background. A modified level set method is then applied to produce a clean mask, based on the pixel labels and the S-edges computed by the previous two steps. The resulting mask transfer method may also be used for coherently interpolating the foreground masks between two distant source frames. Our results demonstrate that the proposed method is significantly more accurate than the existing state-of-the-art on a wide variety of video sequences. Thus, it reduces the required amount of user effort, and provides a basis for an effective interactive video object cutout tool.",
"We present an interactive system for efficiently extracting foreground objects from a video. We extend previous min-cut based image segmentation techniques to the domain of video with four new contributions. We provide a novel painting-based user interface that allows users to easily indicate the foreground object across space and time. We introduce a hierarchical mean-shift preprocess in order to minimize the number of nodes that min-cut must operate on. Within the min-cut we also define new local cost functions to augment the global costs defined in earlier work. Finally, we extend 2D alpha matting methods designed for images to work with 3D video volumes. We demonstrate that our matting approach preserves smoothness across both space and time. Our interactive video cutout system allows users to quickly extract foreground objects from video sequences for use in a variety of applications including compositing onto new backgrounds and NPR cartoon style rendering.",
"Existing video object cutout systems can only deal with limited cases. They usually require detailed user interactions to segment real-life videos, which often suffer from both inseparable statistics (similar appearance between foreground and background) and temporal discontinuities (e.g. large movements, newly-exposed regions following disocclusion or topology change). In this paper, we present an efficient video cutout system to meet this challenge. A novel directional classifier is proposed to handle temporal discontinuities robustly, and then multiple classifiers are incorporated to cover a variety of cases. The outputs of these classifiers are integrated via another classifier, which is learnt from real examples. The foreground matte is solved by a coherent matting procedure, and remaining errors can be removed easily by additive spatio-temporal local editing. Experiments demonstrate that our system performs more robustly and more intelligently than existing systems in dealing with various input types, thus saving a lot of user labor and time.",
"Although tremendous success has been achieved for interactive object cutout in still images, accurately extracting dynamic objects in video remains a very challenging problem. Previous video cutout systems present two major limitations: (1) reliance on global statistics, thus lacking the ability to deal with complex and diverse scenes; and (2) treating segmentation as a global optimization, thus lacking a practical workflow that can guarantee the convergence of the systems to the desired results. We present Video SnapCut, a robust video object cutout system that significantly advances the state-of-the-art. In our system segmentation is achieved by the collaboration of a set of local classifiers, each adaptively integrating multiple local image features. We show how this segmentation paradigm naturally supports local user editing and propagates them across time. The object cutout system is completed with a novel coherent video matting technique. A comprehensive evaluation and comparison is presented, demonstrating the effectiveness of the proposed system at achieving high quality results, as well as the robustness of the system against various types of inputs."
]
} |
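The local-classifier idea behind SnapCut-style systems can be caricatured as follows: fit a tiny foreground/background color classifier inside a window straddling the object boundary, then score the same window in the next frame. Window size, color-only features, and the classifier choice are all assumptions of this illustration, not the published method.

```python
# Hedged local-classifier sketch (window size and features are assumptions).
import numpy as np
from sklearn.linear_model import LogisticRegression

def local_window_classifier(frame, mask, cx, cy, r=20):
    """Fit a foreground/background color classifier in a (2r+1)^2 window.
    The window is assumed to straddle the boundary so both classes occur,
    and to lie fully inside the image."""
    win = frame[cy-r:cy+r+1, cx-r:cx+r+1].reshape(-1, 3).astype(float) / 255.0
    lab = mask[cy-r:cy+r+1, cx-r:cx+r+1].reshape(-1)
    return LogisticRegression(max_iter=200).fit(win, lab)

def score_next_frame(clf, next_frame, cx, cy, r=20):
    """Per-pixel foreground probability in the same window of the next frame."""
    win = next_frame[cy-r:cy+r+1, cx-r:cx+r+1].reshape(-1, 3).astype(float) / 255.0
    return clf.predict_proba(win)[:, 1].reshape(2*r+1, 2*r+1)
```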
1702.08640 | 2749835604 | Conventional video segmentation approaches rely heavily on appearance models. Such methods often use appearance descriptors that have limited discriminative power under complex scenarios. To improve the segmentation performance, this paper presents a pyramid histogram-based confidence map that incorporates structure information into appearance statistics. It also combines geodesic distance-based dynamic models. Then, it employs an efficient measure of uncertainty propagation using local classifiers to determine the image regions where the object labels might be ambiguous. The final foreground cutout is obtained by refining the uncertain regions. Additionally, to reduce manual labeling, our method determines the frames to be labeled by the human operator in a principled manner, which further boosts the segmentation performance and minimizes the labeling effort. Our extensive experimental analyses on two large benchmarks demonstrate that our solution achieves superior performance, favorable computational efficiency, and reduced manual labeling in comparison to the state of the art. | Tracking of segmentation results in video sequences has also been investigated in many works @cite_47 @cite_24 @cite_30 @cite_44 @cite_35 @cite_43 @cite_15 @cite_28 in computer vision research. The segmentation is obtained either by solving an optimization problem with patch seams across frames @cite_49 , using fully connected graphs to model long-range spatiotemporal connections @cite_36 , operating in bilateral space @cite_21 , or adopting super-trajectories to capture richer information @cite_28 . Such methods often require human annotations in the first frame, which are then tracked through the rest of the video. Recently, the work in @cite_24 proposed an active frame selection method that selects a subset of frames which together minimize the total mislabeling risk over the entire sequence. However, this strategy is computationally expensive: the propagation error prediction is done using a forward and backward pixel flow model, which is impractical for user-in-the-loop applications. In contrast, our model for predicting propagation error is based on superpixel-level matching, which is more efficient (a superpixel-matching sketch follows this row). | {
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_15",
"@cite_28",
"@cite_36",
"@cite_21",
"@cite_24",
"@cite_44",
"@cite_43",
"@cite_49",
"@cite_47"
],
"mid": [
"",
"",
"",
"2108424265",
"2212077366",
"2463175074",
"1560354729",
"",
"1904248166",
"2009874829",
"2062563118"
],
"abstract": [
"",
"",
"",
"We present a novel image superpixel segmentation approach using the proposed lazy random walk (LRW) algorithm in this paper. Our method begins with initializing the seed positions and runs the LRW algorithm on the input image to obtain the probabilities of each pixel. Then, the boundaries of initial superpixels are obtained according to the probabilities and the commute time. The initial superpixels are iteratively optimized by the new energy function, which is defined on the commute time and the texture measurement. Our LRW algorithm with self-loops has the merits of segmenting the weak boundaries and complicated texture regions very well by the new global probability maps and the commute time strategy. The performance of superpixel is improved by relocating the center positions of superpixels and dividing the large superpixels into small ones with the proposed optimization algorithm. The experimental results have demonstrated that our method achieves better performance than previous superpixel approaches.",
"We present a novel approach to video segmentation using multiple object proposals. The problem is formulated as a minimization of a novel energy function defined over a fully connected graph of object proposals. Our model combines appearance with long-range point tracks, which is key to ensure robustness with respect to fast motion and occlusions over longer video sequences. As opposed to previous approaches based on object proposals, we do not seek the best per-frame object hypotheses to perform the segmentation. Instead, we combine multiple, potentially imperfect proposals to improve overall segmentation accuracy and ensure robustness to outliers. Overall, the basic algorithm consists of three steps. First, we generate a very large number of object proposals for each video frame using existing techniques. Next, we perform an SVM-based pruning step to retain only high quality proposals with sufficiently discriminative power. Finally, we determine the fore-and background classification by solving for the maximum a posteriori of a fully connected conditional random field, defined using our novel energy function. Experimental results on a well established dataset demonstrate that our method compares favorably to several recent state-of-the-art approaches.",
"In this work, we propose a novel approach to video segmentation that operates in bilateral space. We design a new energy on the vertices of a regularly sampled spatiotemporal bilateral grid, which can be solved efficiently using a standard graph cut label assignment. Using a bilateral formulation, the energy that we minimize implicitly approximates long-range, spatio-temporal connections between pixels while still containing only a small number of variables and only local graph edges. We compare to a number of recent methods, and show that our approach achieves state-of-the-art results on multiple benchmarks in a fraction of the runtime. Furthermore, our method scales linearly with image size, allowing for interactive feedback on real-world high resolution video.",
"Manually segmenting and labeling objects in video sequences is quite tedious, yet such annotations are valuable for learning-based approaches to object and activity recognition. While automatic label propagation can help, existing methods simply propagate annotations from arbitrarily selected frames (e.g., the first one) and so may fail to best leverage the human effort invested. We define an active frame selection problem: select k frames for manual labeling, such that automatic pixel-level label propagation can proceed with minimal expected error. We propose a solution that directly ties a joint frame selection criterion to the predicted errors of a flow-based random field propagation model. It selects the set of k frames that together minimize the total mislabeling risk over the entire sequence. We derive an efficient dynamic programming solution to optimize the criterion. Further, we show how to automatically determine how many total frames k should be labeled in order to minimize the total manual effort spent labeling and correcting propagation errors. We demonstrate our method's clear advantages over several baselines, saving hours of human effort per video.",
"",
"We present a novel Joint Online Tracking and Segmentation (JOTS) algorithm which integrates the multi-part tracking and segmentation into a unified energy optimization framework to handle the video segmentation task. The multi-part segmentation is posed as a pixel-level label assignment task with regularization according to the estimated part models, and tracking is formulated as estimating the part models based on the pixel labels, which in turn is used to refine the model. The multi-part tracking and segmentation are carried out iteratively to minimize the proposed objective function by a RANSAC-style approach. Extensive experiments on the SegTrack and SegTrack v2 databases demonstrate that the proposed algorithm performs favorably against the state-of-the-art methods.",
"In this paper, we propose a technique for video object segmentation using patch seams across frames. Typically, seams, which are connected paths of low energy, are utilised for retargeting, where the primary aim is to reduce the image size while preserving the salient image contents. Here, we adapt the formulation of seams for temporal label propagation. The energy function associated with the proposed video seams provides temporal linking of patches across frames, to accurately segment the object. The proposed energy function takes into account the similarity of patches along the seam, temporal consistency of motion and spatial coherency of seams. Label propagation is achieved with high fidelity in the critical boundary regions, utilising the proposed patch seams. To achieve this without additional overheads, we curtail the error propagation by formulating boundary regions as rough-sets. The proposed approach out-perform state-of-the-art supervised and unsupervised algorithms, on benchmark datasets.",
"We present a novel off-line algorithm for target segmentation and tracking in video. In our approach, video data is represented by a multi-label Markov Random Field model, and segmentation is accomplished by finding the minimum energy label assignment. We propose a novel energy formulation which incorporates both segmentation and motion estimation in a single framework. Our energy functions enforce motion coherence both within and across frames. We utilize state-of-the-art methods to efficiently optimize over a large number of discrete labels. In addition, we introduce a new ground-truth dataset, called Georgia Tech Segmentation and Tracking Dataset (GT-SegTrack), for the evaluation of segmentation accuracy in video tracking. We compare our method with several recent on-line tracking algorithms and provide quantitative and qualitative performance comparisons."
]
} |
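A hedged sketch of superpixel-level matching as an error proxy, in the spirit of the row above: superpixels are matched across frames by mean color, and a large best-match distance flags regions where propagation is likely to fail. The SLIC parameters and mean-color features are assumptions, not the paper's exact pipeline.

```python
# Superpixel matching as a cheap propagation-error proxy (assumed parameters).
import numpy as np
from skimage.segmentation import slic

def superpixel_match_cost(frame_a, frame_b, n_segments=300):
    """Nearest-neighbour matching of superpixel mean colors across two frames;
    a large matching distance is treated as a proxy for propagation error."""
    seg_a = slic(frame_a, n_segments=n_segments, compactness=10)
    seg_b = slic(frame_b, n_segments=n_segments, compactness=10)
    feat = lambda img, seg: np.array(
        [img[seg == s].mean(axis=0) for s in np.unique(seg)])
    fa, fb = feat(frame_a, seg_a), feat(frame_b, seg_b)
    d = np.linalg.norm(fa[:, None, :] - fb[None, :, :], axis=2)
    return d.min(axis=1)  # per-superpixel best matching cost in frame_a
```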
1702.08727 | 2593413655 | Algorithm learning is a core problem in artificial intelligence with significant implications for the level of automation that can be achieved by machines. Recently, deep learning methods have been emerging for synthesizing an algorithm from its input-output examples, the most successful being the Neural GPU, capable of learning multiplication. We present several improvements to the Neural GPU that substantially reduce training time and improve generalization. We introduce a new technique - hard nonlinearities with saturation costs - that has general applicability. We also introduce a technique of diagonal gates that can be applied to active-memory models. The proposed architecture is the first capable of learning decimal multiplication end-to-end. | Recurrent networks have a simple computation cell that is unrolled in time according to the length of an input sequence. Such an approach is employed by LSTM @cite_8 and GRU @cite_5 networks for sequence classification. They scale with sequence length, but each cell has a constant amount of memory, which essentially limits the learnable problems to regular languages (a minimal unrolled-GRU sketch follows this row). | {
"cite_N": [
"@cite_5",
"@cite_8"
],
"mid": [
"2950635152",
"2061079066"
],
"abstract": [
"In this paper, we propose a novel neural network model called RNN Encoder-Decoder that consists of two recurrent neural networks (RNN). One RNN encodes a sequence of symbols into a fixed-length vector representation, and the other decodes the representation into another sequence of symbols. The encoder and decoder of the proposed model are jointly trained to maximize the conditional probability of a target sequence given a source sequence. The performance of a statistical machine translation system is empirically found to improve by using the conditional probabilities of phrase pairs computed by the RNN Encoder-Decoder as an additional feature in the existing log-linear model. Qualitatively, we show that the proposed model learns a semantically and syntactically meaningful representation of linguistic phrases.",
"There has been a great deal of theoretical and experimental work in computer science on inductive inference systems, that is, systems that try to infer general rules from examples. However, a complete and applicable theory of such systems is still a distant goal. This survey highlights and explains the main ideas that have been developed in the study of inductive inference, with special emphasis on the relations between the general theory and the specific algorithms and implementations. 154 references."
]
} |
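A minimal NumPy GRU unrolled over a sequence makes the constant-memory point above concrete: the state h has a fixed size regardless of sequence length. All weights and dimensions here are placeholders.

```python
import numpy as np

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step; the same fixed-size state h is reused at every position."""
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sig(Wz @ x + Uz @ h)                 # update gate
    r = sig(Wr @ x + Ur @ h)                 # reset gate
    h_cand = np.tanh(Wh @ x + Uh @ (r * h))  # candidate state
    return (1 - z) * h + z * h_cand

d, n = 8, 16                                  # placeholder input/state sizes
rng = np.random.default_rng(0)
W = lambda a, b: rng.normal(size=(a, b)) * 0.1
Wz, Wr, Wh = W(n, d), W(n, d), W(n, d)
Uz, Ur, Uh = W(n, n), W(n, n), W(n, n)
h = np.zeros(n)
for x in rng.normal(size=(20, d)):            # unrolled over a length-20 input
    h = gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh)
```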
1702.08727 | 2593413655 | Algorithm learning is a core problem in artificial intelligence with significant implications for the level of automation that can be achieved by machines. Recently, deep learning methods have been emerging for synthesizing an algorithm from its input-output examples, the most successful being the Neural GPU, capable of learning multiplication. We present several improvements to the Neural GPU that substantially reduce training time and improve generalization. We introduce a new technique - hard nonlinearities with saturation costs - that has general applicability. We also introduce a technique of diagonal gates that can be applied to active-memory models. The proposed architecture is the first capable of learning decimal multiplication end-to-end. | There are several more elaborate architectures proposed for algorithm learning. @cite_22 developed a Neural Turing Machine capable of learning and executing simple programs such as repeat copying, simple priority sorting, and associative recall. They use complicated memory addressing to make the model differentiable (a content-addressing sketch follows this row). | {
"cite_N": [
"@cite_22"
],
"mid": [
"2950527759"
],
"abstract": [
"We extend the capabilities of neural networks by coupling them to external memory resources, which they can interact with by attentional processes. The combined system is analogous to a Turing Machine or Von Neumann architecture but is differentiable end-to-end, allowing it to be efficiently trained with gradient descent. Preliminary results demonstrate that Neural Turing Machines can infer simple algorithms such as copying, sorting, and associative recall from input and output examples."
]
} |
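The differentiable memory addressing mentioned above can be illustrated with NTM-style content addressing: read weights are a softmax over cosine similarities between a key and the memory rows, so every memory slot contributes and gradients flow everywhere. Memory size and the sharpening factor beta are placeholders.

```python
import numpy as np

def content_addressing(memory, key, beta=5.0):
    """Soft read weights over memory rows via cosine similarity (NTM-style).
    beta sharpens the focus; its value here is an arbitrary placeholder."""
    sim = memory @ key / (np.linalg.norm(memory, axis=1)
                          * np.linalg.norm(key) + 1e-8)
    w = np.exp(beta * sim)
    w /= w.sum()
    return w, w @ memory   # weights and the blended read vector

M = np.random.randn(128, 20)   # 128 memory slots of width 20 (assumed sizes)
w, read = content_addressing(M, np.random.randn(20))
```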
1702.08727 | 2593413655 | Algorithm learning is a core problem in artificial intelligence with significant implications for the level of automation that can be achieved by machines. Recently, deep learning methods have been emerging for synthesizing an algorithm from its input-output examples, the most successful being the Neural GPU, capable of learning multiplication. We present several improvements to the Neural GPU that substantially reduce training time and improve generalization. We introduce a new technique - hard nonlinearities with saturation costs - that has general applicability. We also introduce a technique of diagonal gates that can be applied to active-memory models. The proposed architecture is the first capable of learning decimal multiplication end-to-end. | Grid LSTM @cite_13 allows explicit unrolling along the time and memory dimensions and is able to learn tasks such as addition and memorization. | {
"cite_N": [
"@cite_13"
],
"mid": [
"1771459135"
],
"abstract": [
"This paper introduces Grid Long Short-Term Memory, a network of LSTM cells arranged in a multidimensional grid that can be applied to vectors, sequences or higher dimensional data such as images. The network differs from existing deep LSTM architectures in that the cells are connected between network layers as well as along the spatiotemporal dimensions of the data. The network provides a unified way of using LSTM for both deep and sequential computation. We apply the model to algorithmic tasks such as 15-digit integer addition and sequence memorization, where it is able to significantly outperform the standard LSTM. We then give results for two empirical tasks. We find that 2D Grid LSTM achieves 1.47 bits per character on the Wikipedia character prediction benchmark, which is state-of-the-art among neural approaches. In addition, we use the Grid LSTM to define a novel two-dimensional translation model, the Reencoder, and show that it outperforms a phrase-based reference system on a Chinese-to-English translation task."
]
} |
1702.08727 | 2593413655 | Algorithm learning is a core problem in artificial intelligence with significant implications for the level of automation that can be achieved by machines. Recently, deep learning methods have been emerging for synthesizing an algorithm from its input-output examples, the most successful being the Neural GPU, capable of learning multiplication. We present several improvements to the Neural GPU that substantially reduce training time and improve generalization. We introduce a new technique - hard nonlinearities with saturation costs - that has general applicability. We also introduce a technique of diagonal gates that can be applied to active-memory models. The proposed architecture is the first capable of learning decimal multiplication end-to-end. | Neural GPU @cite_25 is the current state of the art in deep algorithm learning. It can learn fairly complicated algorithms such as addition and binary multiplication, but only a small fraction of the trained models generalize well; the authors train 729 models to find one that does. They report no success in training decimal multiplication. @cite_21 is able to train the Neural GPU on decimal multiplication by using curriculum learning, where the same model is trained first on binary multiplication, then on base-4, and only then on decimal (a CGRU step sketch follows this row). | {
"cite_N": [
"@cite_21",
"@cite_25"
],
"mid": [
"2548137223",
"2173051530"
],
"abstract": [
"The Neural GPU is a recent model that can learn algorithms such as multi-digit binary addition and binary multiplication in a way that generalizes to inputs of arbitrary length. We show that there are two simple ways of improving the performance of the Neural GPU: by carefully designing a curriculum, and by increasing model size. The latter requires a memory efficient implementation, as a naive implementation of the Neural GPU is memory intensive. We find that these techniques increase the set of algorithmic problems that can be solved by the Neural GPU: we have been able to learn to perform all the arithmetic operations (and generalize to arbitrarily long numbers) when the arguments are given in the decimal representation (which, surprisingly, has not been possible before). We have also been able to train the Neural GPU to evaluate long arithmetic expressions with multiple operands that require respecting the precedence order of the operands, although these have succeeded only in their binary representation, and not with perfect accuracy. @PARASPLIT In addition, we gain insight into the Neural GPU by investigating its failure modes. We find that Neural GPUs that correctly generalize to arbitrarily long numbers still fail to compute the correct answer on highly-symmetric, atypical inputs: for example, a Neural GPU that achieves near-perfect generalization on decimal multiplication of up to 100-digit long numbers can fail on @math while succeeding at @math . These failure modes are reminiscent of adversarial examples.",
"Learning an algorithm from examples is a fundamental problem that has been widely studied. Recently it has been addressed using neural networks, in particular by Neural Turing Machines (NTMs). These are fully differentiable computers that use backpropagation to learn their own programming. Despite their appeal NTMs have a weakness that is caused by their sequential nature: they are not parallel and are are hard to train due to their large depth when unfolded. We present a neural network architecture to address this problem: the Neural GPU. It is based on a type of convolutional gated recurrent unit and, like the NTM, is computationally universal. Unlike the NTM, the Neural GPU is highly parallel which makes it easier to train and efficient to run. An essential property of algorithms is their ability to handle inputs of arbitrary size. We show that the Neural GPU can be trained on short instances of an algorithmic task and successfully generalize to long instances. We verified it on a number of tasks including long addition and long multiplication of numbers represented in binary. We train the Neural GPU on numbers with upto 20 bits and observe no errors whatsoever while testing it, even on much longer numbers. To achieve these results we introduce a technique for training deep recurrent networks: parameter sharing relaxation. We also found a small amount of dropout and gradient noise to have a large positive effect on learning and generalization."
]
} |
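A caricature of the Neural GPU's core step, as described above: a convolutional gated recurrent unit (CGRU) applied repeatedly to a state with parameters shared across steps. The real model uses a 2D state and full convolutions; this depthwise 1D version with assumed shapes is a simplification.

```python
import numpy as np

def conv_w3(s, K):
    """Depthwise width-3 convolution along the length axis of state s [L, m]."""
    p = np.pad(s, ((1, 1), (0, 0)))
    return K[0] * p[:-2] + K[1] * p[1:-1] + K[2] * p[2:]

def cgru_step(s, Ku, Kr, Kh):
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    u = sig(conv_w3(s, Ku))                   # update gate
    r = sig(conv_w3(s, Kr))                   # reset gate
    return u * s + (1 - u) * np.tanh(conv_w3(r * s, Kh))

L, m = 20, 24                                  # assumed length and channel count
rng = np.random.default_rng(0)
s = rng.normal(size=(L, m))
Ku, Kr, Kh = (rng.normal(size=(3, 1, m)) * 0.1 for _ in range(3))
for _ in range(L):                             # parameters shared across all steps
    s = cgru_step(s, Ku, Kr, Kh)
```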
1702.08563 | 2913524896 | Often, when multiple labels are obtained for a training example, it is assumed that there is an element of noise that must be accounted for. It has been shown that this disagreement can be considered signal instead of noise. In this work we investigate using soft labels for training data to improve generalization in machine learning models. However, using soft labels for training Deep Neural Networks (DNNs) is not practical due to the costs involved in obtaining multiple labels for large data sets. We propose soft label memorization-generalization (SLMG), a fine-tuning approach to using soft labels for training DNNs. We assume that differences in labels provided by human annotators represent ambiguity about the true label instead of noise. Experiments with SLMG demonstrate improved generalization performance on the Natural Language Inference (NLI) task. Our experiments show that by injecting a small percentage of soft label training data (0.03 of training set size) we can improve generalization performance over several baselines. | This work is related to the idea of "crowd truth" and to collecting and using annotations from the crowd @cite_15 @cite_24 . We use the CrowdTruth assumption that disagreement between annotators provides signal about data ambiguity and should be used in the learning process. In addition, this work is closely related to the idea of Label Distribution Learning (LDL) from Computer Vision (CV) @cite_7 . For training and testing, LDL assumes that @math is a probability distribution over labels. With LDL, the goal is to learn a distribution over labels. In our case, however, we would still like to learn a classifier that outputs a single class, while using the distribution over training labels as a measure of uncertainty in the data. We use the distribution over labels to represent the uncertainty associated with different examples in order to improve model training (a soft-target loss sketch follows this row). | {
"cite_N": [
"@cite_24",
"@cite_15",
"@cite_7"
],
"mid": [
"1494186835",
"191327111",
""
],
"abstract": [
"In this paper we introduce the CrowdTruth open-source software framework for machine-human computation, that implements a novel approach to gathering human annotation data for a variety of media (e.g. text, image, video). The CrowdTruth approach embodied in the software captures human semantics through a pipeline of four processes: a) combining various machine processing of media in order to better understand the input content and optimize its suitability for micro-tasks, thus optimize the time and cost of the crowdsourcing process; b) providing reusable human-computing task templates to collect the maximum diversity in the human interpretation, thus collect richer human semantics; c) implementing 'disagreement metrics', i.e. CrowdTruth metrics, to support deep analysis of the quality and semantics of the crowdsourcing data; and d) providing an interface to support data and results visualization. Instead of the traditional inter-annotator agreement, we use their disagreement as a useful signal to evaluate the data quality, ambiguity and vagueness. We demonstrate the applicability and robustness of this approach to a variety of problems across multiple domains. Moreover, we show the advantages of using open standards and the extensibility of the framework with new data modalities and annotation tasks.",
"Recently crowdsourcing services are often used to collect a large amount of labeled data for machine learning, since they provide us an easy way to get labels at very low cost and in a short period. The use of crowdsourcing has introduced a new challenge in machine learning, that is, coping with the variable quality of crowd-generated data. Although there have been many recent attempts to address the quality problem of multiple workers, only a few of the existing methods consider the problem of learning classifiers directly from such noisy data. All these methods modeled the true labels as latent variables, which resulted in non-convex optimization problems. In this paper, we propose a convex optimization formulation for learning from crowds without estimating the true labels by introducing personal models of the individual crowd workers. We also devise an efficient iterative method for solving the convex optimization problems by exploiting conditional independence structures in multiple classifiers. We evaluate the proposed method against three competing methods on synthetic data sets and a real crowdsourced data set and demonstrate that the proposed method outperforms the other three methods.",
""
]
} |
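A minimal sketch of training with soft labels as described above: the loss is cross-entropy against the annotator label distribution rather than a one-hot target, so annotator disagreement directly shapes the gradient. The three-way NLI label set and the example counts are assumptions.

```python
import numpy as np

def soft_label_loss(logits, label_dist):
    """Cross-entropy against a soft target: H(p, q) = -sum_i p_i log q_i.
    With a one-hot p this reduces to the usual classification loss."""
    q = np.exp(logits - logits.max())
    q /= q.sum()
    return -np.sum(label_dist * np.log(q + 1e-12))

# Three of five annotators chose "entailment": the disagreement becomes signal.
p = np.array([0.6, 0.2, 0.2])            # entailment / neutral / contradiction
print(soft_label_loss(np.array([2.0, 0.5, 0.1]), p))
```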
1702.08648 | 2745247244 | In this paper, we discuss a different type of semi-supervised setting: a coarse level of labeling is available for all observations, but the model has to learn a fine level of latent annotation for each one of them. Problems in this setting are likely to be encountered in many domains such as text categorization, protein function prediction, and image classification, as well as in exploratory scientific studies such as medical and genomics research. We consider this setting as simultaneously performed supervised classification (per the available coarse labels) and unsupervised clustering (within each one of the coarse labels) and propose a novel output layer modification called auto-clustering output layer (ACOL) that allows concurrent classification and clustering based on the Graph-based Activity Regularization (GAR) technique. As the proposed output layer modification duplicates the softmax nodes at the output layer for each class, GAR allows for competitive learning between these duplicates on a traditional error-correction learning framework to ultimately enable a neural network to learn the latent annotations in this partially supervised setup. We demonstrate how the coarse label supervision impacts performance and helps propagate useful clustering information between sub-classes. Comparative tests on three of the most popular image datasets, MNIST, SVHN and CIFAR-100, rigorously demonstrate the effectiveness and competitiveness of the proposed approach. | GAR is a scalable and efficient graph-based approach originally proposed for the classical type of semi-supervised learning problem, in which a large number of observations exist but only a small subset of them are labeled. In this paper, we adopt the same regularization terms and show that they can be employed to reveal the latent annotations in a supervised setup through ACOL (a duplicated-softmax sketch follows this row). Before describing ACOL, this section briefly summarizes the activity regularization proposed in @cite_19 . | {
"cite_N": [
"@cite_19"
],
"mid": [
"2964138341"
],
"abstract": [
"Abstract In this paper, we propose a novel graph-based approach for semi-supervised learning problems, which considers an adaptive adjacency of the examples throughout the unsupervised portion of the training. Adjacency of the examples is inferred using the predictions of a neural network model which is first initialized by a supervised pretraining. These predictions are then updated according to a novel unsupervised objective which regularizes another adjacency, now linking the output nodes. Regularizing the adjacency of the output nodes, inferred from the predictions of the network, creates an easier optimization problem and ultimately provides that the predictions of the network turn into the optimal embedding. Ultimately, the proposed framework provides an effective and scalable graph-based solution which is natural to the operational mechanism of deep neural networks. Our results show comparable performance with state-of-the-art generative approaches for semi-supervised learning on an easier-to-train, low-cost framework."
]
} |
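A hedged sketch of the duplicated-softmax idea behind ACOL: each coarse class owns k softmax duplicates, coarse supervision constrains only the per-class sums, and the winning duplicate indicates a latent sub-class. The duplicate layout, the pooling by summation, and k are assumptions of this illustration.

```python
import numpy as np

def acol_forward(logits, n_classes, k):
    """Softmax over n_classes*k duplicated nodes, then pool duplicates so
    the coarse labels only constrain per-class sums. Duplicates are assumed
    laid out class-by-class: [c0_d0, c0_d1, ..., c1_d0, ...]."""
    z = np.exp(logits - logits.max())
    soft = z / z.sum()                        # distribution over all duplicates
    per_class = soft.reshape(n_classes, k)
    coarse = per_class.sum(axis=1)            # what the coarse loss sees
    latent = per_class.argmax(axis=1)         # candidate sub-cluster per class
    return coarse, latent

coarse, latent = acol_forward(np.random.randn(10 * 2), n_classes=10, k=2)
```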
1702.08826 | 2952402017 | In many areas of data mining, data is collected from human beings. In this contribution, we ask the question of how people actually respond to ordinal scales. The main problem observed is that users tend to be volatile in their choices, i.e. complex cognitions do not always lead to the same decisions, but to distributions of possible decision outputs. This human uncertainty may sometimes have quite an impact on common data mining approaches and thus, the question of effectively modelling this so-called human uncertainty emerges naturally. Our contribution introduces two different approaches for modelling the human uncertainty of user responses. In doing so, we develop techniques in order to measure this uncertainty at the level of user inputs as well as the level of user cognition. With the support of comprehensive user experiments and large-scale simulations, we systematically compare both methodologies along with their implications for personalisation approaches. Our findings demonstrate that significant numbers of users submit something completely different (action) from what they really have in mind (cognition). Moreover, we demonstrate that statistically sound evidence with respect to algorithm assessment becomes quite hard to realise, especially when explicit rankings shall be built. | In this paper, we exemplify our approach using recommender systems @cite_9 and focus specifically on the validity of human uncertainty measurements in rating scenarios. | {
"cite_N": [
"@cite_9"
],
"mid": [
"1690919088"
],
"abstract": [
"The explosive growth of e-commerce and online environments has made the issue of information search and selection increasingly serious; users are overloaded by options to consider and they may not have the time or knowledge to personally evaluate these options. Recommender systems have proven to be a valuable way for online users to cope with the information overload and have become one of the most powerful and popular tools in electronic commerce. Correspondingly, various techniques for recommendation generation have been proposed. During the last decade, many of them have also been successfully deployed in commercial environments. Recommender Systems Handbook, an edited volume, is a multi-disciplinary effort that involves world-wide experts from diverse fields, such as artificial intelligence, human computer interaction, information technology, data mining, statistics, adaptive user interfaces, decision support systems, marketing, and consumer behavior. Theoreticians and practitioners from these fields continually seek techniques for more efficient, cost-effective and accurate recommender systems. This handbook aims to impose a degree of order on this diversity, by presenting a coherent and unified repository of recommender systems major concepts, theories, methodologies, trends, challenges and applications. Extensive artificial applications, a variety of real-world applications, and detailed case studies are included. Recommender Systems Handbook illustrates how this technology can support the user in decision-making, planning and purchasing processes. It works for well known corporations such as Amazon, Google, Microsoft and AT&T. This handbook is suitable for researchers and advanced-level students in computer science as a reference."
]
} |
1702.08826 | 2952402017 | In many areas of data mining, data is collected from human beings. In this contribution, we ask the question of how people actually respond to ordinal scales. The main problem observed is that users tend to be volatile in their choices, i.e. complex cognitions do not always lead to the same decisions, but to distributions of possible decision outputs. This human uncertainty may sometimes have quite an impact on common data mining approaches and thus, the question of effectively modelling this so-called human uncertainty emerges naturally. Our contribution introduces two different approaches for modelling the human uncertainty of user responses. In doing so, we develop techniques in order to measure this uncertainty at the level of user inputs as well as the level of user cognition. With the support of comprehensive user experiments and large-scale simulations, we systematically compare both methodologies along with their implications for personalisation approaches. Our findings demonstrate that significant numbers of users submit something completely different (action) from what they really have in mind (cognition). Moreover, we demonstrate that statistically sound evidence with respect to algorithm assessment becomes quite hard to realise, especially when explicit rankings shall be built. | The complexity of human perception and cognition can be addressed by means of latent distributions. This idea is widely used in cognitive science and in statistical models for ordinal data. For example, so-called CUB models for ordinal data @cite_12 assume the Gaussian as a latent response model underlying the observations. We adopt the idea of modelling user uncertainty by means of individual Gaussians, following the argumentation in @cite_12 , for constructing our individual response models (a latent-Gaussian sampling sketch follows this row). | {
"cite_N": [
"@cite_12"
],
"mid": [
"2060074015"
],
"abstract": [
"In this article we introduce a probability distribution generated by a mixture of discrete random variables to capture uncertainty, feeling, and overdispersion, possibly present in ordinal data surveys. The choice of the components of the new model is motivated by a study on the data generating process. Inferential issues concerning the maximum likelihood estimates and the validation steps are presented; then, some empirical analyses are given to support the usefulness of the approach. Discussion on further extensions of the model ends the article."
]
} |
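The latent Gaussian response model mentioned above can be illustrated by sampling: a user's cognition is a Gaussian over the latent scale, and each submitted rating is a discretized draw. Rounding-and-clipping is one simple discretization choice, assumed here; CUB-style models use more structured links.

```python
import numpy as np

def sample_ordinal_ratings(mu, sigma, n=1000, scale=(1, 5)):
    """Draw ratings from a latent Gaussian response model and round/clip to
    the ordinal scale; mu is the user's cognition, sigma the uncertainty."""
    draws = np.random.normal(mu, sigma, size=n)
    return np.clip(np.rint(draws), scale[0], scale[1]).astype(int)

ratings = sample_ordinal_ratings(mu=3.4, sigma=0.8)
values, counts = np.unique(ratings, return_counts=True)
print(dict(zip(values, counts / counts.sum())))  # empirical response distribution
```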
1702.08826 | 2952402017 | In many areas of data mining, data is collected from human beings. In this contribution, we ask the question of how people actually respond to ordinal scales. The main problem observed is that users tend to be volatile in their choices, i.e. complex cognitions do not always lead to the same decisions, but to distributions of possible decision outputs. This human uncertainty may sometimes have quite an impact on common data mining approaches and thus, the question of effectively modelling this so-called human uncertainty emerges naturally. Our contribution introduces two different approaches for modelling the human uncertainty of user responses. In doing so, we develop techniques in order to measure this uncertainty at the level of user inputs as well as the level of user cognition. With the support of comprehensive user experiments and large-scale simulations, we systematically compare both methodologies along with their implications for personalisation approaches. Our findings demonstrate that significant numbers of users submit something completely different (action) from what they really have in mind (cognition). Moreover, we demonstrate that statistically sound evidence with respect to algorithm assessment becomes quite hard to realise, especially when explicit rankings shall be built. | The impact of human uncertainty on recommendation results has been frequently discussed in recent work from different perspectives. Observations presented in @cite_5 @cite_6 have shown that it can significantly influence the results of recommender evaluation. The methodology applied there is based on repeating rating scenarios for the same user-item pairs and represents the current standard in the latest research, such as @cite_10 . In this paper, the same methodology is explored and compared to a new Bayesian method, which we have derived from the latest research on cognitions of uncertainty in educational scenarios @cite_1 (a re-rating variance sketch follows this row). | {
"cite_N": [
"@cite_1",
"@cite_5",
"@cite_10",
"@cite_6"
],
"mid": [
"2164918230",
"1872493208",
"2148943147",
""
],
"abstract": [
"The ideas of first year university students about measurement in the physics laboratory are explored. Student responses to written probes administered at the beginning of the year are compared to those written after a 12 week laboratory course. The 'point' and 'set' paradigms are used as a model to analyse the responses to the probes. At the heart of the point paradigm is that both action and reasoning are based solely on individual measurements in a data set. On the other hand, subscribing to the set paradigm implies an understanding that a series of measurements are to be viewed as a collective that can be modelled by theoretical constructs, such as the mean and standard deviation. The degree of consistent use of these paradigms by individual students across the sets of probes is investigated. Implications for effective teaching interventions in the physics laboratory are discussed.",
"Recent growing interest in predicting and influencing consumer behavior has generated a parallel increase in research efforts on Recommender Systems. Many of the state-of-the-art Recommender Systems algorithms rely on obtaining user ratings in order to later predict unknown ratings. An underlying assumption in this approach is that the user ratings can be treated as ground truth of the user's taste. However, users are inconsistent in giving their feedback, thus introducing an unknown amount of noise that challenges the validity of this assumption. In this paper, we tackle the problem of analyzing and characterizing the noise in user feedback through ratings of movies. We present a user study aimed at quantifying the noise in user ratings that is due to inconsistencies. We measure RMSE values that range from 0.557 to 0.8156. We also analyze how factors such as item sorting and time of rating affect this noise.",
"A common approach to designing Recommender Systems (RS) consists of asking users to explicitly rate items in order to collect feedback about their preferences. However, users have been shown to be inconsistent and to introduce a non-negligible amount of natural noise in their ratings that affects the accuracy of the predictions. In this paper, we present a novel approach to improve RS accuracy by reducing the natural noise in the input data via a preprocessing step. In order to quantitatively understand the impact of natural noise, we first analyze the response of common recommendation algorithms to this noise. Next, we propose a novel algorithm to denoise existing datasets by means of re-rating: i.e. by asking users to rate previously rated items again. This denoising step yields very significant accuracy improvements. However, re-rating all items in the original dataset is unpractical. Therefore, we study the accuracy gains obtained when re-rating only some of the ratings.In particular, we propose two partial denoising strategies: data and user-dependent denoising. Finally, we compare the value of adding a rating of an unseen item vs. re-rating an item. We conclude with a proposal for RS to improve the quality of their user data and hence their accuracy: asking users to re-rate items might, in some circumstances, be more beneficial than asking users to rate unseen items.",
""
]
} |
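A small sketch of the re-rating methodology referenced above: repeated ratings for the same user-item pairs yield a per-pair spread and an overall test-retest RMSE. The toy data and the sample-standard-deviation estimator are assumptions of this illustration.

```python
import numpy as np

def rerating_uncertainty(ratings):
    """ratings: dict mapping (user, item) -> list of repeated ratings.
    Returns per-pair sample std and the overall test-retest RMSE."""
    stds, sq = {}, []
    for pair, r in ratings.items():
        r = np.asarray(r, dtype=float)
        stds[pair] = r.std(ddof=1)            # needs >= 2 repetitions per pair
        sq.extend((r - r.mean()) ** 2)
    return stds, float(np.sqrt(np.mean(sq)))

stds, rmse = rerating_uncertainty({("u1", "i1"): [4, 5, 4],
                                   ("u1", "i2"): [2, 2, 3]})
```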
1702.08675 | 2951418687 | We design a novel fully convolutional network architecture for shapes, denoted by Shape Fully Convolutional Networks (SFCN). 3D shapes are represented as graph structures in the SFCN architecture, based on novel graph convolution and pooling operations, which are similar to convolution and pooling operations used on images. Meanwhile, to build our SFCN architecture in the original image segmentation fully convolutional network (FCN) architecture, we also design and implement a generating operation with a bridging function. This ensures that the convolution and pooling operations we have designed can be successfully applied in the original FCN architecture. In this paper, we also present a new shape segmentation approach based on SFCN. Furthermore, we allow more general and challenging input, such as mixed datasets of different categories of shapes, which demonstrates the generalisation ability of our approach. In our approach, SFCNs are trained triangles-to-triangles by using three low-level geometric features as input. Finally, feature voting-based multi-label graph cuts are adopted to optimise the segmentation results obtained by SFCN prediction. The experimental results show that our method can effectively learn and predict mixed shape datasets of either similar or different characteristics, and achieve excellent segmentation results. | Supervised methods train on a collection of labelled 3D shapes using advanced machine learning approaches @cite_7 @cite_4 @cite_23 @cite_29 @cite_11 . For example, @cite_7 used Conditional Random Fields (CRF) to model and learn from labelled examples in order to realize component segmentation and labelling of 3D mesh shapes (a CRF-style labeling sketch follows this row). @cite_11 first projected 3D shapes to 2D space, and the labelling results in 2D space were then projected back onto the 3D shapes for mesh labelling. @cite_23 use Extreme Learning Machines (ELM) to achieve consistent segmentation of unknown shapes. @cite_4 @cite_49 applied deep architectures to produce mesh segmentations. | {
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_29",
"@cite_23",
"@cite_49",
"@cite_11"
],
"mid": [
"2254644702",
"2106723645",
"2003940193",
"2027069615",
"2565662353",
"2022922662"
],
"abstract": [
"This article presents a novel approach for 3D mesh labeling by using deep Convolutional Neural Networks (CNNs). Many previous methods on 3D mesh labeling achieve impressive performances by using predefined geometric features. However, the generalization abilities of such low-level features, which are heuristically designed to process specific meshes, are often insufficient to handle all types of meshes. To address this problem, we propose to learn a robust mesh representation that can adapt to various 3D meshes by using CNNs. In our approach, CNNs are first trained in a supervised manner by using a large pool of classical geometric features. In the training process, these low-level features are nonlinearly combined and hierarchically compressed to generate a compact and effective representation for each triangle on the mesh. Based on the trained CNNs and the mesh representations, a label vector is initialized for each triangle to indicate its probabilities of belonging to various object parts. Eventually, a graph-based mesh-labeling algorithm is adopted to optimize the labels of triangles by considering the label consistencies. Experimental results on several public benchmarks show that the proposed approach is robust for various 3D meshes, and outperforms state-of-the-art approaches as well as classic learning algorithms in recognizing mesh labels.",
"This paper presents a data-driven approach to simultaneous segmentation and labeling of parts in 3D meshes. An objective function is formulated as a Conditional Random Field model, with terms assessing the consistency of faces with labels, and terms between labels of neighboring faces. The objective function is learned from a collection of labeled training meshes. The algorithm uses hundreds of geometric and contextual label features and learns different types of segmentations for different tasks, without requiring manual parameter tuning. Our algorithm achieves a significant improvement in results over the state-of-the-art when evaluated on the Princeton Segmentation Benchmark, often producing segmentations and labelings comparable to those produced by humans.",
"Unsupervised co-analysis of a set of shapes is a difficult problem since the geometry of the shapes alone cannot always fully describe the semantics of the shape parts. In this paper, we propose a semi-supervised learning method where the user actively assists in the co-analysis by iteratively providing inputs that progressively constrain the system. We introduce a novel constrained clustering method based on a spring system which embeds elements to better respect their inter-distances in feature space together with the user-given set of constraints. We also present an active learning method that suggests to the user where his input is likely to be the most effective in refining the results. We show that each single pair of constraints affects many relations across the set. Thus, the method requires only a sparse set of constraints to quickly converge toward a consistent and error-free semantic labeling of the set.",
"We propose a fast method for 3D shape segmentation and labeling via Extreme Learning Machine (ELM). Given a set of example shapes with labeled segmentation, we train an ELM classifier and use it to produce initial segmentation for test shapes. Based on the initial segmentation, we compute the final smooth segmentation through a graph-cut optimization constrained by the super-face boundaries obtained by over-segmentation and the active contours computed from ELM segmentation. Experimental results show that our method achieves comparable results against the state-of-the-arts, but reduces the training time by approximately two orders of magnitude, both for face-level and super-face-level, making it scale well for large datasets. Based on such notable improvement, we demonstrate the application of our method for fast online sequential learning for 3D shape segmentation at face level, as well as realtime sequential learning at super-face level.",
"This paper introduces a deep architecture for segmenting 3D objects into their labeled semantic parts. Our architecture combines image-based Fully Convolutional Networks (FCNs) and surface-based Conditional Random Fields (CRFs) to yield coherent segmentations of 3D shapes. The image-based FCNs are used for efficient view-based reasoning about 3D object parts. Through a special projection layer, FCN outputs are effectively aggregated across multiple views and scales, then are projected onto the 3D object surfaces. Finally, a surface-based CRF combines the projected outputs with geometric consistency cues to yield coherent segmentations. The whole architecture (multi-view FCNs and CRF) is trained end-to-end. Our approach significantly outperforms the existing state-of-the-art methods in the currently largest segmentation benchmark (ShapeNet). Finally, we demonstrate promising segmentation results on noisy 3D shapes acquired from consumer-grade depth cameras.",
"We introduce projective analysis for semantic segmentation and labeling of 3D shapes. The analysis treats an input 3D shape as a collection of 2D projections, labels each projection by transferring knowledge from existing labeled images, and back-projects and fuses the labelings on the 3D shape. The image-space analysis involves matching projected binary images of 3D objects based on a novel bi-class Hausdorff distance. The distance is topology-aware by accounting for internal holes in the 2D figures and it is applied to piecewise-linearly warped object projections to compensate for part scaling and view discrepancies. Projective analysis simplifies the processing task by working in a lower-dimensional space, circumvents the requirement of having complete and well-modeled 3D shapes, and addresses the data challenge for 3D shape analysis by leveraging the massive available image data. A large and dense labeled set ensures that the labeling of a given projected image can be inferred from closely matched labeled images. We demonstrate semantic labeling of imperfect (e.g., incomplete or self-intersecting) 3D models which would be otherwise difficult to analyze without taking the projective analysis approach."
]
} |
1702.08675 | 2951418687 | We design a novel fully convolutional network architecture for shapes, denoted by Shape Fully Convolutional Networks (SFCN). 3D shapes are represented as graph structures in the SFCN architecture, based on novel graph convolution and pooling operations, which are similar to the convolution and pooling operations used on images. Meanwhile, to build our SFCN architecture within the original image segmentation fully convolutional network (FCN) architecture, we also design and implement a generating operation with a bridging function. This ensures that the convolution and pooling operations we have designed can be successfully applied in the original FCN architecture. In this paper, we also present a new shape segmentation approach based on SFCN. Furthermore, we allow more general and challenging input, such as mixed datasets of different categories of shapes, which demonstrates the generalisation ability of our method. In our approach, SFCNs are trained triangles-to-triangles using three low-level geometric features as input. Finally, feature voting-based multi-label graph cuts is adopted to optimise the segmentation results obtained by SFCN prediction. The experimental results show that our method can effectively learn and predict mixed shape datasets of either similar or different characteristics, and achieves excellent segmentation results. | Many research methods can build a segmentation model @cite_48 @cite_2 @cite_20 @cite_0 @cite_45 and achieve joint segmentation without any label information. There are predominantly two kinds of unsupervised learning methods: matching and clustering. Matching methods compute correspondences between pairs of 3D shapes based on the similarity of corresponding units, obtained by correlation calculation; the segmentation of one shape is then transferred to its matched shape to realize joint segmentation @cite_27 @cite_48 of 3D shapes. By contrast, clustering methods analyze all the 3D shapes in the model set and cluster the corresponding units of the 3D shapes into the same class; a segmentation model is then obtained and applied to produce consistent segmentation @cite_20 @cite_0 @cite_45. | {
"cite_N": [
"@cite_48",
"@cite_0",
"@cite_27",
"@cite_45",
"@cite_2",
"@cite_20"
],
"mid": [
"2029524207",
"2103837466",
"2148290998",
"2042239476",
"2106210044",
"2053008628"
],
"abstract": [
"We present an approach to segmenting shapes in a heterogenous shape database. Our approach segments the shapes jointly, utilizing features from multiple shapes to improve the segmentation of each. The approach is entirely unsupervised and is based on an integer quadratic programming formulation of the joint segmentation problem. The program optimizes over possible segmentations of individual shapes as well as over possible correspondences between segments from multiple shapes. The integer quadratic program is solved via a linear programming relaxation, using a block coordinate descent procedure that makes the optimization feasible for large databases. We evaluate the presented approach on the Princeton segmentation benchmark and show that joint shape segmentation significantly outperforms single-shape segmentation techniques.",
"This paper presents an unsupervised algorithm for co-segmentation of a set of 3D shapes of the same family. Taking the over-segmentation results as input, our approach clusters the primitive patches to generate an initial guess. Then, it iteratively builds a statistical model to describe each cluster of parts from the previous estimation, and employs the multi-label optimization to improve the co-segmentation results. In contrast to the existing ''one-shot'' algorithms, our method is superior in that it can improve the co-segmentation results automatically. The experimental results on the Princeton Segmentation Benchmark demonstrate that our approach is able to co-segment 3D shapes with significant variability and achieves comparable performance to the existing supervised algorithms and better performance than the unsupervised ones.",
"Following the increasing demand to make the creation and manipulation of 3D geometry simpler and more accessible, we introduce a modeling approach that allows even novice users to create sophisticated models in minutes. Our approach is based on the observation that in many modeling settings users create models which belong to a small set of model classes, such as humans or quadrupeds. The models within each class typically share a common component structure. Following this observation, we introduce a modeling system which utilizes this common component structure allowing users to create new models by shuffling interchangeable components between existing models. To enable shuffling, we develop a method for computing a compatible segmentation of input models into meaningful, interchangeable components. Using this segmentation our system lets users create new models with a few mouse clicks, in a fraction of the time required by previous composition techniques. We demonstrate that the shuffling paradigm allows for easy and fast creation of a rich geometric content.",
"We perform co-analysis of a set of man-made 3D objects to allow the creation of novel instances derived from the set. We analyze the objects at the part level and treat the anisotropic part scales as a shape style. The co-analysis then allows style transfer to synthesize new objects. The key to co-analysis is part correspondence, where a major challenge is the handling of large style variations and diverse geometric content in the shape set. We propose style-content separation as a means to address this challenge. Specifically, we define a correspondence-free style signature for style clustering. We show that confining analysis to within a style cluster facilitates tasks such as co-segmentation, content classification, and deformation-driven part correspondence. With part correspondence between each pair of shapes in the set, style transfer can be easily performed. We demonstrate our analysis and synthesis results on several sets of man-made objects with style and content variations.",
"We introduce an algorithm for unsupervised co-segmentation of a set of shapes so as to reveal the semantic shape parts and establish their correspondence across the set. The input set may exhibit significant shape variability where the shapes do not admit proper spatial alignment and the corresponding parts in any pair of shapes may be geometrically dissimilar. Our algorithm can handle such challenging input sets since, first, we perform co-analysis in a descriptor space, where a combination of shape descriptors relates the parts independently of their pose, location, and cardinality. Secondly, we exploit a key enabling feature of the input set, namely, dissimilar parts may be \"linked\" through third-parties present in the set. The links are derived from the pairwise similarities between the parts' descriptors. To reveal such linkages, which may manifest themselves as anisotropic and non-linear structures in the descriptor space, we perform spectral clustering with the aid of diffusion maps. We show that with our approach, we are able to co-segment sets of shapes that possess significant variability, achieving results that are close to those of a supervised approach.",
"We present a novel algorithm for automatically co-segmenting a set of shapes from a common family into consistent parts. Starting from over-segmentations of shapes, our approach generates the segmentations by grouping the primitive patches of the shapes directly and obtains their correspondences simultaneously. The core of the algorithm is to compute an affinity matrix where each entry encodes the similarity between two patches, which is measured based on the geometric features of patches. Instead of concatenating the different features into one feature descriptor, we formulate co-segmentation into a subspace clustering problem in multiple feature spaces. Specifically, to fuse multiple features, we propose a new formulation of optimization with a consistent penalty, which facilitates both the identification of most similar patches and selection of master features for two similar patches. Therefore the affinity matrices for various features are sparsity-consistent and the similarity between a pair of patches may be determined by part of (instead of all) features. Experimental results have shown how our algorithm jointly extracts consistent parts across the collection in a good manner. © 2012 Wiley Periodicals, Inc."
]
} |
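As an aside to the SFCN record above: the core idea of a graph convolution over mesh triangles can be sketched in a few lines. This is a generic neighbor-averaging convolution, not the paper's actual operator; the feature dimensions, adjacency encoding and function names below are illustrative assumptions.

```python
import numpy as np

def graph_conv(features, adjacency, weight, bias):
    """Naive graph convolution on a triangle mesh: average each triangle's
    features with those of its edge-adjacent neighbors, then apply a
    learnable linear map and a ReLU."""
    agg = np.zeros_like(features)
    for i, nbrs in enumerate(adjacency):
        idx = [i] + list(nbrs)          # the triangle itself plus neighbors
        agg[i] = features[idx].mean(axis=0)
    return np.maximum(agg @ weight + bias, 0.0)

# toy strip of 4 triangles with 3 low-level geometric features each
feats = np.random.randn(4, 3)
adj = [[1], [0, 2], [1, 3], [2]]        # edge-adjacency lists
W, b = 0.1 * np.random.randn(3, 8), np.zeros(8)
print(graph_conv(feats, adj, W, b).shape)   # (4, 8)
```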
1702.08653 | 2625993720 | We introduce a new paradigm of learning for reasoning, understanding, and prediction, as well as the scaffolding network to implement this paradigm. The scaffolding network embodies an incremental learning approach that is formulated as a teacher-student network architecture to teach machines how to understand text and do reasoning. The key to our computational scaffolding approach is the interaction between the teacher and the student through sequential questioning. The student observes each sentence in the text incrementally, and it uses an attention-based neural net to discover and register the key information in relation to its current memory. Meanwhile, the teacher asks questions about the observed text, and the student network gets rewarded by correctly answering these questions. The entire network is updated continually using reinforcement learning. Our experimental results on synthetic and real datasets show that the scaffolding network not only outperforms state-of-the-art methods but also learns to do reasoning in a scalable way even with little human-generated input. | Another inspiration for this work comes from the incremental learning framework, in which the input text is received in sequences of sentences and encoded separately. While @cite_15 uses a Markov Decision Process (MDP) framework to learn when to stop reading to answer a focused question, @cite_20 builds an incremental classifier for a trivia game to decide whether additional features are needed. The closest to our work is Recurrent Entity Networks @cite_4, in which the authors learn an internal state representation of each sentence in sequential order and store it in memory via parallel recurrent units with tied weights and gating functions. Although our network's structure has similarities, they use a supervised objective, while we view the task as an MDP @cite_18, in which our agent chooses its actions to maximize its reward, learning to encode text by answering the teacher's questions. | {
"cite_N": [
"@cite_18",
"@cite_15",
"@cite_4",
"@cite_20"
],
"mid": [
"1789842862",
"2403702038",
"2951976932",
""
],
"abstract": [
"This paper addresses cost sensitive classi cation in the setting where there are costs for measuring each attribute as well as costs for misclassi cation errors We show how to formulate this as a Markov Decision Pro cess in which the transition model is learned from the training data Speci cally we as sume a set of training examples in which all attributes and the true class have been measured We describe a learning algorithm based on the AO heuristic search procedure that searches for the classi cation policy with minimum expected cost We provide an ad missible heuristic for AO that substantially reduces the number of nodes that need to be expanded particularly when attribute mea surement costs are high To further prune the search space we introduce a statistical prun ing heuristic based on the principle that if the values of two policies are statistically in distinguishable on the training data then we can prune one of the policies from the AO search space Experiments with realis tic and synthetic data demonstrate that these heuristics can substantially reduce the mem ory needed for AO search without signi cantly a ecting the quality of the learned pol icy Hence these heuristics expand the range of cost sensitive learning problems for which AO is feasible",
"Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (, 2014a). We show similar result patterns on data extracted from an online concierge service.",
"We introduce a new model, the Recurrent Entity Network (EntNet). It is equipped with a dynamic long-term memory which allows it to maintain and update a representation of the state of the world as it receives new data. For language understanding tasks, it can reason on-the-fly as it reads text, not just when it is required to answer a question or respond as is the case for a Memory Network (, 2015). Like a Neural Turing Machine or Differentiable Neural Computer (, 2014; 2016) it maintains a fixed size memory and can learn to perform location and content-based read and write operations. However, unlike those models it has a simple parallel architecture in which several memory locations can be updated simultaneously. The EntNet sets a new state-of-the-art on the bAbI tasks, and is the first method to solve all the tasks in the 10k training examples setting. We also demonstrate that it can solve a reasoning task which requires a large number of supporting facts, which other methods are not able to solve, and can generalize past its training horizon. It can also be practically used on large scale datasets such as Children's Book Test, where it obtains competitive performance, reading the story in a single pass.",
""
]
} |
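Since the Recurrent Entity Network is the pivot of the comparison above, a minimal numpy sketch of its parallel gated memory update may help. It follows my reading of the EntNet equations; the shapes, initialization and normalization details are assumptions, not a faithful reimplementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def entnet_step(s, H, keys, U, V, W):
    """One parallel gated memory update, EntNet-style.

    s:    (d,) encoding of the current sentence
    H:    (m, d) memory slot contents;  keys: (m, d) slot keys
    U, V, W: (d, d) shared learnable matrices"""
    gate = sigmoid(H @ s + keys @ s)                # location + content gate
    cand = np.tanh(H @ U.T + keys @ V.T + s @ W.T)  # candidate slot updates
    H = H + gate[:, None] * cand
    return H / (np.linalg.norm(H, axis=1, keepdims=True) + 1e-8)

rng = np.random.default_rng(0)
d, m = 16, 5
H, keys = rng.standard_normal((m, d)), rng.standard_normal((m, d))
U, V, W = (0.1 * rng.standard_normal((d, d)) for _ in range(3))
H = entnet_step(rng.standard_normal(d), H, keys, U, V, W)
print(H.shape)   # (5, 16)
```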
1702.08895 | 2594091765 | This paper presents minimax rates for density estimation when the data dimension @math is allowed to grow with the number of observations @math rather than remaining fixed as in previous analyses. We prove a non-asymptotic lower bound which gives the worst-case rate over standard classes of smooth densities, and we show that kernel density estimators achieve this rate. We also give oracle choices for the bandwidth and derive the fastest rate @math can grow with @math to maintain estimation consistency. | Unlike in the density estimation setting, there are some related results in the information theory literature which endeavor to address the limits of estimation under the triangular array. Essentially, this work examines the estimation of the joint distribution of a @math-block of random variables observed in sequence from an ergodic process supported on a finite set of points. The authors show that if @math grows like @math, then these joint distributions can be estimated consistently. An extension of these results to the case of a Markov random field embedded in a higher dimension is given by @cite_0. Our results are slightly slower than these (see cor:d-rate), but estimating continuous densities rather than finitely supported distributions is more difficult. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2039898347"
],
"abstract": [
"The paper deals with upper and lower bounds for the quality of (probability) density estimation. Connections are established between these problems and the theory of approximation of functions. Particularly, it is demonstrated how some of Kolmogorov's concepts work"
]
} |
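To make the kernel-density setting concrete, here is a sketch of a Gaussian kernel density estimator with the oracle-style bandwidth n**(-1/(2*beta + d)) for beta-smooth densities; the smoothness value and the constants are placeholders, not the paper's exact choices.

```python
import numpy as np

def kde(x_query, X, beta=2.0):
    """Gaussian KDE with bandwidth h = n**(-1/(2*beta + d)), the usual
    choice attaining the minimax rate for beta-smooth densities."""
    n, d = X.shape
    h = n ** (-1.0 / (2.0 * beta + d))
    diffs = (x_query[None, :] - X) / h
    kernel = np.exp(-0.5 * (diffs ** 2).sum(axis=1)) / (2 * np.pi) ** (d / 2)
    return kernel.mean() / h ** d

rng = np.random.default_rng(1)
X = rng.standard_normal((2000, 3))      # n = 2000 samples in dimension d = 3
print(kde(np.zeros(3), X))              # N(0, I_3) density at 0 is ~0.0635
```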
1702.08811 | 2592141621 | The learning of domain-invariant representations in the context of domain adaptation with neural networks is considered. We propose a new regularization method that minimizes the discrepancy between domain-specific latent feature representations directly in the hidden activation space. Although some standard distribution matching approaches exist that can be interpreted as the matching of weighted sums of moments, e.g. Maximum Mean Discrepancy (MMD), an explicit order-wise matching of higher order moments has not been considered before. We propose to match the higher order central moments of probability distributions by means of order-wise moment differences. Our model does not require computationally expensive distance and kernel matrix computations. We utilize the equivalent representation of probability distributions by moment sequences to define a new distance function, called Central Moment Discrepancy (CMD). We prove that CMD is a metric on the set of probability distributions on a compact interval. We further prove that convergence of probability distributions on compact intervals w.r.t. the new metric implies convergence in distribution of the respective random variables. We test our approach on two different benchmark data sets for object recognition (Office) and sentiment analysis of product reviews (Amazon reviews). CMD achieves a new state-of-the-art performance on most domain adaptation tasks of Office and outperforms networks trained with MMD, Variational Fair Autoencoders and Domain Adversarial Neural Networks on Amazon reviews. In addition, a post-hoc parameter sensitivity analysis shows that the new approach is stable w.r.t. parameter changes in a certain interval. The source code of the experiments is publicly available. | Recently, several measures @math for this objective have been proposed. One approach is the Proxy @math-distance, given by @math, where @math is the generalization error on the problem of discriminating between source and target samples. @cite_9 compute the value @math with a neural network classifier that is simultaneously trained with the original network by means of a gradient reversal layer. They call their approach domain-adversarial neural networks. Unfortunately, this approach requires training an additional classifier, which entails new parameters, additional computation time and extra validation procedures. | {
"cite_N": [
"@cite_9"
],
"mid": [
"1731081199"
],
"abstract": [
"We introduce a new representation learning approach for domain adaptation, in which data at training and test time come from similar but different distributions. Our approach is directly inspired by the theory on domain adaptation suggesting that, for effective domain transfer to be achieved, predictions must be made based on features that cannot discriminate between the training (source) and test (target) domains. The approach implements this idea in the context of neural network architectures that are trained on labeled data from the source domain and unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of features that are (i) discriminative for the main learning task on the source domain and (ii) indiscriminate with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with few standard layers and a new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation and stochastic gradient descent, and can thus be implemented with little effort using any of the deep learning packages. We demonstrate the success of our approach for two distinct classification problems (document sentiment analysis and image classification), where state-of-the-art domain adaptation performance on standard benchmarks is achieved. We also validate the approach for descriptor learning task in the context of person re-identification application."
]
} |
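The Proxy @math-distance described above is easy to estimate in practice. The sketch below uses a scikit-learn logistic regression as the domain discriminator instead of the gradient-reversal network of @cite_9, so treat the classifier choice and the train/test split as illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def proxy_a_distance(source, target):
    """d_A = 2 * (1 - 2 * err): err is the held-out error of a classifier
    trained to tell source samples from target samples."""
    X = np.vstack([source, target])
    y = np.r_[np.zeros(len(source)), np.ones(len(target))]
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.5, random_state=0)
    err = 1.0 - LogisticRegression(max_iter=1000).fit(Xtr, ytr).score(Xte, yte)
    return 2.0 * (1.0 - 2.0 * err)

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, (500, 10))
tgt = rng.normal(0.5, 1.0, (500, 10))   # mean-shifted target domain
print(proxy_a_distance(src, tgt))       # noticeably > 0: domains separable
```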
1702.08811 | 2592141621 | The learning of domain-invariant representations in the context of domain adaptation with neural networks is considered. We propose a new regularization method that minimizes the discrepancy between domain-specific latent feature representations directly in the hidden activation space. Although some standard distribution matching approaches exist that can be interpreted as the matching of weighted sums of moments, e.g. Maximum Mean Discrepancy (MMD), an explicit order-wise matching of higher order moments has not been considered before. We propose to match the higher order central moments of probability distributions by means of order-wise moment differences. Our model does not require computationally expensive distance and kernel matrix computations. We utilize the equivalent representation of probability distributions by moment sequences to define a new distance function, called Central Moment Discrepancy (CMD). We prove that CMD is a metric on the set of probability distributions on a compact interval. We further prove that convergence of probability distributions on compact intervals w.r.t. the new metric implies convergence in distribution of the respective random variables. We test our approach on two different benchmark data sets for object recognition (Office) and sentiment analysis of product reviews (Amazon reviews). CMD achieves a new state-of-the-art performance on most domain adaptation tasks of Office and outperforms networks trained with MMD, Variational Fair Autoencoders and Domain Adversarial Neural Networks on Amazon reviews. In addition, a post-hoc parameter sensitivity analysis shows that the new approach is stable w.r.t. parameter changes in a certain interval. The source code of the experiments is publicly available. | The two approaches, MMD and the Proxy @math-distance, have in common that they do not minimize the domain discrepancy explicitly in the hidden activation space. In contrast, the authors in @cite_5 do so by minimizing a modified version of the Kullback-Leibler divergence of the mean activations (MKL). That is, for samples @math, @math is the @math-th coordinate of the empirical expectation @math. This approach is fast to compute and has an explicit interpretation in the activation space. Our empirical observations (section Experiments) show that minimizing the distance between only the first moments (means) of the activation distributions can be improved by also minimizing the distance between higher order moments. | {
"cite_N": [
"@cite_5"
],
"mid": [
"2261310161"
],
"abstract": [
"Transfer learning has attracted a lot of attention in the past decade. One crucial research issue in transfer learning is how to find a good representation for instances of different domains such that the divergence between domains can be reduced with the new representation. Recently, deep learning has been proposed to learn more robust or higherlevel features for transfer learning. However, to the best of our knowledge, most of the previous approaches neither minimize the difference between domains explicitly nor encode label information in learning the representation. In this paper, we propose a supervised representation learning method based on deep autoencoders for transfer learning. The proposed deep autoencoder consists of two encoding layers: an embedding layer and a label encoding layer. In the embedding layer, the distance in distributions of the embedded instances between the source and target domains is minimized in terms of KL-Divergence. In the label encoding layer, label information of the source domain is encoded using a softmax regression model. Extensive experiments conducted on three real-world image datasets demonstrate the effectiveness of our proposed method compared with several state-of-the-art baseline methods."
]
} |
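For concreteness, here is a numpy sketch of the CMD estimator as I read it from the abstract: an order-wise sum of differences of coordinate-wise central moments, normalized by powers of the interval width. The interval [a, b] and the number of moments K are assumptions.

```python
import numpy as np

def cmd(X, Y, a=0.0, b=1.0, K=5):
    """Central Moment Discrepancy between two activation samples:
    ||E[X]-E[Y]|| / (b-a) + sum_{k=2..K} ||c_k(X)-c_k(Y)|| / (b-a)**k,
    where c_k is the coordinate-wise k-th central moment."""
    mx, my = X.mean(axis=0), Y.mean(axis=0)
    d = np.linalg.norm(mx - my) / (b - a)
    for k in range(2, K + 1):
        d += np.linalg.norm(((X - mx) ** k).mean(axis=0)
                            - ((Y - my) ** k).mean(axis=0)) / (b - a) ** k
    return d

rng = np.random.default_rng(0)
X = rng.beta(2, 5, (1000, 32))          # sigmoid-like activations in [0, 1]
Y = rng.beta(5, 2, (1000, 32))
print(cmd(X, X[:500]), cmd(X, Y))       # small vs. large discrepancy
```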
1702.08726 | 2955744109 | We introduce Stacked Thompson Bandits (STB) for efficiently generating plans that are likely to satisfy a given bounded temporal logic requirement. STB uses a simulation for evaluation of plans, and takes a Bayesian approach to using the resulting information to guide its search. In particular, we show that stacking multiarmed bandits and using Thompson sampling to guide the action selection process for each bandit enables STB to generate plans that satisfy requirements with a high probability while only searching a fraction of the search space. | Our work on STB is strongly influenced by existing open-loop planners. In particular, Cross Entropy Open Loop Planning is an approach for planning in large-scale continuous MDPs @cite_1. It is, however, not applicable to discrete domains, unlike STB. Recently, cross entropy planning has been used to search for sequences that satisfy a given temporal logic formula @cite_9 in a continuous motion planning setting. | {
"cite_N": [
"@cite_9",
"@cite_1"
],
"mid": [
"1986417217",
"2197494948"
],
"abstract": [
"This paper presents a method for optimal trajectory generation for discrete-time nonlinear systems with linear temporal logic (LTL) task specifications. Our approach is based on recent advances in stochastic optimization algorithms for optimal trajectory generation. These methods rely on estimation of the rare event of sampling optimal trajectories, which is achieved by incrementally improving a sampling distribution so as to minimize the cross-entropy. A key component of these stochastic optimization algorithms is determining whether or not a trajectory is collision-free. We generalize this collision checking to efficiently verify whether or not a trajectory satisfies a LTL formula. Interestingly, this verification can be done in time polynomial in the length of the LTL formula and the trajectory. We also propose a method for efficiently re-using parts of trajectories that only partially satisfy the specification, instead of simply discarding the entire sample. Our approach is demonstrated through numerical experiments involving Dubins car and a generic point-mass model subject to complex temporal logic task specifications.",
"We focus on effective sample-based planning in the face of underactuation, high-dimensionality, drift, discrete system changes, and stochasticity. These are hallmark challenges for important problems, such as humanoid locomotion. In order to ensure broad applicability, we assume domain expertise is minimal and limited to a generative model. In order to make the method responsive, computational costs that scale linearly with the amount of samples taken from the generative model are required. We bring to bear a concrete method that satisfies all these requirements; it is a receding-horizon open-loop planner that employs cross-entropy optimization for policy construction. In simulation, we empirically demonstrate near-optimal decisions in a small domain and effective locomotion in several challenging humanoid control tasks."
]
} |
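A discrete toy version of the cross-entropy open-loop planning idea referenced above can clarify the mechanics (the cited method targets continuous MDPs, so the categorical parameterization here is my simplification): sample action sequences from per-step distributions, keep the elite plans, and refit.

```python
import numpy as np

def ce_plan(simulate, n_actions, horizon, iters=30, pop=200, elite_frac=0.1):
    """Cross-entropy open-loop planning over discrete action sequences:
    sample plans from per-step categorical distributions, score them with
    the simulator, and refit the distributions to the elite plans."""
    rng = np.random.default_rng(0)
    probs = np.full((horizon, n_actions), 1.0 / n_actions)
    n_elite = max(1, int(pop * elite_frac))
    for _ in range(iters):
        plans = np.stack([[rng.choice(n_actions, p=probs[t])
                           for t in range(horizon)] for _ in range(pop)])
        scores = np.array([simulate(p) for p in plans])
        elites = plans[np.argsort(scores)[-n_elite:]]
        for t in range(horizon):
            counts = np.bincount(elites[:, t], minlength=n_actions)
            probs[t] = (counts + 1e-3) / (counts.sum() + 1e-3 * n_actions)
    return probs.argmax(axis=1)

# toy objective: reward plans that alternate actions 0 and 1
score = lambda p: sum(int(p[t] == t % 2) for t in range(len(p)))
print(ce_plan(score, n_actions=3, horizon=6))   # -> [0 1 0 1 0 1]
```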
1702.08726 | 2955744109 | We introduce Stacked Thompson Bandits (STB) for efficiently generating plans that are likely to satisfy a given bounded temporal logic requirement. STB uses a simulation for evaluation of plans, and takes a Bayesian approach to using the resulting information to guide its search. In particular, we show that stacking multiarmed bandits and using Thompson sampling to guide the action selection process for each bandit enables STB to generate plans that satisfy requirements with a high probability while only searching a fraction of the search space. | STB is subtly related to statistical model checking (SMC) @cite_2 @cite_13, and Bayesian statistical model checking in particular @cite_16 @cite_6 @cite_17. Here, the setting is to guarantee a minimal required satisfaction probability for a given, fixed sequence of actions. SMC approaches are able to provide such a result, potentially with a quantifiable confidence. STB is not able to provide a quantification of satisfaction probability, as it samples every plan only once. Also, the distributions maintained in the stacked bandits do not provide an accurate absolute quantification due to the sampling nature of the search and the corresponding concept drift. On the other hand, STB is able to generate plans with high satisfaction probability, whereas SMC can only determine this probability. In practice, a combination of both approaches is possible and seems useful. | {
"cite_N": [
"@cite_6",
"@cite_2",
"@cite_16",
"@cite_13",
"@cite_17"
],
"mid": [
"1569248486",
"2007072455",
"339627914",
"2950142800",
""
],
"abstract": [
"Recently, there has been considerable interest in the use of Model Checking for Systems Biology. Unfortunately, the state space of stochastic biological models is often too large for classical Model Checking techniques. For these models, a statistical approach to Model Checking has been shown to be an effective alternative. Extending our earlier work, we present the first algorithm for performing statistical Model Checking using Bayesian Sequential Hypothesis Testing. We show that our Bayesian approach outperforms current statistical Model Checking techniques, which rely on tests from Classical (aka Frequentist) statistics, by requiring fewer system simulations. Another advantage of our approach is the ability to incorporate prior Biological knowledge about the model being verified. We demonstrate our algorithm on a variety of models from the Systems Biology literature and show that it enables faster verification than state-of-the-art techniques, even when no prior knowledge is available.",
"Numerical analysis based on uniformisation and statistical techniques based on sampling and simulation are two distinct approaches for transient analysis of stochastic systems. We compare the two solution techniques when applied to the verification of time-bounded until formulae in the temporal stochastic logic CSL, both theoretically and through empirical evaluation on a set of case studies. Our study differs from most previous comparisons of numerical and statistical approaches in that CSL model checking is a hypothesis-testing problem rather than a parameter-estimation problem. We can therefore rely on highly efficient sequential acceptance sampling tests, which enables statistical solution techniques to quickly return a result with some uncertainty. We also propose a novel combination of the two solution techniques for verifying CSL queries with nested probabilistic operators.",
"We introduce the concept of generalized probabilistic queries in Dynamic Bayesian Networks (DBN) — computing P (φ1|φ2), where φi is a formula in temporal logic encoding an equivalence class of trajectories through the variables of the model. Generalized queries include as special cases traditional query types for DBNs (i.e., filtering, smoothing, prediction, and classification), but can also be used to express inference problems that are either impossible, or impractical to answer using traditional algorithms for inference in DBNs. We then discuss the relationship between answering generalized queries and the Probabilistic Model Checking Problem and introduce two novel algorithms for efficiently estimating P (φ1|φ2) in a Bayesian fashion. Finally, we demonstrate our method by answering generalized queries that arise in the context of critical care medicine. Specifically, we show that our approach can be used to make treatment decisions for a cohort of 1,000 simulated sepsis patients, and that it outperforms Support Vector Machines, Neural Networks, and Random Forests on the same task.",
"Quantitative properties of stochastic systems are usually specified in logics that allow one to compare the measure of executions satisfying certain temporal properties with thresholds. The model checking problem for stochastic systems with respect to such logics is typically solved by a numerical approach that iteratively computes (or approximates) the exact measure of paths satisfying relevant subformulas; the algorithms themselves depend on the class of systems being analyzed as well as the logic used for specifying the properties. Another approach to solve the model checking problem is to the system for finitely many runs, and use to infer whether the samples provide a evidence for the satisfaction or violation of the specification. In this short paper, we survey the statistical approach, and outline its main advantages in terms of efficiency, uniformity, and simplicity.",
""
]
} |
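The following is a schematic reconstruction of STB itself from the abstract, not the authors' code: one Beta-Bernoulli bandit per plan step, Thompson sampling for action selection, and the Boolean simulation outcome updating every bandit along the sampled plan. The budget, priors and toy requirement are assumptions.

```python
import numpy as np

def stb(simulate, n_actions, horizon, budget=2000, seed=0):
    """Stacked Thompson Bandits sketch: one Beta-Bernoulli bandit per step.

    simulate(plan) -> True iff the bounded temporal-logic requirement held
    on this simulated run; the Boolean outcome updates every step's bandit."""
    rng = np.random.default_rng(seed)
    alpha = np.ones((horizon, n_actions))   # Beta posterior parameters
    beta = np.ones((horizon, n_actions))
    for _ in range(budget):
        # Thompson sampling: draw one satisfaction-probability per arm
        plan = [int(np.argmax(rng.beta(alpha[t], beta[t])))
                for t in range(horizon)]
        ok = bool(simulate(plan))
        for t, a in enumerate(plan):
            alpha[t, a] += ok
            beta[t, a] += 1 - ok
    return (alpha / (alpha + beta)).argmax(axis=1)   # most promising plan

# toy requirement: satisfied iff action 2 is taken at every step,
# subject to 10% simulation noise
req = lambda p: all(a == 2 for a in p) and np.random.rand() > 0.1
print(stb(req, n_actions=3, horizon=4))   # -> [2 2 2 2] with high probability
```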
1702.08798 | 2949440669 | Hashing has played a pivotal role in large-scale image retrieval. With the development of Convolutional Neural Networks (CNNs), hashing learning has shown great promise. But existing methods are mostly tuned for classification and are not optimized for retrieval tasks, especially instance-level retrieval. In this study, we propose a novel hashing method for large-scale image retrieval. Considering the difficulty of obtaining labeled datasets for large-scale image retrieval, we propose a novel CNN-based unsupervised hashing method, namely Unsupervised Triplet Hashing (UTH). The unsupervised hashing network is designed under the following three principles: 1) more discriminative representations for image retrieval; 2) minimum quantization loss between the original real-valued feature descriptors and the learned hash codes; 3) maximum information entropy for the learned hash codes. Extensive experiments on the CIFAR-10, MNIST and In-shop datasets have shown that UTH outperforms several state-of-the-art unsupervised hashing methods in terms of retrieval accuracy. | Recently, deep learning has achieved explosive success in pattern recognition, including image classification, segmentation and learning-based hashing for fast image retrieval. Guo @cite_26 propose a straightforward CNN-based hashing method: they quantize the activations of a fully connected layer at threshold @math and take the binary result as hash codes. Liong @cite_22 present a framework to learn binary codes by seeking multiple hierarchical non-linear transformations, so that the nonlinear relationship of samples can be well exploited. Xia @cite_18 present a framework to automatically learn a good image representation tailored to hashing as well as a set of hash functions. Yao @cite_23 propose a co-training hashing network that jointly learns projections from image representations to hash codes and a classifier. | {
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_22",
"@cite_23"
],
"mid": [
"",
"1756600408",
"1956333070",
"2531549126"
],
"abstract": [
"",
"Along with data on the web increasing dramatically, hashing is becoming more and more popular as a method of approximate nearest neighbor search. Previous supervised hashing methods utilized similarity dissimilarity matrix to get semantic information. But the matrix is not easy to construct for a new dataset. Rather than to reconstruct the matrix, we proposed a straightforward CNN-based hashing method, i.e. binarilizing the activations of a fully connected layer with threshold 0 and taking the binary result as hash codes. This method achieved the best performance on CIFAR-10 and was comparable with the state-of-the-art on MNIST. And our experiments on CIFAR-10 suggested that the signs of activations may carry more information than the relative values of activations between samples, and that the co-adaption between feature extractor and hash functions is important for hashing.",
"In this paper, we propose a new deep hashing (DH) approach to learn compact binary codes for large scale visual search. Unlike most existing binary codes learning methods which seek a single linear projection to map each sample into a binary vector, we develop a deep neural network to seek multiple hierarchical non-linear transformations to learn these binary codes, so that the nonlinear relationship of samples can be well exploited. Our model is learned under three constraints at the top layer of the deep network: 1) the loss between the original real-valued feature descriptor and the learned binary vector is minimized, 2) the binary codes distribute evenly on each bit, and 3) different bits are as independent as possible. To further improve the discriminative power of the learned binary codes, we extend DH into supervised DH (SDH) by including one discriminative term into the objective function of DH which simultaneously maximizes the inter-class variations and minimizes the intra-class variations of the learned binary codes. Experimental results show the superiority of the proposed approach over the state-of-the-arts.",
"Hashing techniques have been intensively investigated for large scale vision applications. Recent research has shown that leveraging supervised information can lead to high quality hashing. However, most existing supervised hashing methods only construct similarity-preserving hash codes. Observing that semantic structures carry complementary information, we propose the idea of cotraining for hashing, by jointly learning projections from image representations to hash codes and classification. Specifically, a novel deep semantic-preserving and ranking-based hashing (DSRH) architecture is presented, which consists of three components: a deep CNN for learning image representations, a hash stream of a binary mapping layer by evenly dividing the learnt representations into multiple bags and encoding each bag into one hash bit, and a classification stream. Mean-while, our model is learnt under two constraints at the top loss layer of hash stream: a triplet ranking loss and orthogonality constraint. The former aims to preserve the relative similarity ordering in the triplets, while the latter makes different hash bit as independent as possible. We have conducted experiments on CIFAR-10 and NUS-WIDE image benchmarks, demonstrating that our approach can provide superior image search accuracy than other state-of-the-art hashing techniques."
]
} |
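The simplest scheme in the paragraph above, thresholding fully connected activations into hash codes and ranking by Hamming distance, can be sketched directly; the random features below merely stand in for CNN activations.

```python
import numpy as np

def binarize(activations, threshold=0.0):
    """Hash codes by thresholding penultimate-layer activations."""
    return (activations > threshold).astype(np.uint8)

def hamming_rank(query_code, db_codes):
    """Database indices sorted by Hamming distance to the query code."""
    return np.argsort((db_codes != query_code).sum(axis=1))

rng = np.random.default_rng(0)
db_feats = rng.standard_normal((10000, 48))   # stand-in for fc activations
db_codes = binarize(db_feats)
query = binarize(db_feats[123] + 0.05 * rng.standard_normal(48))
print(hamming_rank(query, db_codes)[:5])      # index 123 should rank first
```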
1702.08568 | 2592440977 | For years security machine learning research has promised to obviate the need for signature-based detection by automatically learning to detect indicators of attack. Unfortunately, this vision hasn't come to fruition: in fact, developing and maintaining today's security machine learning systems can require engineering resources that are comparable to those of signature-based detection systems, due in part to the need to develop and continuously tune the "features" these machine learning systems look at as attacks evolve. Deep learning, a subfield of machine learning, promises to change this by operating on raw input signals and automating the process of feature design and extraction. In this paper we propose the eXpose neural network, which uses a deep learning approach we have developed to take generic, raw short character strings as input (a common case for security inputs, which include artifacts like potentially malicious URLs, file paths, named pipes, named mutexes, and registry keys), and learns to simultaneously extract features and classify using character-level embeddings and a convolutional neural network. In addition to completely automating the feature design and extraction process, eXpose outperforms baselines based on manual feature extraction on all of the intrusion detection problems we tested it on, yielding a 5%-10% detection rate gain at a 0.1% false positive rate compared to these baselines. | A number of previous works on machine-learning-based behavioral detection of malware are related to the automatic classification of individual file paths or registry keys. In general, previous behavioral malware detection methods have focused on making detections on the basis of sequences of observed process or operating-system-level events. For example, @cite_24 proposes a logistic regression-based method for detecting malware infections based on n-grams of audit log event observations. Relatedly, @cite_3 proposes to use an anomaly detection approach on sequences of registry accesses to infer whether a host has been compromised. @cite_8 surveys a wide variety of behavioral malware detection techniques, all of which perform manual feature engineering on collections of events to infer whether or not dynamically executed binaries are malicious or benign. | {
"cite_N": [
"@cite_24",
"@cite_3",
"@cite_8"
],
"mid": [
"1985663105",
"1647135810",
"2085807744"
],
"abstract": [
"As antivirus and network intrusion detection systems have increasingly proven insufficient to detect advanced threats, large security operations centers have moved to deploy endpoint-based sensors that provide deeper visibility into low-level events across their enterprises. Unfortunately, for many organizations in government and industry, the installation, maintenance, and resource requirements of these newer solutions pose barriers to adoption and are perceived as risks to organizations' missions. To mitigate this problem we investigated the utility of agentless detection of malicious endpoint behavior, using only the standard built-in Windows audit logging facility as our signal. We found that Windows audit logs, while emitting manageable sized data streams on the endpoints, provide enough information to allow robust detection of malicious behavior. Audit logs provide an effective, low-cost alternative to deploying additional expensive agent-based breach detection systems in many government and industrial settings, and can be used to detect, in our tests, 83 percent of malware samples with a 0.1 false positive rate. They can also supplement already existing host signature-based antivirus solutions, like Kaspersky, Symantec, and McAfee, detecting, in our testing environment, 78 of malware missed by those antivirus systems.",
"We present a host-based intrusion detection system (IDS) for Microsoft Windows. The core of the system is an algorithm that detects attacks on a host machine by looking for anomalous accesses to the Windows Registry. The key idea is to first train a model of normal registry behavior on a windows host, and use this model to detect abnormal registry accesses at run-time. The normal model is trained using clean (attack-free) data. At run-time the model is used to check each access to the registry in real time to determine whether or not the behavior is abnormal and (possibly) corresponds to an attack. The system is effective in detecting the actions of malicious software while maintaining a low rate of false alarms",
"The increase of malware that are exploiting the Internet daily has become a serious threat. The manual heuristic inspection of malware analysis is no longer considered effective and efficient compared against the high spreading rate of malware. Hence, automated behavior-based malware detection using machine learning techniques is considered a profound solution. The behavior of each malware on an emulated (sandbox) environment will be automatically analyzed and will generate behavior reports. These reports will be preprocessed into sparse vector models for further machine learning (classification). The classifiers used in this research are k-Nearest Neighbors (kNN), Naive Bayes, J48 Decision Tree, Support Vector Machine (SVM), and Multilayer Perceptron Neural Network (MlP). Based on the analysis of the tests and experimental results of all the 5 classifiers, the overall best performance was achieved by J48 decision tree with a recall of 95.9 , a false positive rate of 2.4 , a precision of 97.3 , and an accuracy of 96.8 . In summary, it can be concluded that a proof-of-concept based on automatic behavior-based malware analysis and the use of machine learning techniques could detect malware quite effectively and efficiently."
]
} |
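The audit-log detector summarized above (logistic regression over event n-grams) has a natural scikit-learn sketch; the event names are invented for illustration and do not come from the cited paper.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# toy event streams; real Windows audit logs look quite different
benign = ["proc_start file_read file_write proc_exit"] * 50
malicious = ["proc_start reg_write proc_inject net_connect"] * 50
labels = np.r_[np.zeros(50), np.ones(50)]

# n-grams (here 1-2) over the event stream, as in the audit-log detector
vec = CountVectorizer(ngram_range=(1, 2), token_pattern=r"\S+")
X = vec.fit_transform(benign + malicious)
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(vec.transform(["proc_start proc_inject net_connect"])))
```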
1702.08568 | 2592440977 | For years security machine learning research has promised to obviate the need for signature-based detection by automatically learning to detect indicators of attack. Unfortunately, this vision hasn't come to fruition: in fact, developing and maintaining today's security machine learning systems can require engineering resources that are comparable to those of signature-based detection systems, due in part to the need to develop and continuously tune the "features" these machine learning systems look at as attacks evolve. Deep learning, a subfield of machine learning, promises to change this by operating on raw input signals and automating the process of feature design and extraction. In this paper we propose the eXpose neural network, which uses a deep learning approach we have developed to take generic, raw short character strings as input (a common case for security inputs, which include artifacts like potentially malicious URLs, file paths, named pipes, named mutexes, and registry keys), and learns to simultaneously extract features and classify using character-level embeddings and a convolutional neural network. In addition to completely automating the feature design and extraction process, eXpose outperforms baselines based on manual feature extraction on all of the intrusion detection problems we tested it on, yielding a 5%-10% detection rate gain at a 0.1% false positive rate compared to these baselines. | Unlike individual file and registry writes, identifying malicious URLs is a better-studied problem in the security detection literature. Proposed malicious URL detection approaches have tended to either exclusively use URL strings as their input or utilize both URL strings and supplementary information like website registration services, website content, and network reputation @cite_15. In contrast to work that uses both input URLs and auxiliary information to detect malicious URLs, our work relies solely on URL input strings, making it easier to deploy. | {
"cite_N": [
"@cite_15"
],
"mid": [
"2028223155"
],
"abstract": [
"This article surveys the literature on the detection of phishing attacks. Phishing attacks target vulnerabilities that exist in systems due to the human factor. Many cyber attacks are spread via mechanisms that exploit weaknesses found in end-users, which makes users the weakest element in the security chain. The phishing problem is broad and no single silver-bullet solution exists to mitigate all the vulnerabilities effectively, thus multiple techniques are often implemented to mitigate specific attacks. This paper aims at surveying many of the recently proposed phishing mitigation techniques. A high-level overview of various categories of phishing mitigation techniques is also presented, such as: detection, offensive defense, correction, and prevention, which we belief is critical to present where the phishing detection techniques fit in the overall mitigation process."
]
} |
1702.08568 | 2592440977 | For years security machine learning research has promised to obviate the need for signature-based detection by automatically learning to detect indicators of attack. Unfortunately, this vision hasn't come to fruition: in fact, developing and maintaining today's security machine learning systems can require engineering resources that are comparable to those of signature-based detection systems, due in part to the need to develop and continuously tune the "features" these machine learning systems look at as attacks evolve. Deep learning, a subfield of machine learning, promises to change this by operating on raw input signals and automating the process of feature design and extraction. In this paper we propose the eXpose neural network, which uses a deep learning approach we have developed to take generic, raw short character strings as input (a common case for security inputs, which include artifacts like potentially malicious URLs, file paths, named pipes, named mutexes, and registry keys), and learns to simultaneously extract features and classify using character-level embeddings and a convolutional neural network. In addition to completely automating the feature design and extraction process, eXpose outperforms baselines based on manual feature extraction on all of the intrusion detection problems we tested it on, yielding a 5%-10% detection rate gain at a 0.1% false positive rate compared to these baselines. | With respect to the detection mechanism used in previous URL detection work, the simplest proposed approaches have involved blacklists, which can be collected using manual labeling, user feedback, client honeypots, and other heuristics @cite_21. While blacklists have a very low false positive rate, they are also very brittle and thus cannot generalize to previously unseen URL strings @cite_27. To address these limitations, statistical approaches, such as machine-learning- or similarity-based URL detection, have been proposed @cite_15. Unfortunately, manually discovering potentially useful features is time-consuming and requires constant adaptation to evolving obfuscation techniques, which limits the achievable accuracy of the detectors. In contrast to work that requires manual feature extraction from URLs to make detections, our work automates this feature extraction process. | {
"cite_N": [
"@cite_27",
"@cite_21",
"@cite_15"
],
"mid": [
"2160353917",
"2057871674",
"2028223155"
],
"abstract": [
"In this paper, we study the eectiveness of phishing blacklists. We used 191 fresh phish that were less than 30 minutes old to conduct two tests on eight anti-phishing toolbars. We found that 63 of the phishing campaigns in our dataset lasted less than two hours. Blacklists were ineective when protecting users initially, as most of them caught less than 20 of phish at hour zero. We also found that blacklists were updated at dierent speeds, and varied in coverage, as 47 - 83 of phish appeared on blacklists 12 hours from the initial test. We found that two tools using heuristics to complement blacklists caught signicantly more phish initially than those using only blacklists. However, it took a long time for phish detected by heuristics to appear on blacklists. Finally, we tested the toolbars on a set of 13,458 legitimate URLs for false positives, and did not nd any instance of mislabeling for either blacklists or heuristics. We present these ndings and discuss ways in which anti-phishing tools can be improved.",
"Detecting malicious URLs is an essential task in network security intelligence. In this paper, we make two new contributions beyond the state-of-the-art methods on malicious URL detection. First, instead of using any pre-defined features or fixed delimiters for feature selection, we propose to dynamically extract lexical patterns from URLs. Our novel model of URL patterns provides new flexibility and capability on capturing malicious URLs algorithmically generated by malicious programs. Second, we develop a new method to mine our novel URL patterns, which are not assembled using any pre-defined items and thus cannot be mined using any existing frequent pattern mining methods. Our extensive empirical study using the real data sets from Fortinet, a leader in the network security industry, clearly shows the effectiveness and efficiency of our approach.",
"This article surveys the literature on the detection of phishing attacks. Phishing attacks target vulnerabilities that exist in systems due to the human factor. Many cyber attacks are spread via mechanisms that exploit weaknesses found in end-users, which makes users the weakest element in the security chain. The phishing problem is broad and no single silver-bullet solution exists to mitigate all the vulnerabilities effectively, thus multiple techniques are often implemented to mitigate specific attacks. This paper aims at surveying many of the recently proposed phishing mitigation techniques. A high-level overview of various categories of phishing mitigation techniques is also presented, such as: detection, offensive defense, correction, and prevention, which we belief is critical to present where the phishing detection techniques fit in the overall mitigation process."
]
} |
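A minimal PyTorch sketch of the eXpose-style model described in the abstract, character embeddings feeding parallel 1-D convolutions with global max pooling, follows; the vocabulary size, filter counts and kernel widths are placeholder hyperparameters, not the paper's.

```python
import torch
import torch.nn as nn

class CharConvNet(nn.Module):
    """eXpose-style sketch: character embeddings feeding parallel 1-D
    convolutions of several widths, global max pooling, and a logistic head."""
    def __init__(self, vocab=256, emb=32, filters=64, widths=(2, 3, 4, 5)):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.convs = nn.ModuleList(nn.Conv1d(emb, filters, w) for w in widths)
        self.head = nn.Linear(filters * len(widths), 1)

    def forward(self, x):                   # x: (batch, max_len) byte ids
        h = self.emb(x).transpose(1, 2)     # -> (batch, emb, max_len)
        pooled = [torch.relu(c(h)).max(dim=2).values for c in self.convs]
        return torch.sigmoid(self.head(torch.cat(pooled, dim=1))).squeeze(1)

def encode(s, max_len=200):
    ids = list(s.encode("utf-8"))[:max_len]
    return torch.tensor(ids + [0] * (max_len - len(ids)))

net = CharConvNet()
batch = torch.stack([encode("http://evil.example/awf9x?q=1"),
                     encode("https://www.example.com/index.html")])
print(net(batch))                           # untrained scores in (0, 1)
```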
1702.08503 | 2951207584 | We show that the standard stochastic gradient descent (SGD) algorithm is guaranteed to learn, in polynomial time, a function that is competitive with the best function in the conjugate kernel space of the network, as defined in Daniely, Frostig and Singer. The result holds for log-depth networks from a rich family of architectures. To the best of our knowledge, it is the first polynomial-time guarantee for the standard neural network learning algorithm for networks of depth more than two. As corollaries, it follows that for neural networks of any depth between @math and @math, SGD is guaranteed to learn, in polynomial time, constant degree polynomials with polynomially bounded coefficients. Likewise, it follows that SGD on large enough networks can learn any continuous function (not in polynomial time), complementing classical expressivity results. | As mentioned earlier, our paper builds on Daniely, Frostig and Singer, who developed the association of kernels to neural networks which we rely on. Several previous papers investigated such associations, but in more restricted settings (i.e., for fewer architectures). Some of those papers also provide measure-of-concentration results, which show that w.h.p. the random initialization of the network's weights is rich enough to approximate the functions in the corresponding kernel space. As a result, these papers provide polynomial-time guarantees for the variant of SGD in which only the last layer is trained. We remark that, with the exception of @cite_0, those results apply just to depth-2 networks. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2962939986"
],
"abstract": [
"We develop a general duality between neural networks and compositional kernel Hilbert spaces. We introduce the notion of a computation skeleton, an acyclic graph that succinctly describes both a family of neural networks and a kernel space. Random neural networks are generated from a skeleton through node replication followed by sampling from a normal distribution to assign weights. The kernel space consists of functions that arise by compositions, averaging, and non-linear transformations governed by the skeleton's graph topology and activation functions. We prove that random networks induce representations which approximate the kernel space. In particular, it follows that random weight initialization often yields a favorable starting point for optimization despite the worst-case intractability of training neural networks."
]
} |
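The "train only the last layer" guarantee discussed above is easy to emulate: freeze a random ReLU layer (the conjugate-kernel approximation) and fit a linear model on top. The widths, ridge penalty and polynomial target below are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# target: a degree-2 polynomial, the kind of function covered by the
# conjugate-kernel guarantees discussed above
X = rng.standard_normal((2000, 10))
y = X[:, 0] * X[:, 1] + 0.5 * X[:, 2] ** 2

# frozen random first layer (ReLU features), trained linear last layer
W = rng.standard_normal((10, 2000)) / np.sqrt(10)
feats = np.maximum(X @ W, 0.0)
model = Ridge(alpha=1.0).fit(feats[:1500], y[:1500])
print("held-out R^2:", model.score(feats[1500:], y[1500:]))
```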
1702.08289 | 2952235235 | The search for spanning trees with interesting disjunction properties has led to the introduction of edge-disjoint spanning trees, independent spanning trees and, more recently, completely independent spanning trees. We group together these notions by defining (i, j)-disjoint spanning trees, where i (j, respectively) is the number of vertices (edges, respectively) that are shared by more than one tree. We illustrate how (i, j)-disjoint spanning trees provide some nuances between the existence of disjoint connected dominating sets and completely independent spanning trees. We prove that determining if there exist two (i, j)-disjoint spanning trees in a graph G is NP-complete, for every two positive integers i and j. Moreover, we prove that for squares of graphs, k-connected interval graphs, complete graphs and several grids, there exist (i, j)-disjoint spanning trees for interesting values of i and j. | Completely independent spanning trees were introduced by Hasunuma @cite_17 and have since been studied on different classes of graphs, such as underlying graphs of line graphs @cite_17, maximal planar graphs @cite_1, Cartesian products of two cycles @cite_26, complete graphs, complete bipartite and tripartite graphs @cite_27, variants of hypercubes @cite_4 @cite_16 and chordal rings @cite_18. Moreover, determining if there exist two completely independent spanning trees in a graph @math is an NP-hard problem @cite_1. Recently, sufficient conditions inspired by those for hamiltonicity have been determined in order to guarantee the existence of two completely independent spanning trees: Dirac's condition @cite_23 and Ore's condition @cite_24. Moreover, Dirac's condition has been generalized to more than two trees @cite_19 @cite_20 @cite_15 and has been independently improved @cite_20 @cite_15 for two trees. Also, a recent paper has studied the problem on the class of @math-trees, for which the authors have proven that there exist at least @math completely independent spanning trees @cite_9. | {
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_4",
"@cite_9",
"@cite_1",
"@cite_24",
"@cite_19",
"@cite_27",
"@cite_23",
"@cite_15",
"@cite_16",
"@cite_20",
"@cite_17"
],
"mid": [
"",
"2162483260",
"",
"2007726983",
"2159421784",
"1963897549",
"2170222571",
"",
"1480721107",
"",
"",
"2418021936",
""
],
"abstract": [
"",
"Let T1, T2, …, Tk be spanning trees in a graph G. If for any two vertices u, v in G, the paths from u to v in T1, T2, …, Tk are pairwise internally disjoint, then T1, T2, …, Tk are completely independent spanning trees in G. Completely independent spanning trees can be applied to fault-tolerant communication problems in interconnection networks. In this article, we show that there are two completely independent spanning trees in any torus network. Besides, we generalize the result for the Cartesian product. In particular, we show that there are two completely independent spanning trees in the Cartesian product of any 2-connected graphs. © 2011 Wiley Periodicals, Inc. NETWORKS, 2012 © 2012 Wiley Periodicals, Inc.",
"",
"",
"Let G be a graph. Let T1, T2, . . . , Tk be spanning trees in G. If for any two vertices u, v in G, the paths from u to v in T1, T2, . . . , Tk are pairwise openly disjoint, then we say that T1, T2, . . . , Tk are completely independent spanning trees in G. In this paper, we show that there are two completely independent spanning trees in any 4-connected maximal planar graph. Our proof induces a linear-time algorithm for finding such trees. Besides, we show that given a graph G, the problem of deciding whether there exist two completely independent spanning trees in G is NP-complete.",
"Abstract Two edge-disjoint spanning trees of a graph G are completely independent if the two paths connecting any two vertices of G in the two trees are internally disjoint. It has been asked whether sufficient conditions for hamiltonian graphs are also sufficient for the existence of two completely independent spanning trees (CISTs). We prove that it is true for the classical Ore-condition. That is, if G is a graph on n vertices in which each pair of non-adjacent vertices have degree-sum at least n , then G has two CISTs. It is known that the line graph of every 4-edge connected graph is hamiltonian. We prove that this is also true for CISTs: the line graph of every 4-edge connected graph has two CISTs. Thomassen conjectured that every 4-connected line graph is hamiltonian. Unfortunately, being 4-connected is not enough for the existence of two CISTs in line graphs. We prove that there are infinitely many 4-connected line graphs that do not have two CISTs.",
"",
"",
"Two spanning trees T1 and T2 of a graph G are completely independent if, for any two vertices u and v, the paths from u to v in T1 and T2 are internally disjoint. In this article, we show two sufficient conditions for the existence of completely independent spanning trees. First, we show that a graph of n vertices has two completely independent spanning trees if the minimum degree of the graph is at least . Then, we prove that the square of a 2-connected graph has two completely independent spanning trees. These conditions are known to be sufficient conditions for Hamiltonian graphs.",
"",
"",
"Completely independent spanning trees (T_1,T_2, ,T_k ) in a graph G are spanning trees in G such that for any pair of distinct vertices u and v, the k paths in the spanning trees between u and v mutually have no common edge and no common vertex except for u and v. The concept finds applications in fault-tolerant communication problems in a network. Recently, it was shown that Dirac’s condition for a graph to be hamiltonian is also a sufficient condition for a graph to have two completely independent spanning trees. In this paper, we generalize this result to three or more completely independent spanning trees. Namely, we show that for any graph G with (n 7 ) vertices, if the minimum degree of a vertex in G is at least (n-k ), where (3 k n 2 ), then there are ( n k ) completely independent spanning trees in G. Besides, we improve the lower bound of ( n 2 ) on the Dirac’s condition for completely independent spanning trees to ( n-1 2 ) except for some specific graph. Our results are theoretical ones, since these minimum degree conditions can be applied only to a very dense graph. We then present constructions of symmetric regular graphs which include optimal graphs with respect to the number of completely independent spanning trees.",
""
]
} |
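To make the notion above concrete: a known characterization states that spanning trees are completely independent exactly when they are pairwise edge-disjoint and every vertex is an internal vertex (degree at least 2) in at most one of them. The Python sketch below (using networkx; the K4 example is illustrative, not taken from the cited papers) checks that characterization directly.

```python
import itertools
import networkx as nx

def are_completely_independent(G, trees):
    """Check a known characterization: spanning trees are completely
    independent iff they are pairwise edge-disjoint and every vertex is
    internal (degree >= 2) in at most one of them."""
    # Each tree must span G.
    for T in trees:
        if not (nx.is_tree(T) and set(T.nodes) == set(G.nodes)):
            return False
    # Pairwise edge-disjoint (edges taken as unordered pairs).
    for T1, T2 in itertools.combinations(trees, 2):
        if {frozenset(e) for e in T1.edges} & {frozenset(e) for e in T2.edges}:
            return False
    # No vertex is internal in two different trees.
    return all(sum(T.degree(v) >= 2 for T in trees) <= 1 for v in G.nodes)

# Example: two completely independent spanning trees in K4.
G = nx.complete_graph(4)
T1 = nx.Graph([(0, 1), (1, 2), (2, 3)])   # internal vertices: 1, 2
T2 = nx.Graph([(2, 0), (0, 3), (3, 1)])   # internal vertices: 0, 3
assert are_completely_independent(G, [T1, T2])
```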
1702.08289 | 2952235235 | The search for spanning trees with interesting disjunction properties has led to the introduction of edge-disjoint spanning trees, independent spanning trees and more recently completely independent spanning trees. We group together these notions by defining (i, j)-disjoint spanning trees, where i (j, respectively) is the number of vertices (edges, respectively) that are shared by more than one tree. We illustrate how (i, j)-disjoint spanning trees provide some nuances between the existence of disjoint connected dominating sets and completely independent spanning trees. We prove that determining if there exist two (i, j)-disjoint spanning trees in a graph G is NP-complete, for every two positive integers i and j. Moreover, we prove that for squares of graphs, k-connected interval graphs, complete graphs and several grids, there exist (i, j)-disjoint spanning trees for interesting values of i and j. | Some subsets of vertices @math of a graph @math are @math if @math are pairwise disjoint and each subset is a connected dominating set in @math . Some works about disjoint connected dominating sets can be transcribed in terms of internally vertex-disjoint spanning trees (the disjoint connected dominating sets can be used to provide the inner vertices of internally vertex-disjoint spanning trees). The maximum number of disjoint connected dominating sets in a graph @math is the connected domatic number. This parameter is denoted by @math and was introduced by Hedetniemi and Laskar @cite_0 in 1984. An interesting result about the connected domatic number concerns planar graphs, for which Hartnell and Rall have proven that, except for @math (which has connected domatic number @math ), their connected domatic number is bounded by 3 @cite_6 . The problem of constructing a connected dominating set is often motivated by wireless ad-hoc networks @cite_7 @cite_3 , for which connected dominating sets are used to create a virtual backbone in the network. | {
"cite_N": [
"@cite_0",
"@cite_3",
"@cite_7",
"@cite_6"
],
"mid": [
"202326759",
"",
"2169848785",
"1495272862"
],
"abstract": [
"In a data processing system comprising a plurality of systems each including a plurality of console type typewriters for establishing communications pertaining to data processing operations between an operator and the system, the communication information which is entered into and derived from a central processing unit in each system by the console type typewriter is stored in a character buffer unit corresponding to the console type typewriter in a monitor transfer control unit, and a buffer scanning unit scans the character buffers to transfer the contents therein into a statistical analyzer or processing unit so that communication information may be automatically analyzed and summarized to obtain the data per day or month required for determining whether or not the data processing system has been effectively operated.",
"",
"The dominating set problem in graphs asks for a minimum size subset of vertices with the following property: each vertex is required to be either in the dominating set, or adjacent to some vertex in the dominating set. We focus on the related question of finding a connected dominating set of minimum size, where the graph induced by vertices in the dominating set is required to be connected as well. This problem arises in network testing, as well as in wireless communication. Two polynomial time algorithms that achieve approximation factors of 2H(Δ)+2 and H(Δ)+2 are presented, where Δ is the maximum degree and H is the harmonic function. This question also arises in relation to the traveling tourist problem, where one is looking for the shortest tour such that each vertex is either visited or has at least one of its neighbors visited. We also consider a generalization of the problem to the weighted case, and give an algorithm with an approximation factor of (cn+1) n where cn ln k is the approximation factor for the node weighted Steiner tree problem (currently cn = 1.6103 ). We also consider the more general problem of finding a connected dominating set of a specified subset of vertices and provide a polynomial time algorithm with a (c+1) H(Δ) +c-1 approximation factor, where c is the Steiner approximation ratio for graphs (currently c = 1.644 ).",
"A dominating set in a graph G is a connected dominating set of G if it induces a connected subgraph of G. The connected domatic number of G is the maximum number of pairwise disjoint, connected dominating sets in V(G). We establish a sharp lower bound on the number of edges in a connected graph with a given order and given connected domatic number. We also show that a planar graph has connected domatic number at most 4 and give a characterization of planar graphs having connected domatic number 3."
]
} |
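As a concrete companion to the definition above, the following Python sketch (using networkx; the 4-cycle example is illustrative) verifies that given vertex subsets are pairwise disjoint connected dominating sets, so their count lower-bounds the connected domatic number.

```python
import itertools
import networkx as nx

def is_connected_dominating_set(G, S):
    """S is a connected dominating set if G[S] is connected and every
    vertex of G is in S or adjacent to some vertex of S."""
    S = set(S)
    if not S or not nx.is_connected(G.subgraph(S)):
        return False
    dominated = set(S)
    for v in S:
        dominated.update(G.neighbors(v))
    return dominated == set(G.nodes)

def disjoint_cds_witness(G, subsets):
    """Pairwise disjoint connected dominating sets; their number is a
    lower bound on the connected domatic number of G."""
    if any(set(a) & set(b) for a, b in itertools.combinations(subsets, 2)):
        return False
    return all(is_connected_dominating_set(G, S) for S in subsets)

# Example: the 4-cycle has connected domatic number at least 2.
C4 = nx.cycle_graph(4)
assert disjoint_cds_witness(C4, [{0, 1}, {2, 3}])
```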
1702.08398 | 2594154005 | We introduce new families of Integral Probability Metrics (IPM) for training Generative Adversarial Networks (GAN). Our IPMs are based on matching statistics of distributions embedded in a finite dimensional feature space. Mean and covariance feature matching IPMs allow for stable training of GANs, which we will call McGan. McGan minimizes a meaningful loss between distributions. | c) Building on the pioneering work of @cite_4 , @cite_17 suggested learning the discriminator with the binary cross-entropy criterion of GAN while learning the generator with @math mean feature matching. The main difference of our IPM @math GAN is that both the "discriminator" and the "generator" are learned using the mean feature matching criterion, with additional constraints on @math . | {
"cite_N": [
"@cite_4",
"@cite_17"
],
"mid": [
"2125389028",
"2432004435"
],
"abstract": [
"Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels.",
"We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework. We focus on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic. Unlike most work on generative models, our primary goal is not to train a model that assigns high likelihood to test data, nor do we require the model to be able to learn well without using any labels. Using our new techniques, we achieve state-of-the-art results in semi-supervised classification on MNIST, CIFAR-10 and SVHN. The generated images are of high quality as confirmed by a visual Turing test: our model generates MNIST samples that humans cannot distinguish from real data, and CIFAR-10 samples that yield a human error rate of 21.3 . We also present ImageNet samples with unprecedented resolution and show that our methods enable the model to learn recognizable features of ImageNet classes."
]
} |
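A minimal sketch of the idea in the passage above, assuming a feature map `phi` and generator `G` (both hypothetical names): a single l2 mean feature matching criterion that, unlike the GAN cross-entropy setup, is shared by both players. This illustrates the shape of the objective only, not the authors' exact architecture or constraint set.

```python
import torch

def mean_feature_matching_loss(phi, x_real, x_fake):
    """l2 IPM surrogate: || E[phi(x_real)] - E[phi(x_fake)] ||_2,
    estimated over a minibatch."""
    mu_real = phi(x_real).mean(dim=0)
    mu_fake = phi(x_fake).mean(dim=0)
    return torch.norm(mu_real - mu_fake, p=2)

# Training loop shape (in comments): BOTH players use this single
# criterion, unlike the cross-entropy discriminator setup:
#   loss = mean_feature_matching_loss(phi, x_real, G(z))
#   - update phi to INCREASE loss, subject to a constraint on phi
#   - update G to DECREASE loss
```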
1702.08334 | 2592379963 | This paper considers a class of reinforcement-learning that belongs to the family of Learning Automata and provides a stochastic-stability analysis in strategic-form games. For this class of dynamics, convergence to pure Nash equilibria has been demonstrated only for the fine class of potential games. Prior work primarily provides convergence properties of the dynamics through stochastic approximations, where the asymptotic behavior can be associated with the limit points of an ordinary-differential equation (ODE). However, analyzing global convergence through the ODE-approximation requires the existence of a Lyapunov or a potential function, which naturally restricts the applicability of these algorithms to a fine class of games. To overcome these limitations, this paper introduces an alternative framework for analyzing stochastic-stability that is based upon an explicit characterization of the (unique) invariant probability measure of the induced Markov chain. | In prior work on reinforcement learning in games, the analysis has been restricted to decreasing step-size sequences @math and @math . More specifically, in @cite_6 , the step-size sequence of agent @math is @math for some positive constant @math and for @math (in place of the constant step size @math used here). A comparable model is also used by @cite_0 , with @math , where @math is the accumulated benefit of agent @math up to time @math , which gives rise to an urn process @cite_5 . Some similarities are also shared with the Cross learning model of @cite_15 , where @math and @math , and its modification presented in @cite_1 , where @math is instead assumed to be decreasing. | {
"cite_N": [
"@cite_1",
"@cite_6",
"@cite_0",
"@cite_5",
"@cite_15"
],
"mid": [
"",
"2030329396",
"2096702014",
"1564229172",
"1995622844"
],
"abstract": [
"",
"This paper explores the idea of constructing theoretical economic agents that behave like actual human agents and using them in neoclassical economic models. It does this in a repeated-choice setting by postulating \"artificial agents\" who use a learning algorithm calibrated against human learning data from psychological experiments. The resulting calibrated algorithm appears to replicate human learning behavior to a high degree and reproduces several \"stylized facts\" of learning. It can, therefore, be used to replace the idealized, perfectly rational agents in appropriate neoclassical models with \"calibrated agents\" that represent actual human behavior. The paper discusses the possibilities of using the algorithm to represent human learning in normal-form stage games and in more general neoclassical models in economics. It explores the likelihood of convergence to long-run optimality and to Nash behavior, and the \"characteristic learning time\" implicit in human adaptation in the economy.",
"This paper investigates the properties of the most common form of reinforcement learning (the \"basic model\" of Erev and Roth, American Economic Review, 88, 848-881, 1998). Stochastic approximation theory has been used to analyse the local stability of fixed points under this learning process. However, as we show, when such points are on the boundary of the state space, for example, pure strategy equilibria, standard results from the theory of stochastic approximation do not apply. We offer what we believe to be the correct treatment of boundary points, and provide a new and more general result: this model of learning converges with zero probability to fixed points which are unstable under the Maynard Smith or adjusted version of the evolutionary replicator dynamics. For two player games these are the fixed points that are linearly unstable under the standard replicator dynamics.",
"The authors examine learning in all experiments they could locate involving one hundred periods or more of games with a unique equilibrium in mixed strategies, and in a new experiment. They study both the ex post ('best fit') descriptive power of learning models, and their ex ante predictive power, by simulating each experiment using parameters estimated from the other experiments. Even a one-parameter reinforcement learning model robustly outperforms the equilibrium predictions. Predictive power is improved by adding 'forgetting' and 'experimentation,' or by allowing greater rationality as in probabilistic fictitious play. Implications for developing a low-rationality, cognitive game theory are discussed. Copyright 1998 by American Economic Association.",
"This paper considers a version of Bush and Mosteller's stochastic learning theory in the context of games. We compare this model of learning to a model of biological evolution. The purpose is to investigate analogies between learning and evolution. We and that in the continuous time limit the biological model coincides with the deterministic, continuous time replicator process. We give conditions under which the same is true for the learning model. For the case that these conditions do not hold, we show that the replicator process continues to play an important role in characterising the continuous time limit of the learning model, but that a di®erent e®ect ( Matching\") enters as well.(This abstract was borrowed from another version of this item.)"
]
} |
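A minimal sketch of the family of strategy updates compared above: the learner moves its mixed strategy toward the chosen action in proportion to the received utility. The cited schemes differ mainly in the step size (a constant here; decreasing or urn-type normalized sequences in the cited works), so the exact forms in the papers may differ.

```python
import numpy as np

def cross_type_update(x, action, utility, eps):
    """One step of a Cross-type learning rule with constant step size:
    x <- x + eps * utility * (e_action - x).
    For utility in [0, 1] and eps in (0, 1], the update is a convex
    combination of x and the vertex e_action, so x stays on the simplex."""
    e = np.zeros_like(x)
    e[action] = 1.0
    return x + eps * utility * (e - x)

# e.g. x = cross_type_update(np.array([0.5, 0.5]), action=0, utility=0.8, eps=0.1)
```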
1702.08334 | 2592379963 | This paper considers a class of reinforcement-learning that belongs to the family of Learning Automata and provides a stochastic-stability analysis in strategic-form games. For this class of dynamics, convergence to pure Nash equilibria has been demonstrated only for the fine class of potential games. Prior work primarily provides convergence properties of the dynamics through stochastic approximations, where the asymptotic behavior can be associated with the limit points of an ordinary-differential equation (ODE). However, analyzing global convergence through the ODE-approximation requires the existence of a Lyapunov or a potential function, which naturally restricts the applicability of these algorithms to a fine class of games. To overcome these limitations, this paper introduces an alternative framework for analyzing stochastic-stability that is based upon an explicit characterization of the (unique) invariant probability measure of the induced Markov chain. | This issue has also been raised by @cite_19 @cite_0 . Reference @cite_19 considered the model by @cite_6 and showed that convergence to non-Nash pure strategy profiles can be excluded as long as @math for all @math and @math . On the other hand, convergence to non-Nash action profiles was not an issue with the urn model of @cite_5 (as analyzed in @cite_0 ). However, the use of an urn-process type step-size sequence significantly reduces the applicability of the reinforcement learning scheme. In conclusion, the perturbation parameter @math may serve as a design tool for reinforcing convergence to Nash equilibria without necessarily employing an urn-process type step-size sequence. For engineering applications, this is a desirable feature. | {
"cite_N": [
"@cite_0",
"@cite_19",
"@cite_5",
"@cite_6"
],
"mid": [
"2096702014",
"1989107041",
"1564229172",
"2030329396"
],
"abstract": [
"This paper investigates the properties of the most common form of reinforcement learning (the \"basic model\" of Erev and Roth, American Economic Review, 88, 848-881, 1998). Stochastic approximation theory has been used to analyse the local stability of fixed points under this learning process. However, as we show, when such points are on the boundary of the state space, for example, pure strategy equilibria, standard results from the theory of stochastic approximation do not apply. We offer what we believe to be the correct treatment of boundary points, and provide a new and more general result: this model of learning converges with zero probability to fixed points which are unstable under the Maynard Smith or adjusted version of the evolutionary replicator dynamics. For two player games these are the fixed points that are linearly unstable under the standard replicator dynamics.",
"In this paper we study a stochastic learning model for 2, 2 normal form games that are played repeatedly. The main emphasis is put on the emergence of cycles. We assume that the players have neither information about the payoff matrix of their opponent nor about their own. At every round each player can only observe his or her action and the payoff he or she receives. We prove that the learning algorithm, which is modeled by an urn scheme proposed by Arthur (1993), leads with positive probability to a cycling of strategy profiles if the game has a mixed Nash equilibrium. In case there are strict Nash equilibria, the learning process converges a.s. to the set of Nash equilibria.",
"The authors examine learning in all experiments they could locate involving one hundred periods or more of games with a unique equilibrium in mixed strategies, and in a new experiment. They study both the ex post ('best fit') descriptive power of learning models, and their ex ante predictive power, by simulating each experiment using parameters estimated from the other experiments. Even a one-parameter reinforcement learning model robustly outperforms the equilibrium predictions. Predictive power is improved by adding 'forgetting' and 'experimentation,' or by allowing greater rationality as in probabilistic fictitious play. Implications for developing a low-rationality, cognitive game theory are discussed. Copyright 1998 by American Economic Association.",
"This paper explores the idea of constructing theoretical economic agents that behave like actual human agents and using them in neoclassical economic models. It does this in a repeated-choice setting by postulating \"artificial agents\" who use a learning algorithm calibrated against human learning data from psychological experiments. The resulting calibrated algorithm appears to replicate human learning behavior to a high degree and reproduces several \"stylized facts\" of learning. It can, therefore, be used to replace the idealized, perfectly rational agents in appropriate neoclassical models with \"calibrated agents\" that represent actual human behavior. The paper discusses the possibilities of using the algorithm to represent human learning in normal-form stage games and in more general neoclassical models in economics. It explores the likelihood of convergence to long-run optimality and to Nash behavior, and the \"characteristic learning time\" implicit in human adaptation in the economy."
]
} |
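As an illustration of the role of the perturbation parameter discussed above, the sketch below samples actions from a strategy mixed with the uniform distribution, so every action keeps positive probability. The papers' perturbation may be strategy-dependent; this shows only the simplest uniform case.

```python
import numpy as np

def perturbed_action(x, lam, rng=None):
    """Sample from the lambda-perturbed strategy (1 - lam) * x + lam/n.
    The perturbation keeps every action played with positive probability,
    which is what makes lam usable as a design knob against convergence
    to non-Nash profiles."""
    rng = rng or np.random.default_rng()
    x = np.asarray(x, dtype=float)
    p = (1.0 - lam) * x + lam / len(x)
    return int(rng.choice(len(x), p=p))
```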
1702.08334 | 2592379963 | This paper considers a class of reinforcement-learning that belongs to the family of Learning Automata and provides a stochastic-stability analysis in strategic-form games. For this class of dynamics, convergence to pure Nash equilibria has been demonstrated only for the fine class of potential games. Prior work primarily provides convergence properties of the dynamics through stochastic approximations, where the asymptotic behavior can be associated with the limit points of an ordinary-differential equation (ODE). However, analyzing global convergence through the ODE-approximation requires the existence of a Lyapunov or a potential function, which naturally restricts the applicability of these algorithms to a fine class of games. To overcome these limitations, this paper introduces an alternative framework for analyzing stochastic-stability that is based upon an explicit characterization of the (unique) invariant probability measure of the induced Markov chain. | Although excluding convergence to non-Nash pure strategies can be guaranteed by using @math , establishing convergence to pure Nash equilibria may still be an issue, since it further requires excluding convergence to mixed strategy profiles. As presented in @cite_2 , this can be guaranteed only under strong conditions on the payoff matrix. For example, as shown in [Proposition 8, ChasparisShammaRantzer15], excluding convergence to mixed strategy profiles requires a) the existence of a potential function, and b) conditions on the second gradient of the potential function. Requiring the existence of a potential function considerably restricts the class of games where equilibrium selection can be described. Furthermore, condition (b) may not easily be verified in games with a large number of players or actions. | {
"cite_N": [
"@cite_2"
],
"mid": [
"2051130642"
],
"abstract": [
"For several reinforcement learning models in strategic-form games, convergence to action profiles that are not Nash equilibria may occur with positive probability under certain conditions on the payoff function. In this paper, we explore how an alternative reinforcement learning model, where the strategy of each agent is perturbed by a strategy-dependent perturbation (or mutations) function, may exclude convergence to non-Nash pure strategy profiles. This approach extends prior analysis on reinforcement learning in games that addresses the issue of convergence to saddle boundary points. It further provides a framework under which the effect of mutations can be analyzed in the context of reinforcement learning."
]
} |
1702.08334 | 2592379963 | This paper considers a class of reinforcement-learning that belongs to the family of Learning Automata and provides a stochastic-stability analysis in strategic-form games. For this class of dynamics, convergence to pure Nash equilibria has been demonstrated only for the fine class of potential games. Prior work primarily provides convergence properties of the dynamics through stochastic approximations, where the asymptotic behavior can be associated with the limit points of an ordinary-differential equation (ODE). However, analyzing global convergence through the ODE-approximation requires the existence of a Lyapunov or a potential function, which naturally restricts the applicability of these algorithms to a fine class of games. To overcome these limitations, this paper introduces an alternative framework for analyzing stochastic-stability that is based upon an explicit characterization of the (unique) invariant probability measure of the induced Markov chain. | Similar questions of convergence to Nash equilibria also appear in alternative reinforcement learning formulations, such as approximate dynamic programming methodologies and @math -learning. However, this is usually accomplished under a stronger set of assumptions, which increases the computational complexity of the dynamics. For example, the Nash-Q learning algorithm of @cite_3 addresses the problem of maximizing the discounted expected rewards for each agent by updating an approximation of the cost-to-go function (or @math -values). Alternative objectives may be used, such as the minimax criterion of @cite_9 . However, it is implicitly assumed that agents have full access to the joint action space and to the rewards received by the other agents. | {
"cite_N": [
"@cite_9",
"@cite_3"
],
"mid": [
"261814290",
"2120846115"
],
"abstract": [
"1 Preliminaries.- 1.1 Introduction.- 1.2 Measures and Functions.- 1.3 Weak Topologies.- 1.4 Convergence of Measures.- 1.5 Complements.- 1.6 Notes.- I Markov Chains and Ergodicity.- 2 Markov Chains and Ergodic Theorems.- 2.1 Introduction.- 2.2 Basic Notation and Definitions.- 2.3 Ergodic Theorems.- 2.4 The Ergodicity Property.- 2.5 Pathwise Results.- 2.6 Notes.- 3 Countable Markov Chains.- 3.1 Introduction.- 3.2 Classification of States and Class Properties.- 3.3 Limit Theorems.- 3.4 Notes.- 4 Harris Markov Chains.- 4.1 Introduction.- 4.2 Basic Definitions and Properties.- 4.3 Characterization of Harris recurrence.- 4.4 Sufficient Conditions for P.H.R.- 4.5 Harris and Doeblin Decompositions.- 4.6 Notes.- 5 Markov Chains in Metric Spaces.- 5.1 Introduction.- 5.2 The Limit in Ergodic Theorems.- 5.3 Yosida's Ergodic Decomposition.- 5.4 Pathwise Results.- 5.5 Proofs.- 5.6 Notes.- 6 Classification of Markov Chains via Occupation Measures.- 6.1 Introduction.- 6.2 A Classification.- 6.3 On the Birkhoff Individual Ergodic Theorem.- 6.4 Notes.- II Further Ergodicity Properties.- 7 Feller Markov Chains.- 7.1 Introduction.- 7.2 Weak-and Strong-Feller Markov Chains.- 7.3 Quasi Feller Chains.- 7.4 Notes.- 8 The Poisson Equation.- 8.1 Introduction.- 8.2 The Poisson Equation.- 8.3 Canonical Pairs.- 8.4 The Cesaro-Averages Approach.- 8.5 The Abelian Approach.- 8.6 Notes.- 9 Strong and Uniform Ergodicity.- 9.1 Introduction.- 9.2 Strong and Uniform Ergodicity.- 9.3 Weak and Weak Uniform Ergodicity.- 9.4 Notes.- III Existence and Approximation of Invariant Probability Measures.- 10 Existence of Invariant Probability Measures.- 10.1 Introduction and Statement of the Problems.- 10.2 Notation and Definitions.- 10.3 Existence Results.- 10.4 Markov Chains in Locally Compact Separable Metric Spaces.- 10.5 Other Existence Results in Locally Compact Separable Metric Spaces.- 10.6 Technical Preliminaries.- 10.7 Proofs.- 10.8 Notes.- 11 Existence and Uniqueness of Fixed Points for Markov Operators.- 11.1 Introduction and Statement of the Problems.- 11.2 Notation and Definitions.- 11.3 Existence Results.- 11.4 Proofs.- 11.5 Notes.- 12 Approximation Procedures for Invariant Probability Measures.- 12.1 Introduction.- 12.2 Statement of the Problem and Preliminaries.- 12.3 An Approximation Scheme.- 12.4 A Moment Approach for a Special Class of Markov Chains.- 12.5 Notes.",
"We extend Q-learning to a noncooperative multiagent context, using the framework of general-sum stochastic games. A learning agent maintains Q-functions over joint actions, and performs updates based on assuming Nash equilibrium behavior over the current Q-values. This learning protocol provably converges given certain restrictions on the stage games (defined by Q-values) that arise during learning. Experiments with a pair of two-player grid games suggest that such restrictions on the game structure are not necessarily required. Stage games encountered during learning in both grid environments violate the conditions. However, learning consistently converges in the first grid game, which has a unique equilibrium Q-function, but sometimes fails to converge in the second, which has three different equilibrium Q-functions. In a comparison of offline learning performance in both games, we find agents are more likely to reach a joint optimal path with Nash Q-learning than with a single-agent Q-learning method. When at least one agent adopts Nash Q-learning, the performance of both agents is better than using single-agent Q-learning. We have also implemented an online version of Nash Q-learning that balances exploration with exploitation, yielding improved performance."
]
} |
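A rough sketch of the update shape shared by Nash-Q and minimax-Q, as described above: the bootstrap value at the next state is computed from the stage game defined by the joint-action Q-values, which is precisely why full access to the joint action space is needed. The `pure_maximin` stand-in below is a deliberate simplification (the true minimax value optimizes over mixed strategies via a linear program, and Nash-Q solves for a stage-game Nash equilibrium instead).

```python
import numpy as np

def pure_maximin(payoff):
    """Crude stand-in for the stage-game value: maximin over PURE
    strategies. Minimax-Q uses mixed strategies (a linear program);
    Nash-Q solves for a stage-game Nash equilibrium instead."""
    return float(np.max(np.min(payoff, axis=1)))

def nash_q_update(Q, s, joint_a, r, s_next, stage_value=pure_maximin,
                  alpha=0.1, gamma=0.95):
    """Q(s, a) <- (1 - alpha) Q(s, a) + alpha (r + gamma V(s')), where
    V(s') is computed from the table of JOINT-action Q-values at s' --
    hence the need for full access to the joint action space.
    Here Q maps each state to a 2-D array indexed by joint actions."""
    target = r + gamma * stage_value(Q[s_next])
    Q[s][joint_a] = (1.0 - alpha) * Q[s][joint_a] + alpha * target
```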
1702.08334 | 2592379963 | This paper considers a class of reinforcement-learning that belongs to the family of Learning Automata and provides a stochastic-stability analysis in strategic-form games. For this class of dynamics, convergence to pure Nash equilibria has been demonstrated only for the fine class of potential games. Prior work primarily provides convergence properties of the dynamics through stochastic approximations, where the asymptotic behavior can be associated with the limit points of an ordinary-differential equation (ODE). However, analyzing global convergence through the ODE-approximation requires the existence of a Lyapunov or a potential function, which naturally restricts the applicability of these algorithms to a fine class of games. To overcome these limitations, this paper introduces an alternative framework for analyzing stochastic-stability that is based upon an explicit characterization of the (unique) invariant probability measure of the induced Markov chain. | When the evaluation of the @math -values is totally independent, as in the individual @math -learning of @cite_17 , convergence to Nash equilibria has been shown only for 2-player zero-sum games and 2-player partnership games with countably many Nash equilibria. Currently, there are no convergence results for multi-player games. | {
"cite_N": [
"@cite_17"
],
"mid": [
"1967250398"
],
"abstract": [
"The single-agent multi-armed bandit problem can be solved by an agent that learns the values of each action using reinforcement learning. However, the multi-agent version of the problem, the iterated normal form game, presents a more complex challenge, since the rewards available to each agent depend on the strategies of the others. We consider the behavior of value-based learning agents in this situation, and show that such agents cannot generally play at a Nash equilibrium, although if smooth best responses are used, a Nash distribution can be reached. We introduce a particular value-based learning algorithm, which we call individual Q-learning, and use stochastic approximation to study the asymptotic behavior, showing that strategies will converge to Nash distribution almost surely in 2-player zero-sum games and 2-player partnership games. Player-dependent learning rates are then considered, and it is shown that this extension converges in some games for which many algorithms, including the basic algorithm initially considered, fail to converge."
]
} |
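A minimal sketch of individual Q-learning with a smooth best response, matching the description above: each agent keeps Q-values over its own actions only and plays a Boltzmann (softmax) strategy. Parameter names and values are illustrative.

```python
import numpy as np

def smooth_best_response(q, tau=0.1):
    """Boltzmann (softmax) strategy over an agent's own Q-values."""
    z = (q - np.max(q)) / tau          # shift for numerical stability
    w = np.exp(z)
    return w / w.sum()

def individual_q_step(q, action, reward, alpha=0.05):
    """Each agent updates only its own chosen action's value from its
    own reward -- no joint actions or opponents' rewards are observed."""
    q[action] += alpha * (reward - q[action])
    return q
```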
1702.08388 | 2781332141 | Social media and data mining are increasingly being used to analyse political and societal issues. Here we undertake the classification of social media users as supporting or opposing ongoing independence movements in their territories. Independence movements occur in territories whose citizens have conflicting national identities; users with opposing national identities will then support or oppose the sense of being part of an independent nation that differs from the officially recognised country. We describe a methodology that relies on users’ self-reported location to build large-scale datasets for three territories – Catalonia, the Basque Country and Scotland. An analysis of these datasets shows that homophily plays an important role in determining who people connect with, as users predominantly choose to follow and interact with others from the same national identity. We show that a classifier relying on users’ follow networks can achieve accurate, language-independent classification performances ranging from 85% to 97% for the three territories. | Previous research has suggested that political homophily is also reflected in social media @cite_2 @cite_12 , that is, that supporters of one political party are more likely to follow one another than to follow supporters of other parties. Whether this generalises to users with different national identities has not been explored before. | {
"cite_N": [
"@cite_12",
"@cite_2"
],
"mid": [
"2161834943",
"2021715735"
],
"abstract": [
"Parties, candidates, and voters are becoming increasingly engaged in political conversations through the micro-blogging platform Twitter. In this paper I show that the structure of the social networks in which they are embedded has the potential to become a source of information about policy positions. Under the assumption that social networks are homophilic (, 2001), this is, the propensity of users to cluster along partisan lines, I develop a Bayesian Spatial Following model that scales Twitter users along a common ideological dimension based on who they follow. I apply this network-based method to estimate ideal points for Twitter users in the US, the UK, Spain, Italy, and the Netherlands. The resulting positions of the party accounts on Twitter are highly correlated with oine measures based on their voting records and their manifestos. Similarly, this method is able to successfully classify individuals who state their political orientation publicly, and a sample of users from the state of Ohio whose Twitter accounts are matched with their voter registration history. To illustrate the potential contribution of these estimates, I examine the extent to which online behavior is polarized along ideological lines. Using the 2012 US presidential election campaign as a case study, I nd that public exchanges on Twitter take place predominantly among users with similar viewpoints.",
"This study integrates network and content analyses to examine exposure to cross-ideological political views on Twitter. We mapped the Twitter networks of 10 controversial political topics, discovered clusters - subgroups of highly self-connected users - and coded messages and links in them for political orientation. We found that Twitter users are unlikely to be exposed to cross-ideological content from the clusters of users they followed, as these were usually politically homogeneous. Links pointed at grassroots web pages e.g.: blogs more frequently than traditional media websites. Liberal messages, however, were more likely to link to traditional media. Last, we found that more specific topics of controversy had both conservative and liberal clusters, while in broader topics, dominant clusters reflected conservative sentiment."
]
} |
1702.08259 | 2953137443 | Ensembling multiple predictions is a widely used technique for improving the accuracy of various machine learning tasks. One obvious drawback of ensembling is its higher execution cost during inference. In this paper, we first describe our insights on the relationship between the probability of prediction and the effect of ensembling with current deep neural networks; ensembling does not help mispredictions for inputs predicted with a high probability even when there is a non-negligible number of mispredicted inputs. This finding motivated us to develop a way to adaptively control the ensembling. If the prediction for an input reaches a high enough probability, i.e., the output from the softmax function, on the basis of the confidence level, we stop ensembling for this input to avoid wasting computation power. We evaluated the adaptive ensembling by using various datasets and showed that it reduces the computation cost significantly while achieving accuracy similar to that of static ensembling using a pre-defined number of local predictions. We also show that our statistically rigorous confidence-level-based early-exit condition reduces the burden of task-dependent threshold tuning compared with a naive early exit based on a pre-defined threshold, in addition to yielding better accuracy at the same cost. | @cite_11 made an important observation related to ours: a large part of the gain from ensembling comes from the first few local predictions. Our observation discussed in the previous section extends Opitz's observation from a different perspective: most of the gain from ensembling comes from inputs predicted with low probabilities. | {
"cite_N": [
"@cite_11"
],
"mid": [
"2100805904"
],
"abstract": [
"An ensemble consists of a set of individually trained classifiers (such as neural networks or decision trees) whose predictions are combined when classifying novel instances. Previous research has shown that an ensemble is often more accurate than any of the single classifiers in the ensemble. Bagging (Breiman, 1996c) and Boosting (Freund & Schapire, 1996; Schapire, 1990) are two relatively new but popular methods for producing ensembles. In this paper we evaluate these methods on 23 data sets using both neural networks and decision trees as our classification algorithm. Our results clearly indicate a number of conclusions. First, while Bagging is almost always more accurate than a single classifier, it is sometimes much less accurate than Boosting. On the other hand, Boosting can create ensembles that are less accurate than a single classifier - especially when using neural networks. Analysis indicates that the performance of the Boosting methods is dependent on the characteristics of the data set being examined. In fact, further results show that Boosting ensembles may overfit noisy data sets, thus decreasing its performance. Finally, consistent with previous studies, our work suggests that most of the gain in an ensemble's performance comes in the first few classifiers combined; however, relatively large gains can be seen up to 25 classifiers when Boosting decision trees."
]
} |
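A minimal sketch of the adaptive early-exit idea described above: average softmax outputs one local prediction at a time and stop once a confidence bound on the top class clears a threshold. The normal-approximation bound here is illustrative; the paper's exact statistical test may differ.

```python
import numpy as np

def adaptive_ensemble(models, x, threshold=0.9, z=1.96):
    """Average softmax outputs one local prediction at a time; exit
    early once the lower confidence bound of the top class's mean
    probability clears `threshold` (assumes `models` is non-empty and
    each callable returns a softmax probability vector)."""
    probs = []
    for predict in models:
        probs.append(predict(x))
        mean = np.mean(probs, axis=0)
        top = int(np.argmax(mean))
        if len(probs) >= 2:
            se = np.std([p[top] for p in probs], ddof=1) / np.sqrt(len(probs))
            if mean[top] - z * se >= threshold:
                break                      # confident enough: stop ensembling
    return mean
```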
1702.08074 | 2594529503 | We consider the task of learning control policies for a robotic mechanism striking a puck in an air hockey game. The control signal is a direct command to the robot's motors. We employ a model-free deep reinforcement learning framework to learn the motoric skills of striking the puck accurately in order to score. We propose certain improvements to the standard learning scheme which make the deep Q-learning algorithm feasible when it might otherwise fail. Our improvements include integrating prior knowledge into the learning scheme, and accounting for the changing distribution of samples in the experience replay buffer. Finally, we present our simulation results for aimed striking which demonstrate the successful learning of this task, and the improvement in algorithm stability due to the proposed modifications. | Since the groundbreaking results shown by Deep Q-Learning for learning to play games on the Atari 2600 arcade environment, there has been extensive research on deep reinforcement learning. Deep Q-learning in particular seeks to approximate the Q-values @cite_1 using deep networks, such as deep convolutional neural networks. There has also been work on better target estimation @cite_20 , on prioritizing the experience replay buffer to maximize learning @cite_4 and on performing better gradient updates with parallel batch approaches @cite_16 @cite_8 . Some work on adaptation to the continuous control domain has also been done by @cite_7 . Policy gradient methods were traditionally used @cite_26 @cite_10 @cite_17 , but struggled as the number of parameters increased. Adaptation to the deep neural network framework has also been done in recent years @cite_23 @cite_2 . Several benchmarks such as @cite_21 have made comparisons between continuous control algorithms. In this paper we focus on the online DQN-based approach, and extend it to the domain of continuous-state optimal control for striking in air hockey. | {
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_7",
"@cite_8",
"@cite_21",
"@cite_1",
"@cite_23",
"@cite_2",
"@cite_16",
"@cite_10",
"@cite_20",
"@cite_17"
],
"mid": [
"2119717200",
"2201581102",
"2173248099",
"",
"2342662072",
"",
"2949608212",
"",
"2260756217",
"",
"2952523895",
""
],
"abstract": [
"This article presents a general class of associative reinforcement learning algorithms for connectionist networks containing stochastic units. These algorithms, called REINFORCE algorithms, are shown to make weight adjustments in a direction that lies along the gradient of expected reinforcement in both immediate-reinforcement tasks and certain limited forms of delayed-reinforcement tasks, and they do this without explicitly computing gradient estimates or even storing information from which such estimates could be computed. Specific examples of such algorithms are presented, some of which bear a close relationship to certain existing algorithms while others are novel but potentially interesting in their own right. Also given are results that show how such algorithms can be naturally integrated with backpropagation. We close with a brief discussion of a number of additional issues surrounding the use of such algorithms, including what is known about their limiting behaviors as well as further considerations that might be used to help develop similar but potentially more powerful reinforcement learning algorithms.",
"Experience replay lets online reinforcement learning agents remember and reuse experiences from the past. In prior work, experience transitions were uniformly sampled from a replay memory. However, this approach simply replays transitions at the same frequency that they were originally experienced, regardless of their significance. In this paper we develop a framework for prioritizing experience, so as to replay important transitions more frequently, and therefore learn more efficiently. We use prioritized experience replay in Deep Q-Networks (DQN), a reinforcement learning algorithm that achieved human-level performance across many Atari games. DQN with prioritized experience replay achieves a new state-of-the-art, outperforming DQN with uniform replay on 41 out of 49 games.",
"We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.",
"",
"Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at this https URL in order to facilitate experimental reproducibility and to encourage adoption by other researchers.",
"",
"We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.",
"",
"We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.",
"",
"The popular Q-learning algorithm is known to overestimate action values under certain conditions. It was not previously known whether, in practice, such overestimations are common, whether they harm performance, and whether they can generally be prevented. In this paper, we answer all these questions affirmatively. In particular, we first show that the recent DQN algorithm, which combines Q-learning with a deep neural network, suffers from substantial overestimations in some games in the Atari 2600 domain. We then show that the idea behind the Double Q-learning algorithm, which was introduced in a tabular setting, can be generalized to work with large-scale function approximation. We propose a specific adaptation to the DQN algorithm and show that the resulting algorithm not only reduces the observed overestimations, as hypothesized, but that this also leads to much better performance on several games.",
""
]
} |
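For reference, the target computations behind the "better target estimation" point above: standard DQN bootstraps from the target network's maximum, while double DQN selects the action with the online network and evaluates it with the target network to reduce overestimation. A PyTorch sketch, with tensor shapes assumed as commented:

```python
import torch

def dqn_targets(q_net, target_net, rewards, next_states, dones, gamma=0.99):
    """rewards, dones: float tensors of shape (B,); next_states: (B, obs_dim).
    Returns standard DQN and double-DQN bootstrap targets."""
    with torch.no_grad():
        q_next = target_net(next_states)                       # (B, n_actions)
        # Standard DQN: max over the target network's values.
        dqn = rewards + gamma * (1 - dones) * q_next.max(dim=1).values
        # Double DQN: select with the online net, evaluate with the target net.
        best = q_net(next_states).argmax(dim=1, keepdim=True)  # (B, 1)
        ddqn = rewards + gamma * (1 - dones) * q_next.gather(1, best).squeeze(1)
    return dqn, ddqn
```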
1702.08139 | 2594538354 | Recent work on generative modeling of text has found that variational auto-encoders (VAE) incorporating LSTM decoders perform worse than simpler LSTM language models (Bowman et al., 2015). This negative result is so far poorly understood, but has been attributed to the propensity of LSTM decoders to ignore conditioning information from the encoder. In this paper, we experiment with a new type of decoder for VAE: a dilated CNN. By changing the decoder's dilation architecture, we control the effective context from previously generated words. In experiments, we find that there is a trade-off between the contextual capacity of the decoder and the amount of encoding information used. We show that with the right decoder, VAE can outperform LSTM language models. We demonstrate perplexity gains on two datasets, representing the first positive experimental result on the use of VAE for generative modeling of text. Further, we conduct an in-depth investigation of the use of VAE (with our new decoding architecture) for semi-supervised and unsupervised labeling tasks, demonstrating gains over several strong baselines. | Our work is in line with previous works on combining variational inference with text modeling @cite_31 @cite_22 @cite_2 @cite_35 @cite_38 . @cite_31 is the first work to combine VAE with a language model; it uses an LSTM as the decoder and reports some negative results. On the other hand, @cite_22 models text as a bag of words; though improvements have been found, the model cannot be used to generate text. Our work fills the gap between them. @cite_2 @cite_20 apply variational inference to dialogue modeling and machine translation and find some improvement in terms of generated text quality, but no language modeling results are reported. @cite_14 @cite_12 @cite_7 embed variational units in every step of an RNN, which is different from our model's use of global latent variables to learn high-level features. | {
"cite_N": [
"@cite_35",
"@cite_38",
"@cite_14",
"@cite_22",
"@cite_7",
"@cite_2",
"@cite_31",
"@cite_12",
"@cite_20"
],
"mid": [
"",
"",
"2157006255",
"2173681125",
"2396566817",
"2399880602",
"2210838531",
"1884859883",
""
],
"abstract": [
"",
"",
"We describe a new model for learning meaningful representations of text documents from an unlabeled collection of documents. This model is inspired by the recently proposed Replicated Softmax, an undirected graphical model of word counts that was shown to learn a better generative model and more meaningful document representations. Specifically, we take inspiration from the conditional mean-field recursive equations of the Replicated Softmax in order to define a neural network architecture that estimates the probability of observing a new word in a given document given the previously observed words. This paradigm also allows us to replace the expensive softmax distribution over words with a hierarchical distribution over paths in a binary tree of words. The end result is a model whose training complexity scales logarithmically with the vocabulary size instead of linearly as in the Replicated Softmax. Our experiments show that our model is competitive both as a generative model of documents and as a document representation learning algorithm.",
"Recent advances in neural variational inference have spawned a renaissance in deep latent variable models. In this paper we introduce a generic variational inference framework for generative and conditional models of text. While traditional variational methods derive an analytic approximation for the intractable distributions over latent variables, here we construct an inference network conditioned on the discrete text input to provide the variational distribution. We validate this framework on two very different text modelling applications, generative document modelling and supervised question answering. Our neural variational document model combines a continuous stochastic document representation with a bag-of-words generative model and achieves the lowest reported perplexities on two standard test corpora. The neural answer selection model employs a stochastic representation layer within an attention mechanism to extract the semantics between a question and answer pair. On two question answering benchmarks this model exceeds all previous published benchmarks.",
"How can we efficiently propagate uncertainty in a latent state representation with recurrent neural networks? This paper introduces stochastic recurrent neural networks which glue a deterministic recurrent neural network and a state space model together to form a stochastic and sequential neural generative model. The clear separation of deterministic and stochastic layers allows a structured variational inference network to track the factorization of the model's posterior distribution. By retaining both the nonlinear recursive structure of a recurrent neural network and averaging over the uncertainty in a latent path, like a state space model, we improve the state of the art results on the Blizzard and TIMIT speech modeling data sets by a large margin, while achieving comparable performances to competing methods on polyphonic music modeling.",
"Sequential data often possesses a hierarchical structure with complex dependencies between subsequences, such as found between the utterances in a dialogue. In an effort to model this kind of generative process, we propose a neural network-based generative architecture, with latent stochastic variables that span a variable number of time steps. We apply the proposed model to the task of dialogue response generation and compare it with recent neural network architectures. We evaluate the model performance through automatic evaluation metrics and by carrying out a human evaluation. The experiments demonstrate that our model improves upon recently proposed models and that the latent variables facilitate the generation of long outputs and maintain the context.",
"The standard recurrent neural network language model (RNNLM) generates sentences one word at a time and does not work from an explicit global sentence representation. In this work, we introduce and study an RNN-based variational autoencoder generative model that incorporates distributed latent representations of entire sentences. This factorization allows it to explicitly model holistic properties of sentences such as style, topic, and high-level syntactic features. Samples from the prior over these sentence representations remarkably produce diverse and well-formed sentences through simple deterministic decoding. By examining paths through this latent space, we are able to generate coherent novel sentences that interpolate between known sentences. We present techniques for solving the difficult learning problem presented by this model, demonstrate its effectiveness in imputing missing words, explore many interesting properties of the model's latent sentence space, and present negative results on the use of the model in language modeling.",
"Leveraging advances in variational inference, we propose to enhance recurrent neural networks with latent variables, resulting in Stochastic Recurrent Networks (STORNs). The model i) can be trained with stochastic gradient methods, ii) allows structured and multi-modal conditionals at each time step, iii) features a reliable estimator of the marginal likelihood and iv) is a generalisation of deterministic recurrent neural networks. We evaluate the method on four polyphonic musical data sets and motion capture data.",
""
]
} |
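For concreteness, the objective being discussed above: the VAE evidence lower bound with a Gaussian posterior and standard normal prior, in a short PyTorch sketch. The comment notes the collapse failure mode that motivates limiting the decoder's context.

```python
import torch

def reparameterize(mu, logvar):
    """z = mu + sigma * eps, with eps ~ N(0, I)."""
    return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

def neg_elbo(recon_nll, mu, logvar):
    """-ELBO = reconstruction NLL + KL(q(z|x) || N(0, I)), per example.
    A decoder with unlimited context can drive the KL term to zero by
    ignoring z (the collapse noted above); shrinking the decoder's
    effective context forces information back through z."""
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1)
    return recon_nll + kl
```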
1702.08139 | 2594538354 | Recent work on generative modeling of text has found that variational auto-encoders (VAE) incorporating LSTM decoders perform worse than simpler LSTM language models (Bowman et al., 2015). This negative result is so far poorly understood, but has been attributed to the propensity of LSTM decoders to ignore conditioning information from the encoder. In this paper, we experiment with a new type of decoder for VAE: a dilated CNN. By changing the decoder's dilation architecture, we control the effective context from previously generated words. In experiments, we find that there is a trade-off between the contextual capacity of the decoder and the amount of encoding information used. We show that with the right decoder, VAE can outperform LSTM language models. We demonstrate perplexity gains on two datasets, representing the first positive experimental result on the use of VAE for generative modeling of text. Further, we conduct an in-depth investigation of the use of VAE (with our new decoding architecture) for semi-supervised and unsupervised labeling tasks, demonstrating gains over several strong baselines. | Our use of a CNN as the decoder is inspired by the recent success of the PixelCNN model for images @cite_32 , WaveNet for audio @cite_3 , the Video Pixel Network for video modeling @cite_29 and ByteNet for machine translation @cite_4 . But in contrast to those works, which show that a very deep architecture leads to better performance, the CNN decoder in our model is used to control the contextual capacity, leading to better performance. | {
"cite_N": [
"@cite_29",
"@cite_4",
"@cite_32",
"@cite_3"
],
"mid": [
"2529769424",
"2540404261",
"2423557781",
"2519091744"
],
"abstract": [
"We propose a probabilistic video model, the Video Pixel Network (VPN), that estimates the discrete joint distribution of the raw pixel values in a video. The model and the neural architecture reflect the time, space and color structure of video tensors and encode it as a four-dimensional dependency chain. The VPN approaches the best possible performance on the Moving MNIST benchmark, a leap over the previous state of the art, and the generated videos show only minor deviations from the ground truth. The VPN also produces detailed samples on the action-conditional Robotic Pushing benchmark and generalizes to the motion of novel objects.",
"We present a novel neural network for processing sequences. The ByteNet is a one-dimensional convolutional neural network that is composed of two parts, one to encode the source sequence and the other to decode the target sequence. The two network parts are connected by stacking the decoder on top of the encoder and preserving the temporal resolution of the sequences. To address the differing lengths of the source and the target, we introduce an efficient mechanism by which the decoder is dynamically unfolded over the representation of the encoder. The ByteNet uses dilation in the convolutional layers to increase its receptive field. The resulting network has two core properties: it runs in time that is linear in the length of the sequences and it sidesteps the need for excessive memorization. The ByteNet decoder attains state-of-the-art performance on character-level language modelling and outperforms the previous best results obtained with recurrent networks. The ByteNet also achieves state-of-the-art performance on character-to-character machine translation on the English-to-German WMT translation task, surpassing comparable neural translation models that are based on recurrent networks with attentional pooling and run in quadratic time. We find that the latent alignment structure contained in the representations reflects the expected alignment between the tokens.",
"This work explores conditional image generation with a new image density model based on the PixelCNN architecture. The model can be conditioned on any vector, including descriptive labels or tags, or latent embeddings created by other networks. When conditioned on class labels from the ImageNet database, the model is able to generate diverse, realistic scenes representing distinct animals, objects, landscapes and structures. When conditioned on an embedding produced by a convolutional network given a single image of an unseen face, it generates a variety of new portraits of the same person with different facial expressions, poses and lighting conditions. We also show that conditional PixelCNN can serve as a powerful decoder in an image autoencoder. Additionally, the gated convolutional layers in the proposed model improve the log-likelihood of PixelCNN to match the state-of-the-art performance of PixelRNN on ImageNet, with greatly reduced computational cost.",
""
]
} |
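The record above turns on one quantity: the effective context that a stack of dilated convolutions gives the decoder. The following minimal sketch makes that calculation concrete; it is illustrative only, and the kernel size and dilation schedules are assumptions, not values taken from the paper:

```python
def effective_context(kernel_size: int, dilations: list) -> int:
    """Previously generated words visible to a stack of causal dilated
    convolutions: each layer widens the view by (kernel_size - 1) * dilation."""
    return sum((kernel_size - 1) * d for d in dilations)

# A shallow decoder sees only a few previous words, so it must lean on the
# VAE's latent code to reconstruct the sentence:
print(effective_context(2, [1, 2, 4, 8]))                   # 15
# Doubling the depth grows the context exponentially, approaching the
# unbounded view of an LSTM decoder that can afford to ignore the encoder:
print(effective_context(2, [1, 2, 4, 8, 16, 32, 64, 128]))  # 255
```

Varying the dilation schedule in this way is one plausible reading of how the paper trades contextual capacity against use of the encoding.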
1702.08272 | 2950305861 | We present a new public dataset with a focus on simulating robotic vision tasks in everyday indoor environments using real imagery. The dataset includes 20,000+ RGB-D images and 50,000+ 2D bounding boxes of object instances densely captured in 9 unique scenes. We train a fast object category detector for instance detection on our data. Using the dataset we show that, although increasingly accurate and fast, the state of the art for object detection is still severely impacted by object scale, occlusion, and viewing direction, all of which matter for robotics applications. We next validate the dataset for simulating active vision, and use the dataset to develop and evaluate a deep-network-based system for next best move prediction for object classification using reinforcement learning. Our dataset is available for download at cs.unc.edu/~ammirato/active_vision_dataset_website. | The datasets that have been a driving force in pushing the deep learning revolution in object recognition, Pascal VOC @cite_5, the ImageNet Challenge @cite_31, and MS COCO @cite_28, are all collected from web images (usually from Flickr) using web search based on keywords. These image collections introduce biases from the human photographer, the human tagging, and the web search engine. As a result, objects are usually of medium to large size in images and are usually seen in frontal views with small amounts of occlusion. In addition, these datasets focus on object category recognition. The state of the art for object classification and recognition in these datasets is based either on object proposals and feature pooling following @cite_19 with advanced deep networks @cite_26 @cite_38 or on fully convolutional networks implementing a modern take on sliding windows @cite_33 @cite_3 @cite_17 that provide frame-rate or faster performance on high-end hardware at some reduction in accuracy. | {
"cite_N": [
"@cite_38",
"@cite_26",
"@cite_33",
"@cite_28",
"@cite_3",
"@cite_19",
"@cite_5",
"@cite_31",
"@cite_17"
],
"mid": [
"2949650786",
"2102605133",
"2193145675",
"2952122856",
"1483870316",
"2093725709",
"",
"2117539524",
"2523096747"
],
"abstract": [
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.",
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.",
"We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For (300 300 ) input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for (512 512 ) input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https: github.com weiliu89 caffe tree ssd.",
"We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 objects types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.",
"We present YOLO, a new approach to object detection. Prior work on object detection repurposes classifiers to perform detection. Instead, we frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance. Our unified architecture is extremely fast. Our base YOLO model processes images in real-time at 45 frames per second. A smaller version of the network, Fast YOLO, processes an astounding 155 frames per second while still achieving double the mAP of other real-time detectors. Compared to state-of-the-art detection systems, YOLO makes more localization errors but is far less likely to predict false detections where nothing exists. Finally, YOLO learns very general representations of objects. It outperforms all other detection methods, including DPM and R-CNN, by a wide margin when generalizing from natural images to artwork on both the Picasso Dataset and the People-Art Dataset.",
"One of the central problems in computer vision is the detection of semantically important objects and the estimation of their pose. Most of the work in object detection has been based on single image processing and its performance is limited by occlusions and ambiguity in appearance and geometry. This paper proposes an active approach to object detection by controlling the point of view of a mobile depth camera. When an initial static detection phase identifies an object of interest, several hypotheses are made about its class and orientation. The sensor then plans a sequence of viewpoints, which balances the amount of energy used to move with the chance of identifying the correct hypothesis. We formulate an active M-ary hypothesis testing problem, which includes sensor mobility, and solve it using a point-based approximate POMDP algorithm. The validity of our approach is verified through simulation and experiments with real scenes captured by a kinect sensor. The results suggest a significant improvement over static object detection.",
"",
"The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements.",
"For applications in navigation and robotics, estimating the 3D pose of objects is as important as detection. Many approaches to pose estimation rely on detecting or tracking parts or keypoints [11, 21]. In this paper we build on a recent state-of-the-art convolutional network for slidingwindow detection [10] to provide detection and rough pose estimation in a single shot, without intermediate stages of detecting parts or initial bounding boxes. While not the first system to treat pose estimation as a categorization problem, this is the first attempt to combine detection and pose estimation at the same level using a deep learning approach. The key to the architecture is a deep convolutional network where scores for the presence of an object category, the offset for its location, and the approximate pose are all estimated on a regular grid of locations in the image. The resulting system is as accurate as recent work on pose estimation (42.4 8 View mAVP on Pascal 3D+ [21] ) and significantly faster (46 frames per second (FPS) on a TITAN X GPU). This approach to detection and rough pose estimation is fast and accurate enough to be widely applied as a pre-processing step for tasks including high-accuracy pose estimation, object tracking and localization, and vSLAM."
]
} |
1702.08272 | 2950305861 | We present a new public dataset with a focus on simulating robotic vision tasks in everyday indoor environments using real imagery. The dataset includes 20,000+ RGB-D images and 50,000+ 2D bounding boxes of object instances densely captured in 9 unique scenes. We train a fast object category detector for instance detection on our data. Using the dataset we show that, although increasingly accurate and fast, the state of the art for object detection is still severely impacted by object scale, occlusion, and viewing direction, all of which matter for robotics applications. We next validate the dataset for simulating active vision, and use the dataset to develop and evaluate a deep-network-based system for next best move prediction for object classification using reinforcement learning. Our dataset is available for download at cs.unc.edu/~ammirato/active_vision_dataset_website. | Instance recognition (as opposed to object category recognition) has generally been approached using local features or template matching techniques. A recent relevant example using these types of models is @cite_9, which trains on objects in a room and is tested on the same objects in the room after rearrangement. In our experiments we are interested in generalization to new environments in order to avoid training in each new room. More recently, @cite_25 shows how deep learning for comparing instances can be applied to instance classification and can outperform classic matching methods. For our data, we are also interested in instance detection, including localization in a large image. We use the system from @cite_33 to build a much faster detector for object instances than would be possible with explicit matching. | {
"cite_N": [
"@cite_9",
"@cite_25",
"@cite_33"
],
"mid": [
"826954055",
"2410613348",
"2193145675"
],
"abstract": [
"While general object recognition is still far from being solved, this paper proposes a way for a robot to recognize every object at an almost human-level accuracy. Our key observation is that many robots will stay in a relatively closed environment (e.g. a house or an office). By constraining a robot to stay in a limited territory, we can ensure that the robot has seen most objects before and the speed of introducing a new object is slow. Furthermore, we can build a 3D map of the environment to reliably subtract the background to make recognition easier. We propose extremely robust algorithms to obtain a 3D map and enable humans to collectively annotate objects. During testing time, our algorithm can recognize all objects very reliably, and query humans from crowd sourcing platform if confidence is low or new objects are identified. This paper explains design decisions in building such a system, and constructs a benchmark for extensive evaluation. Experiments suggest that making robot vision appear to be working from an end user's perspective is a reachable goal today, as long as the robot stays in a closed environment. By formulating this task, we hope to lay the foundation of a new direction in vision for robotics. Code and data will be available upon acceptance.",
"Some robots must repeatedly interact with a fixed set of objects in their environment. To operate correctly, it is helpful for the robot to be able to recognize the object instances that it repeatedly encounters. However, current methods for recognizing object instances require that, during training, many pictures are taken of each object from a large number of viewing angles. This procedure is slow and requires much manual effort before the robot can begin to operate in a new environment. We have developed a novel procedure for training a neural network to recognize a set of objects from just a single training image per object. To obtain robustness to changes in viewpoint, we take advantage of a supplementary dataset in which we observe a separate (non-overlapping) set of objects from multiple viewpoints. After pre-training the network in a novel multi-stage fashion, the network can robustly recognize new object instances given just a single training image of each object. If more images of each object are available, the performance improves. We perform a thorough analysis comparing our novel training procedure to traditional neural network pre-training techniques as well as previous state-of-the-art approaches including keypoint-matching, template-matching, and sparse coding, and we demonstrate that our method significantly outperforms these previous approaches. Our method can thus be used to easily teach a robot to recognize a novel set of object instances from unknown viewpoints.",
"We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For (300 300 ) input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for (512 512 ) input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https: github.com weiliu89 caffe tree ssd."
]
} |
1702.07826 | 2594164664 | We introduce AI rationalization, an approach for generating explanations of autonomous system behavior as if a human had performed the behavior. We describe a rationalization technique that uses neural machine translation to translate internal state-action representations of an autonomous agent into natural language. We evaluate our technique in the Frogger game environment, training an autonomous game playing agent to rationalize its action choices using natural language. A natural language training corpus is collected from human players thinking out loud as they play the game. We motivate the use of rationalization as an approach to explanation generation and show the results of two experiments evaluating the effectiveness of rationalization. Results of these evaluations show that neural machine translation is able to accurately generate rationalizations that describe agent behavior, and that rationalizations are more satisfying to humans than other alternative methods of explanation. | An alternate approach to creating interpretable machine learning models involves building separate models of explainability, often on top of black-box techniques such as neural networks. These approaches @cite_3 @cite_5 @cite_9 allow greater flexibility in model selection, since they enable black-box models to become interpretable. Other approaches seek to learn a naturally interpretable model that describes the predictions that were made @cite_1, or intelligently modify model inputs so that the resulting models can describe how outputs are affected @cite_3 (see the sketch after this record). | {
"cite_N": [
"@cite_5",
"@cite_9",
"@cite_1",
"@cite_3"
],
"mid": [
"2952186574",
"1825675169",
"2394669110",
"2282821441"
],
"abstract": [
"Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we address both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. We also perform an ablation study to discover the performance contribution from different model layers. This enables us to find model architectures that outperform Krizhevsky al on the ImageNet classification benchmark. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets.",
"Recent years have produced great advances in training large, deep neural networks (DNNs), including notable successes in training convolutional neural networks (convnets) to recognize natural images. However, our understanding of how these models work, especially what computations they perform at intermediate layers, has lagged behind. Progress in the field will be further accelerated by the development of better tools for visualizing and interpreting neural nets. We introduce two such tools here. The first is a tool that visualizes the activations produced on each layer of a trained convnet as it processes an image or video (e.g. a live webcam stream). We have found that looking at live activations that change in response to user input helps build valuable intuitions about how convnets work. The second tool enables visualizing features at each layer of a DNN via regularized optimization in image space. Because previous versions of this idea produced less recognizable images, here we introduce several new regularization methods that combine to produce qualitatively clearer, more interpretable visualizations. Both tools are open source and work on a pre-trained convnet with minimal setup.",
"Understanding predictive models, in terms of interpreting and identifying actionable insights, is a challenging task. Often the importance of a feature in a model is only a rough estimate condensed into one number. However, our research goes beyond these naive estimates through the design and implementation of an interactive visual analytics system, Prospector. By providing interactive partial dependence diagnostics, data scientists can understand how features affect the prediction overall. In addition, our support for localized inspection allows data scientists to understand how and why specific datapoints are predicted as they are, as well as support for tweaking feature values and seeing how the prediction responds. Our system is then evaluated using a case study involving a team of data scientists improving predictive models for detecting the onset of diabetes from electronic medical records.",
"Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one. In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally varound the prediction. We also propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. We demonstrate the flexibility of these methods by explaining different models for text (e.g. random forests) and image classification (e.g. neural networks). We show the utility of explanations via novel experiments, both simulated and with human subjects, on various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and identifying why a classifier should not be trusted."
]
} |
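One concrete reading of "intelligently modifying model inputs so that the resulting models can describe how outputs are affected" from the record above is a local linear surrogate. The sketch below is a hedged, LIME-flavored illustration only: it is not the LIME library's API, and the Gaussian sampling scale, proximity kernel, and helper name are assumptions:

```python
import numpy as np

def local_surrogate(black_box, x, n_samples=500, scale=0.1, seed=0):
    """Perturb the input around x, query the black box, and fit a
    proximity-weighted linear model; its coefficients act as a local
    explanation of how each feature affects the output."""
    rng = np.random.default_rng(seed)
    X = x + rng.normal(0.0, scale, size=(n_samples, x.size))      # perturbed inputs
    y = np.array([black_box(row) for row in X])                   # black-box outputs
    w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * scale ** 2))  # proximity weights
    Xb = np.hstack([X, np.ones((n_samples, 1))])                  # add an intercept
    sw = np.sqrt(w)[:, None]                                      # weighted least squares
    coef, *_ = np.linalg.lstsq(Xb * sw, y * sw[:, 0], rcond=None)
    return coef[:-1]                                              # per-feature local effects

# Example: near x = (1, 2), feature 1 should dominate the local explanation.
print(local_surrogate(lambda v: 3.0 * v[1] + 0.1 * v[0] ** 2,
                      x=np.array([1.0, 2.0])))  # roughly [0.2, 3.0]
```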
1702.07826 | 2594164664 | We introduce AI rationalization, an approach for generating explanations of autonomous system behavior as if a human had performed the behavior. We describe a rationalization technique that uses neural machine translation to translate internal state-action representations of an autonomous agent into natural language. We evaluate our technique in the Frogger game environment, training an autonomous game playing agent to rationalize its action choices using natural language. A natural language training corpus is collected from human players thinking out loud as they play the game. We motivate the use of rationalization as an approach to explanation generation and show the results of two experiments evaluating the effectiveness of rationalization. Results of these evaluations show that neural machine translation is able to accurately generate rationalizations that describe agent behavior, and that rationalizations are more satisfying to humans than other alternative methods of explanation. | Explainable AI has been explored in the context of ad-hoc techniques for transforming simulation logs to explanations @cite_4 , intelligent tutoring systems @cite_15 , transforming AI plans into natural language @cite_18 , and translating multiagent communication policies into natural language @cite_17 . Our work differs in that the generated rationalizations do not need to be truly representative of the algorithm's decision-making process. This is a novel way of applying explainable AI techniques to sequential decision-making in stochastic domains. | {
"cite_N": [
"@cite_15",
"@cite_4",
"@cite_18",
"@cite_17"
],
"mid": [
"2962861173",
"",
"2135583756",
"2906586541"
],
"abstract": [
"We aim to produce predictive models that are not only accurate, but are also interpretable to human experts. Our models are decision lists, which consist of a series of if...then... statements (for example, if high blood pressure, then stroke) that discretize a high-dimensional, multivariate feature space into a series of simple, readily interpretable decision statements. We introduce a generative model called Bayesian Rule Lists that yields a posterior distribution over possible decision lists. It employs a novel prior structure to encourage sparsity. Our experiments show that Bayesian Rule Lists has predictive accuracy on par with the current top algorithms for prediction in machine learning. Our method is motivated by recent developments in personalized medicine, and can be used to produce highly accurate and interpretable medical scoring systems. We demonstrate this by producing an alternative to the CHADS2 score, actively used in clinical practice for estimating the risk of stroke in patients that have atrial fibrillation. Our model is as interpretable as CHADS2, but more accurate.",
"",
"Opponent behavior in today's computer games is often the result of a static set of Artificial Intelligence (AI) behaviors or a fixed AI script. While this ensures that the behavior is reasonably intelligent, it also results in very predictable behavior. This can have an impact on the replayability of entertainment-based games and the educational value of training-based games. This paper proposes a move away from static, scripted AI by using a combination of deliberative and reactive planning. The deliberative planning (or Strategic AI) system creates a novel strategy for the AI opponent before each gaming session. The reactive planning (or Tactical AI) system executes this strategy in real-time and adapts to the player and the environment. These two systems, in conjunction with a future automated director module, form the Adaptive Opponent Architecture. This paper describes the architecture and the details of the deliberative and reactive planning components.",
"Several approaches have recently been proposed for learning decentralized deep multiagent policies that coordinate via a differentiable communication channel. While these policies are effective for many tasks, interpretation of their induced communication strategies has remained a challenge. Here we propose to interpret agents' messages by translating them. Unlike in typical machine translation problems, we have no parallel data to learn from. Instead we develop a translation model based on the insight that agent messages and natural language strings mean the same thing if they induce the same belief about the world in a listener. We present theoretical guarantees and empirical evidence that our approach preserves both the semantics and pragmatics of messages by ensuring that players communicating through a translation layer do not suffer a substantial loss in reward relative to players with a common language."
]
} |
1702.08005 | 2950759632 | Retaining players over an extended period of time is a long-standing challenge in the game industry. Significant effort has been devoted to understanding what motivates players to enjoy games. While individuals may have varying reasons to play or abandon a game at different stages within the game, previous studies have looked at the retention problem from a snapshot view. This study, by analyzing in-game logs of 51,104 distinct individuals in an online multiplayer game, uniquely offers a multifaceted view of the retention problem over the players' virtual life phases. We find that key indicators of longevity change with the game level. Achievement features are important for players from the initial to the advanced phases, yet social features become the most predictive of longevity once players reach the highest level offered by the game. These findings have theoretical and practical implications for designing online games that adapt to players' needs. | Among recent findings, @cite_20 asked World of Warcraft players about their motivation for play and game usage patterns through questionnaire surveys and found that socially motivated players are more likely to discontinue games, while achievement-oriented players tend to continue. @cite_2 built a prediction model of player motivation from log data. From data mining experiments using player activity logs from EverQuest II, they found that achievement is a dominant motivation for predicting player churn (i.e., the opposite of player retention). The above studies consistently report that achievement is a major motivation for retention in online games. On the other hand, some studies found social activity to be more important for retention. Based on log data from EverQuest II, a study showed that social influences from peers help predict player retention better @cite_9. | {
"cite_N": [
"@cite_9",
"@cite_20",
"@cite_2"
],
"mid": [
"",
"2059056730",
"2067225096"
],
"abstract": [
"",
"We analyze mechanisms of player retention and commitment in massively multiplayer online games. Our ground assumptions on player retention are based on a marketing model of customer retention and commitment. To measure the influence of gameplay, in-game sociality, and real-life status on player commitment, we use the following metrics: weekly play time, stop rate and number of years respondents have been playing the game. The cross-cultural sample is composed of 2865 World of Warcraft players from North-America, Europe, Taiwan, and Hong-Kong who completed an online questionnaire. We differentiate players in terms of demographic categories including age, region, gender and marital status.",
"In this paper, we investigate the problem of churn prediction in Massively multiplayer online role-playing games (MMORPGs) from a social science perspective and develop models incorporating theories of player motivation. The ability to predict player churn can be a valuable resource to game developers designing customer retention strategies. The results from our theory-driven model significantly outperform a diffusion-based churn prediction model on the same dataset. We describe the synthesis between a theory-driven approach and a data-driven approach to a problem and examine the trade-offs involved between the two approaches in terms of prediction accuracy, interpretability and model complexity. We observe that even though the theory-driven model is not as accurate as the data-driven one, the theory-driven model itself can be more interpretable to the domain experts and hence, more preferable over a complex data-driven model. We perform lift analysis of the two models and find that if a marketing effort is restricted in the number of customers it can contact, the theory-driven model would offer much better return-on-investment by identifying more customers among that restricted set who have the highest probability of churn. Finally, we use a clustering technique to partition the dataset and then build an ensemble on the partitioned dataset for better performance. Experiment results show that the ensemble performs notably better than the single classifier in terms of its recall value, which is a highly desirable property in the churn prediction problem."
]
} |
1702.08005 | 2950759632 | Retaining players over an extended period of time is a long-standing challenge in the game industry. Significant effort has been devoted to understanding what motivates players to enjoy games. While individuals may have varying reasons to play or abandon a game at different stages within the game, previous studies have looked at the retention problem from a snapshot view. This study, by analyzing in-game logs of 51,104 distinct individuals in an online multiplayer game, uniquely offers a multifaceted view of the retention problem over the players' virtual life phases. We find that key indicators of longevity change with the game level. Achievement features are important for players from the initial to the advanced phases, yet social features become the most predictive of longevity once players reach the highest level offered by the game. These findings have theoretical and practical implications for designing online games that adapt to players' needs. | Most recently, another group of researchers observed that in-game interactions with toxic players can have negative impacts on retention in League of Legends @cite_5. As cyberbullying has been considered one of the factors that make players feel annoyed and fatigued, and even leave the game @cite_15, there have been many efforts to define, detect, and prevent toxic playing in online games @cite_17 @cite_18 @cite_14. However, in this work, we do not investigate the effect of cyberbullying on player engagement due to the limitations of our dataset. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_5",
"@cite_15",
"@cite_17"
],
"mid": [
"2160600179",
"1909046735",
"",
"1568035076",
"2949748152"
],
"abstract": [
"One problem facing players of competitive games is negative, or toxic, behavior. League of Legends, the largest eSport game, uses a crowdsourcing platform called the Tribunal to judge whether a reported toxic player should be punished or not. The Tribunal is a two stage system requiring reports from those players that directly observe toxic behavior, and human experts that review aggregated reports. While this system has successfully dealt with the vague nature of toxic behavior by majority rules based on many votes, it naturally requires tremendous cost, time, and human efforts. In this paper, we propose a supervised learning approach for predicting crowdsourced decisions on toxic behavior with large-scale labeled data collections; over 10 million user reports involved in 1.46 million toxic players and corresponding crowdsourced decisions. Our result shows good performance in detecting overwhelmingly majority cases and predicting crowdsourced decisions on them. We demonstrate good portability of our classifier across regions. Finally, we estimate the practical implications of our approach, potential cost savings and victim protection.",
"In this paper we explore the linguistic components of toxic behavior by using crowdsourced data from over 590 thousand cases of accused toxic players in a popular match-based competition game, League of Legends. We perform a series of linguistic analyses to gain a deeper understanding of the role communication plays in the expression of toxic behavior. We characterize linguistic behavior of toxic players and compare it with that of typical players in an online competition game. We also find empirical support describing how a player transitions from typical to toxic behavior. Our findings can be helpful to automatically detect and warn players who may become toxic and thus insulate potential victims from toxic playing in advance.",
"",
"From the Publisher: A soup-to-nuts overview of just what it takes to successfully design, develop and manage an online game. Learn from the top two online game developers through the real-world successes and mistakes not known to others. There are Case studies from 10+ industry leaders, including Raph Koster, J. Baron, R. Bartle, D. Schubert, A. Macris, and more! Covers all types of online games: Retail Hybrids, Persistent Worlds, and console games. Developing Online Games provides insight into designing, developing and managing online games that is available nowhere else. Online game programming guru Jessica Mulligan and seasoned exec Bridgette Patrovsky provide insights into the industry that will allow others entering this market to avoid the mistakes of the past. In addition to their own experiences, the authors provide interviews, insight and anecdotes from over twenty of the most well-known and experienced online game insiders. The book includes case studies of the successes and failures of today's most well-known online games. There is also a special section for senior executives on how to budget an online game and how to assemble the right development and management teams. The book ends with a look at the future of online gaming: not only online console gaming (Xbox Online, Playstation 2), but the emerging mobile device game market (cell phones, wireless, PDA).",
"In this work we explore cyberbullying and other toxic behavior in team competition online games. Using a dataset of over 10 million player reports on 1.46 million toxic players along with corresponding crowdsourced decisions, we test several hypotheses drawn from theories explaining toxic behavior. Besides providing large-scale, empirical based understanding of toxic behavior, our work can be used as a basis for building systems to detect, prevent, and counter-act toxic behavior."
]
} |
1702.08005 | 2950759632 | Retaining players over an extended period of time is a long-standing challenge in the game industry. Significant effort has been devoted to understanding what motivates players to enjoy games. While individuals may have varying reasons to play or abandon a game at different stages within the game, previous studies have looked at the retention problem from a snapshot view. This study, by analyzing in-game logs of 51,104 distinct individuals in an online multiplayer game, uniquely offers a multifaceted view of the retention problem over the players' virtual life phases. We find that key indicators of longevity change with the game level. Achievement features are important for players from the initial to the advanced phases, yet social features become the most predictive of longevity once players reach the highest level offered by the game. These findings have theoretical and practical implications for designing online games that adapt to players' needs. | While many studies have contributed to understanding player retention, most of the findings have been drawn from a snapshot view---players aggregated by demographic features, without considering their phase within a game. Like human life itself, players face different challenges and engage in specific actions depending on their levels. For example, @cite_11 observed that online game players are more likely to play alone at an early stage, but become socially active at higher levels. Players need to collaborate with one another to defeat strong monsters or complete difficult quests as their level rises. Moreover, players enjoy an entirely different in-game experience once they achieve the maximum level, as they become socially active without consuming much game content @cite_4. This means that the factors leading to higher levels or to being retained may differ across the entire player lifetime within online games. | {
"cite_N": [
"@cite_4",
"@cite_11"
],
"mid": [
"2100677679",
"2119243634"
],
"abstract": [
"World of Warcraft (WoW) is one of the most popular massively multiplayer games (MMOs) to date, with more than 6 million subscribers worldwide. This article uses data collected over 8 months with automated “bots” to explore how WoW functions as a game. The focus is on metrics reflecting a player’s gaming experience: how long they play, the classes and races they prefer, and so on. The authors then discuss why and how players remain committed to this game, how WoW’s design partitions players into groups with varying backgrounds and aspirations, and finally how players “consume” the game’s content, with a particular focus on the endgame at Level 60 and the impact of player-versus-player-combat. The data illustrate how WoW refined a formula inherited from preceding MMOs. In several places, it also raises questions about WoW’s future growth and more generally about the ability of MMOs to evolve beyond their familiar template.",
"Massively Multiplayer Online Games (MMOGs) routinely attract millions of players but little empirical data is available to assess their players' social experiences. In this paper, we use longitudinal data collected directly from the game to examine play and grouping patterns in one of the largest MMOGs: World of Warcraft. Our observations show that the prevalence and extent of social activities in MMOGs might have been previously over-estimated, and that gaming communities face important challenges affecting their cohesion and eventual longevity. We discuss the implications of our findings for the design of future games and other online social spaces."
]
} |
1702.08042 | 2952397148 | Media failures usually leave database systems unavailable for several hours until recovery is complete, especially in applications with large devices and high transaction volume. Previous work introduced a technique called single-pass restore, which increases restore bandwidth and thus substantially decreases time to repair. Instant restore goes further as it permits read/write access to any data on a device undergoing restore--even data not yet restored--by restoring individual data segments on demand. Thus, the restore process is guided primarily by the needs of applications, and the observed mean time to repair is effectively reduced from several hours to a few seconds. This paper presents an implementation and evaluation of instant restore. The technique is incrementally implemented on a system starting with the traditional ARIES design for logging and recovery. Experiments show that the transaction latency perceived after a media failure can be cut down to less than a second and that the overhead imposed by the technique on normal processing is minimal. The net effect is that a few "nines" of availability are added to the system using simple and low-overhead software techniques. | Database literature traditionally considers three classes of database failures @cite_1 , which are summarized in Table (along with single-page failures, a fourth class to be discussed in ). In the scope of this paper, it is important to distinguish between system and media failures, which are conceptually quite different in their causes, effects, and recovery measures. System failures are usually caused by a software fault or power loss, and what is lost---hence what must be recovered---is the state of the server process in main memory; this typically entails recovering page images in the buffer pool (i.e., "repeating history" @cite_6) as well as lists of active transactions and their acquired locks, so that they can be properly aborted. The process of recovering from system failures is called restart. | {
"cite_N": [
"@cite_1",
"@cite_6"
],
"mid": [
"2010042648",
"2104954161"
],
"abstract": [
"In this paper, a terminological framework is provided for describing different transactionoriented recovery schemes for database systems in a conceptual rather than an implementation-dependent way. By introducing the terms materialized database, propagation strategy, and checkpoint, we obtain a means for classifying arbitrary implementations from a unified viewpoint. This is complemented by a classification scheme for logging techniques, which are precisely defined by using the other terms. It is shown that these criteria are related to all relevant questions such as speed and scope of recovery and amount of redundant information required. The primary purpose of this paper, however, is to establish an adequate and precise terminology for a topic in which the confusion of concepts and implementational aspects still imposes a lot of problems.",
"DB2 TM , IMS, and Tandem TM systems. ARIES is applicable not only to database management systems but also to persistent object-oriented languages, recoverable file systems and transaction-based operating systems. ARIES has been implemented, to varying degrees, in IBM's OS 2 TM Extended Edition Database Manager, DB2, Workstation Data Save Facility VM, Starburst and QuickSilver, and in the University of Wisconsin's EXODUS and Gamma database machine."
]
} |
1702.08042 | 2952397148 | Media failures usually leave database systems unavailable for several hours until recovery is complete, especially in applications with large devices and high transaction volume. Previous work introduced a technique called single-pass restore, which increases restore bandwidth and thus substantially decreases time to repair. Instant restore goes further as it permits read/write access to any data on a device undergoing restore--even data not yet restored--by restoring individual data segments on demand. Thus, the restore process is guided primarily by the needs of applications, and the observed mean time to repair is effectively reduced from several hours to a few seconds. This paper presents an implementation and evaluation of instant restore. The technique is incrementally implemented on a system starting with the traditional ARIES design for logging and recovery. Experiments show that the transaction latency perceived after a media failure can be cut down to less than a second and that the overhead imposed by the technique on normal processing is minimal. The net effect is that a few "nines" of availability are added to the system using simple and low-overhead software techniques. | In a media failure, which is the focus here, a persistent storage device fails but the system might continue running, serving transactions that only touch data in the buffer pool or on other healthy devices. If system and media failures happen simultaneously, or perhaps one as a cause of the other, their recovery processes are executed independently, and, by recovering pages in the buffer pool, the processes coordinate transparently. Readers are referred to the literature for further details @cite_23 . | {
"cite_N": [
"@cite_23"
],
"mid": [
"2018464987"
],
"abstract": [
"Transactional Information Systems is the long-awaited, comprehensive work from leading scientists in the transaction processing field. Weikum and Vossen begin with a broad look at the role of transactional technology in today's economic and scientific endeavors, then delve into critical issues faced by all practitioners, presenting today's most effective techniques for controlling concurrent access by multiple clients, recovering from system failures, and coordinating distributed transactions. The authors emphasize formal models that are easily applied across fields, that promise to remain valid as current technologies evolve, and that lend themselves to generalization and extension in the development of new classes of network-centric, functionally rich applications. This book's purpose and achievement is the presentation of the foundations of transactional systems as well as the practical aspects of the field what will help you meet today's challenges. * Provides the most advanced coverage of the topic available anywhere--along with the database background required for you to make full use of this material. * Explores transaction processing both generically as a broadly applicable set of information technology practices and specifically as a group of techniques for meeting the goals of your enterprise. * Contains information essential to developers of Web-based e-Commerce functionality--and a wide range of more \"traditional\" applications. * Details the algorithms underlying core transaction processing functionality. Table of Contents PART ONE - BACKGROUND AND MOTIVATION Chapter 1 What Is It All About? Chapter 2 Computational Models PART TWO - CONCURRENCY CONTROL Chapter 3 Concurrency Control: Notions of Correctness for the Page Model Chapter 4 Concurrency Control Algorithms Chapter 5 Multiversion Concurrency Control Chapter 6 Concurrency Control on Objects: Notions of Correctness Chapter 7 Concurrency Control Algorithms on Objects Chapter 8 Concurrency Control on Relational Databases Chapter 9 Concurrency Control on Search Structures Chapter 10 Implementation and Pragmatic Issues PART THREE - RECOVERY Chapter 11 Transaction Recovery Chapter 12 Crash Recovery: Notion of Correctness Chapter 13 Page Model Crash Recovery Algorithms Chapter 14 Object Model Crash Recovery Chapter 15 Special Issues of Recovery Chapter 16 Media Recovery Chapter 17 Application Recovery PART FOUR - COORDINATION OF DISTRIBUTED TRANSACTIONS Chapter 18 Distributed Concurrency Control Chapter 19 Distributed Transaction Recovery PART FIVE - APPLICATIONS AND FUTURE PERSPECTIVES Chapter 20 What Is Next?"
]
} |
1702.08042 | 2952397148 | Media failures usually leave database systems unavailable for several hours until recovery is complete, especially in applications with large devices and high transaction volume. Previous work introduced a technique called single-pass restore, which increases restore bandwidth and thus substantially decreases time to repair. Instant restore goes further as it permits read/write access to any data on a device undergoing restore--even data not yet restored--by restoring individual data segments on demand. Thus, the restore process is guided primarily by the needs of applications, and the observed mean time to repair is effectively reduced from several hours to a few seconds. This paper presents an implementation and evaluation of instant restore. The technique is incrementally implemented on a system starting with the traditional ARIES design for logging and recovery. Experiments show that the transaction latency perceived after a media failure can be cut down to less than a second and that the overhead imposed by the technique on normal processing is minimal. The net effect is that a few "nines" of availability are added to the system using simple and low-overhead software techniques. | Techniques to recover databases from media failures were initially presented in the seminal work of Gray @cite_15 and later incorporated into the ARIES family of recovery algorithms @cite_6 . In ARIES, restore after a media failure first loads a backup image and then applies a redo log scan, similar to the redo scan of restart after a system failure. Fig. illustrates the process, which we now briefly describe. After loading full and incremental backups into the replacement device, a sequential scan is performed on the log archive and each update is replayed on its corresponding page in the buffer pool. A global value (called the "media recovery redo point" by @cite_6) is maintained on backup devices to determine the begin point of the log scan (see the sketch after this record). | {
"cite_N": [
"@cite_15",
"@cite_6"
],
"mid": [
"2106887953",
"2104954161"
],
"abstract": [
"This paper is a compendium of data base management operating systems folklore. It is an early paper and is still in draft form. It is intended as a set of course notes for a class on data base operating systems. After a brief overview of what a data management system is it focuses on particular issues unique to the transaction management component especially locking and recovery.",
"DB2 TM , IMS, and Tandem TM systems. ARIES is applicable not only to database management systems but also to persistent object-oriented languages, recoverable file systems and transaction-based operating systems. ARIES has been implemented, to varying degrees, in IBM's OS 2 TM Extended Edition Database Manager, DB2, Workstation Data Save Facility VM, Starburst and QuickSilver, and in the University of Wisconsin's EXODUS and Gamma database machine."
]
} |
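The ARIES-style media restore described in the record above (load backups onto the replacement device, then replay the log archive sequentially through the buffer pool, guarded by the media recovery redo point) can be made concrete with a schematic sketch. This is a hedged illustration, not the ARIES implementation: the record and page layouts, the `redo` callback, and the single-dictionary "device" are all assumptions, and the sketch assumes every logged page is present in the backup image:

```python
from collections import namedtuple

# Schematic log record: log sequence number, affected page, and a redo action.
LogRecord = namedtuple("LogRecord", ["lsn", "page_id", "redo"])

def media_restore(backup_pages, log_archive, redo_point):
    """Restore a replacement device: load the backup image, then scan the
    log archive from the media recovery redo point and replay each update."""
    # 1. Load full (and incremental) backups onto the replacement device.
    #    Each page carries the LSN of its last update at backup time.
    device = {pid: dict(page) for pid, page in backup_pages.items()}
    # 2. Sequential scan of the log archive, strictly in LSN order.
    for rec in sorted(log_archive, key=lambda r: r.lsn):
        if rec.lsn < redo_point:
            continue                    # update already contained in the backup
        page = device[rec.page_id]      # random fetch of the page (buffer pool)
        if rec.lsn > page["lsn"]:       # per-page LSN test: apply each update once
            page["data"] = rec.redo(page["data"])
            page["lsn"] = rec.lsn
    return device
```

Note how the loop fetches pages in whatever order the log dictates; the next record's related-work text identifies exactly this random access pattern as the reason restore can take hours.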
1702.08042 | 2952397148 | Media failures usually leave database systems unavailable for several hours until recovery is complete, especially in applications with large devices and high transaction volume. Previous work introduced a technique called single-pass restore, which increases restore bandwidth and thus substantially decreases time to repair. Instant restore goes further as it permits read/write access to any data on a device undergoing restore--even data not yet restored--by restoring individual data segments on demand. Thus, the restore process is guided primarily by the needs of applications, and the observed mean time to repair is effectively reduced from several hours to a few seconds. This paper presents an implementation and evaluation of instant restore. The technique is incrementally implemented on a system starting with the traditional ARIES design for logging and recovery. Experiments show that the transaction latency perceived after a media failure can be cut down to less than a second and that the overhead imposed by the technique on normal processing is minimal. The net effect is that a few "nines" of availability are added to the system using simple and low-overhead software techniques. | Because log records are ordered strictly by LSN, pages are read into the buffer pool in random order, as illustrated in the restoration of pages A and B in Fig. . Furthermore, as the buffer pool fills up, they are also written in random order into the replacement device, except perhaps for some minor degree of clustering. As the log scan progresses, evicted pages might be read multiple times, also randomly. This mechanism is quite inefficient, especially for magnetic drives with high access latencies. Thus, it is no surprise that multiple hours of downtime are required in systems with high-capacity drives and high transaction rates @cite_17 . | {
"cite_N": [
"@cite_17"
],
"mid": [
"2403140293"
],
"abstract": [
"When persistent storage fails, traditional media recovery first restores an old backup image followed by replaying the recovery log since the last backup operation. Restoring a backup can take hours, but log replay often takes much longer due to its random access pattern. We introduce single-pass restore, a technique in which restoration of all backups and log replay are performed in a single operation. This allows hiding log replay within the initial restore of the backup, thus substantially reducing the time and cost of media recovery and, incidentally, rendering incremental backup techniques unnecessary. Single-pass restore is enabled by a new organization of the log archive, created by a continuous process that is easily incorporated into the traditional log archiving process. Our empirical analysis shows that the imposed overhead is negligible in comparison with the substantial benefits provided."
]
} |
1702.08042 | 2952397148 | Media failures usually leave database systems unavailable for several hours until recovery is complete, especially in applications with large devices and high transaction volume. Previous work introduced a technique called single-pass restore, which increases restore bandwidth and thus substantially decreases time to repair. Instant restore goes further as it permits read/write access to any data on a device undergoing restore--even data not yet restored--by restoring individual data segments on demand. Thus, the restore process is guided primarily by the needs of applications, and the observed mean time to repair is effectively reduced from several hours to a few seconds. This paper presents an implementation and evaluation of instant restore. The technique is incrementally implemented on a system starting with the traditional ARIES design for logging and recovery. Experiments show that the transaction latency perceived after a media failure can be cut down to less than a second and that the overhead imposed by the technique on normal processing is minimal. The net effect is that a few "nines" of availability are added to the system using simple and low-overhead software techniques. | Despite a variety of optimizations proposed to the basic ARIES algorithm @cite_6 @cite_11 @cite_21 , none of them solves these problems in a general and effective manner. In summary, all proposed techniques that enable earlier access to recovered data items suffer from the same problem: early access is only provided for data for which early access is not really needed---hot data in the application working set is not prioritized and most accesses must wait for complete recovery. | {
"cite_N": [
"@cite_21",
"@cite_6",
"@cite_11"
],
"mid": [
"2097035957",
"2104954161",
""
],
"abstract": [
"A method for managing a remote backup database to provide protection from disasters that destroy the primary database is presented. The method is general enough to accommodate the ARIES-type recovery and concurrency control methods as well as the methods used by other systems such as DB2, DL I and IMS Fast Path. It provides high performance by exploiting parallelism and by reducing inputs and outputs using different means, like log analysis and choosing a different buffer management policy from the primary one. Techniques are proposed for checkpointing the state of the backup system so that recovery can be performed quickly in case the backup system fails, and for allowing new transaction activity to begin even as the backup is taking over a primary failure. Some performance measurements taken from a prototype are also presented. >",
"DB2 TM , IMS, and Tandem TM systems. ARIES is applicable not only to database management systems but also to persistent object-oriented languages, recoverable file systems and transaction-based operating systems. ARIES has been implemented, to varying degrees, in IBM's OS 2 TM Extended Edition Database Manager, DB2, Workstation Data Save Facility VM, Starburst and QuickSilver, and in the University of Wisconsin's EXODUS and Gamma database machine.",
""
]
} |
1702.08042 | 2952397148 | Media failures usually leave database systems unavailable for several hours until recovery is complete, especially in applications with large devices and high transaction volume. Previous work introduced a technique called single-pass restore, which increases restore bandwidth and thus substantially decreases time to repair. Instant restore goes further as it permits read/write access to any data on a device undergoing restore--even data not yet restored--by restoring individual data segments on demand. Thus, the restore process is guided primarily by the needs of applications, and the observed mean time to repair is effectively reduced from several hours to a few seconds. This paper presents an implementation and evaluation of instant restore. The technique is incrementally implemented on a system starting with the traditional ARIES design for logging and recovery. Experiments show that the transaction latency perceived after a media failure can be cut down to less than a second and that the overhead imposed by the technique on normal processing is minimal. The net effect is that a few "nines" of availability are added to the system using simple and low-overhead software techniques. | Finally, industrial database systems that implement ARIES recovery suffer from the same problems. IBM’s DB2 speeds up log replay by sorting log records after restoring the backup and before applying the log records to the replacement database @cite_9 . While a sorted log enables a more efficient access pattern, incremental and on-demand restoration is not provided. Furthermore, the delay imposed by the offline sort may be as high as the total downtime incurred by the traditional method. (A sketch of this sort-before-replay approach follows this row.) | {
"cite_N": [
"@cite_9"
],
"mid": [
"1812818645"
],
"abstract": [
"The present invention discloses a technique for restoring a database in a computer. In accordance with the present invention, the database contains objects and is stored on a data storage device connected to the computer. After a system failure, a log file is read. The log file contains one or more modifications to the database objects. Each modification has an associated data page and time stamp or sequence number. The modifications are sorted by at least one predefined sorting key value. The sorted modifications are then grouped by database object. The sorted modifications are applied to each database object in parallel."
]
} |
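
A hypothetical sketch of the sort-before-replay idea attributed to DB2 above, continuing the toy `Page`/`LogRecord` types from the earlier sketch: sorting the log tail by page identifier (keeping LSN order within each page) replaces the random per-record page accesses with one pass per page. The offline sort itself, however, is pure delay before any data becomes accessible, which is the drawback the row points out.

```python
def sorted_log_replay(replacement, log_archive, redo_point):
    """Sort log records by (page_id, lsn), then apply them page by page."""
    tail = [r for r in log_archive if r.lsn >= redo_point]
    tail.sort(key=lambda r: (r.page_id, r.lsn))   # offline sort: delays all access
    for rec in tail:                              # sequential access, one page at a time
        page = replacement[rec.page_id]
        if page.page_lsn < rec.lsn:               # redo only if not already applied
            page.data[rec.key] = rec.value
            page.page_lsn = rec.lsn
```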
1702.08042 | 2952397148 | Media failures usually leave database systems unavailable for several hours until recovery is complete, especially in applications with large devices and high transaction volume. Previous work introduced a technique called single-pass restore, which increases restore bandwidth and thus substantially decreases time to repair. Instant restore goes further as it permits read/write access to any data on a device undergoing restore--even data not yet restored--by restoring individual data segments on demand. Thus, the restore process is guided primarily by the needs of applications, and the observed mean time to repair is effectively reduced from several hours to a few seconds. This paper presents an implementation and evaluation of instant restore. The technique is incrementally implemented on a system starting with the traditional ARIES design for logging and recovery. Experiments show that the transaction latency perceived after a media failure can be cut down to less than a second and that the overhead imposed by the technique on normal processing is minimal. The net effect is that a few "nines" of availability are added to the system using simple and low-overhead software techniques. | Recent proposals for recovery on both volatile and non-volatile in-memory systems usually ignore the problem of media failures, employing the unspecific term "recovery" to describe system restart only @cite_7 @cite_25 @cite_12 @cite_29 @cite_5 . Therefore, recovery from media failures in modern systems either relies on the traditional techniques or is simply not supported, employing replication as the only means to maintain service upon storage hardware faults. As discussed above, while relying on replication is a valid solution to increase mean time to failure, a highly available system must also provide efficient repair facilities. In this aspect, traditional database system designs---using ARIES physiological logging and buffer management---provide more reliable behavior. Therefore, we believe that improving traditional techniques for more efficient recovery with low overhead on memory-optimized workloads is an important open research challenge. | {
"cite_N": [
"@cite_7",
"@cite_29",
"@cite_5",
"@cite_25",
"@cite_12"
],
"mid": [
"2397097813",
"1988852271",
"2164999122",
"2071414195",
"2102729946"
],
"abstract": [
"Hekaton is a new OLTP engine optimized for memory resident data and fully integrated into SQL Server; a database can contain both regular disk-based tables and in-memory tables. In-memory (a.k.a.Hekaton) tables are fully durable and accessed using standard T-SQL. A query can reference both Hekaton tables and regular tables and a transaction can update data in both types of tables. T-SQL stored procedures that reference only Hekaton tables are compiled into machine code for further performance improvements. To allow for high concurrency the engine uses latch-free data structures and optimistic, multiversion concurrency control. This paper gives an overview of the design of the Hekaton engine and reports some initial results.",
"The advent of non-volatile memory (NVM) will fundamentally change the dichotomy between memory and durable storage in database management systems (DBMSs). These new NVM devices are almost as fast as DRAM, but all writes to it are potentially persistent even after power loss. Existing DBMSs are unable to take full advantage of this technology because their internal architectures are predicated on the assumption that memory is volatile. With NVM, many of the components of legacy DBMSs are unnecessary and will degrade the performance of data intensive applications. To better understand these issues, we implemented three engines in a modular DBMS testbed that are based on different storage management architectures: (1) in-place updates, (2) copy-on-write updates, and (3) log-structured updates. We then present NVM-aware variants of these architectures that leverage the persistence and byte-addressability properties of NVM in their storage and recovery methods. Our experimental evaluation on an NVM hardware emulator shows that these engines achieve up to 5.5X higher throughput than their traditional counterparts while reducing the amount of wear due to write operations by up to 2X. We also demonstrate that our NVM-aware recovery protocols allow these engines to recover almost instantaneously after the DBMS restarts.",
"Storage Class Memory (SCM) has the potential to significantly improve database performance. This potential has been well documented for throughput [4] and response time [25, 22]. In this paper we show that SCM has also the potential to significantly improve restart performance, a shortcoming of traditional main memory database systems. We present SOFORT, a hybrid SCM-DRAM storage engine that leverages full capabilities of SCM by doing away with a traditional log and updating the persisted data in place in small increments. We show that we can achieve restart times of a few seconds independent of instance size and transaction volume without significantly impacting transaction throughput.",
"Fine-grained, record-oriented write-ahead logging, as exemplified by systems like ARIES, has been the gold standard for relational database recovery. In this paper, we show that in modern high-throughput transaction processing systems, this is no longer the optimal way to recover a database system. In particular, as transaction throughputs get higher, ARIES-style logging starts to represent a non-trivial fraction of the overall transaction execution time.",
"The two areas of online transaction processing (OLTP) and online analytical processing (OLAP) present different challenges for database architectures. Currently, customers with high rates of mission-critical transactions have split their data into two separate systems, one database for OLTP and one so-called data warehouse for OLAP. While allowing for decent transaction rates, this separation has many disadvantages including data freshness issues due to the delay caused by only periodically initiating the Extract Transform Load-data staging and excessive resource consumption due to maintaining two separate information systems. We present an efficient hybrid system, called HyPer, that can handle both OLTP and OLAP simultaneously by using hardware-assisted replication mechanisms to maintain consistent snapshots of the transactional data. HyPer is a main-memory database system that guarantees the ACID properties of OLTP transactions and executes OLAP query sessions (multiple queries) on the same, arbitrarily current and consistent snapshot. The utilization of the processor-inherent support for virtual memory management (address translation, caching, copy on update) yields both at the same time: unprecedentedly high transaction rates as high as 100000 per second and very fast OLAP query response times on a single system executing both workloads in parallel. The performance analysis is based on a combined TPC-C and TPC-H benchmark."
]
} |
1702.08042 | 2952397148 | Media failures usually leave database systems unavailable for several hours until recovery is complete, especially in applications with large devices and high transaction volume. Previous work introduced a technique called single-pass restore, which increases restore bandwidth and thus substantially decreases time to repair. Instant restore goes further as it permits read/write access to any data on a device undergoing restore--even data not yet restored--by restoring individual data segments on demand. Thus, the restore process is guided primarily by the needs of applications, and the observed mean time to repair is effectively reduced from several hours to a few seconds. This paper presents an implementation and evaluation of instant restore. The technique is incrementally implemented on a system starting with the traditional ARIES design for logging and recovery. Experiments show that the transaction latency perceived after a media failure can be cut down to less than a second and that the overhead imposed by the technique on normal processing is minimal. The net effect is that a few "nines" of availability are added to the system using simple and low-overhead software techniques. | While the benefit of on-demand and incremental restore is a major advantage over traditional ARIES recovery, this algorithm still suffers from the first deficiency discussed in ---namely the inefficient access pattern. The authors of the original publication even foresee the application to media failures @cite_28 , arguing that while a page is the unit of recovery, multiple pages can be repaired in bulk in a coordinated fashion. However, the access pattern with larger restoration granules would approach that of traditional ARIES restore---i.e., random access during log replay. Thus, while the technique introduces a very useful degree of flexibility, it does not provide a unified solution for the two deficiencies discussed. | {
"cite_N": [
"@cite_28"
],
"mid": [
"2112774785"
],
"abstract": [
"The three traditional failure classes are system, media, and transaction failures. Sometimes, however, modern storage exhibits failures that differ from all of those. In order to capture and describe such cases, single-page failures are introduced as a fourth failure class. This class encompasses all failures to read a data page correctly and with plausible contents despite all correction attempts in lower system levels. Efficient recovery seems to require a new data structure called the page recovery index. Its transactional maintenance can be accomplished writing the same number of log records as today's efficient implementations of logging and recovery. Detection and recovery of a single-page failure can be sufficiently fast that the affected data access is merely delayed, without the need to abort the transaction."
]
} |
1702.08042 | 2952397148 | Media failures usually leave database systems unavailable for several hours until recovery is complete, especially in applications with large devices and high transaction volume. Previous work introduced a technique called single-pass restore, which increases restore bandwidth and thus substantially decreases time to repair. Instant restore goes further as it permits read/write access to any data on a device undergoing restore--even data not yet restored--by restoring individual data segments on demand. Thus, the restore process is guided primarily by the needs of applications, and the observed mean time to repair is effectively reduced from several hours to a few seconds. This paper presents an implementation and evaluation of instant restore. The technique is incrementally implemented on a system starting with the traditional ARIES design for logging and recovery. Experiments show that the transaction latency perceived after a media failure can be cut down to less than a second and that the overhead imposed by the technique on normal processing is minimal. The net effect is that a few "nines" of availability are added to the system using simple and low-overhead software techniques. | Our previous work introduced a technique called single-pass restore, which aims to perform media recovery in a single sequential pass over both backup and log archive devices @cite_17 . Eliminating random access effectively addresses the first deficiency discussed in . This is achieved by partially sorting the log on page identifiers, using a stable sort to maintain LSN order within log records of the same page. The access pattern is essentially the same as that of a sort-merge join: external sort with run generation and merge followed by another merge between the two inputs---log and backup in the media recovery case. (A toy sketch of this single pass follows this row.) | {
"cite_N": [
"@cite_17"
],
"mid": [
"2403140293"
],
"abstract": [
"When persistent storage fails, traditional media recovery first restores an old backup image followed by replaying the recovery log since the last backup operation. Restoring a backup can take hours, but log replay often takes much longer due to its random access pattern. We introduce single-pass restore, a technique in which restoration of all backups and log replay are performed in a single operation. This allows hiding log replay within the initial restore of the backup, thus substantially reducing the time and cost of media recovery and, incidentally, rendering incremental backup techniques unnecessary. Single-pass restore is enabled by a new organization of the log archive, created by a continuous process that is easily incorporated into the traditional log archiving process. Our empirical analysis shows that the imposed overhead is negligible in comparison with the substantial benefits provided."
]
} |
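
A toy illustration of the single pass described above, under the same assumptions as the earlier sketches: the log tail is stably sorted on page identifier (Python's sort is stable, so LSN order is preserved within each page) and then merged with a sequentially read backup, in the manner of a sort-merge join.

```python
from itertools import groupby

def single_pass_restore(backup_stream, log_archive, redo_point, replacement):
    """One sequential pass over backup and page-sorted log (sort-merge-join style)."""
    tail = [r for r in log_archive if r.lsn >= redo_point]
    tail.sort(key=lambda r: r.page_id)       # stable: LSN order kept within each page
    by_page = {pid: list(recs)
               for pid, recs in groupby(tail, key=lambda r: r.page_id)}

    # backup_stream is assumed to yield Page objects ordered by page_id,
    # so both inputs are consumed strictly sequentially.
    for page in backup_stream:
        for rec in by_page.get(page.page_id, ()):  # matching records, in LSN order
            if page.page_lsn < rec.lsn:
                page.data[rec.key] = rec.value
                page.page_lsn = rec.lsn
        replacement[page.page_id] = page           # restored page written sequentially
```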
1702.08042 | 2952397148 | Media failures usually leave database systems unavailable for several hours until recovery is complete, especially in applications with large devices and high transaction volume. Previous work introduced a technique called single-pass restore, which increases restore bandwidth and thus substantially decreases time to repair. Instant restore goes further as it permits read/write access to any data on a device undergoing restore--even data not yet restored--by restoring individual data segments on demand. Thus, the restore process is guided primarily by the needs of applications, and the observed mean time to repair is effectively reduced from several hours to a few seconds. This paper presents an implementation and evaluation of instant restore. The technique is incrementally implemented on a system starting with the traditional ARIES design for logging and recovery. Experiments show that the transaction latency perceived after a media failure can be cut down to less than a second and that the overhead imposed by the technique on normal processing is minimal. The net effect is that a few "nines" of availability are added to the system using simple and low-overhead software techniques. | The idea itself is as old as the first recovery algorithms (see Section 5.8.5.1 of Gray's paper @cite_15 ) and is even employed in DB2's "fast log apply" @cite_9 . However, the key advantage of single-pass restore is that the two phases of the sorting process---run generation and merge---are performed independently: sorted runs are generated during the log archiving process (i.e., moving log records from the latency-optimized transaction log device into high-capacity, bandwidth-optimized secondary storage) with negligible overhead; the merge phase, on the other hand, happens both asynchronously as a maintenance service and also during media recovery, in order to obtain a single sorted log stream for recovery. Importantly, merging runs of the log archive and applying the log records to backed-up pages can be done in a single sequential pass, similar to a merge join. The process is illustrated in Fig. . We refer to the original publication for further details @cite_17 . (A sketch of run generation and merging follows this row.) | {
"cite_N": [
"@cite_9",
"@cite_15",
"@cite_17"
],
"mid": [
"1812818645",
"2106887953",
"2403140293"
],
"abstract": [
"The present invention discloses a technique for restoring a database in a computer. In accordance with the present invention, the database contains objects and is stored on a data storage device connected to the computer. After a system failure, a log file is read. The log file contains one or more modifications to the database objects. Each modification has an associated data page and time stamp or sequence number. The modifications are sorted by at least one predefined sorting key value. The sorted modifications are then grouped by database object. The sorted modifications are applied to each database object in parallel.",
"This paper is a compendium of data base management operating systems folklore. It is an early paper and is still in draft form. It is intended as a set of course notes for a class on data base operating systems. After a brief overview of what a data management system is it focuses on particular issues unique to the transaction management component especially locking and recovery.",
"When persistent storage fails, traditional media recovery first restores an old backup image followed by replaying the recovery log since the last backup operation. Restoring a backup can take hours, but log replay often takes much longer due to its random access pattern. We introduce single-pass restore, a technique in which restoration of all backups and log replay are performed in a single operation. This allows hiding log replay within the initial restore of the backup, thus substantially reducing the time and cost of media recovery and, incidentally, rendering incremental backup techniques unnecessary. Single-pass restore is enabled by a new organization of the log archive, created by a continuous process that is easily incorporated into the traditional log archiving process. Our empirical analysis shows that the imposed overhead is negligible in comparison with the substantial benefits provided."
]
} |
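
The run-generation/merge split described above can be sketched as follows, again with the toy `LogRecord` type: sorted runs are produced while log records are archived, and `heapq.merge` later yields a single page-ordered stream that the single-pass apply can consume. The run size and the `(page_id, lsn)` sort key are illustrative choices, not the system's actual parameters.

```python
import heapq

def generate_runs(log_records, run_size=4):
    """During archiving: cut the log into runs, each sorted by (page_id, lsn)."""
    runs = []
    for i in range(0, len(log_records), run_size):
        run = sorted(log_records[i:i + run_size], key=lambda r: (r.page_id, r.lsn))
        runs.append(run)                  # each run would be written to the log archive
    return runs

def merged_log_stream(runs):
    """During recovery (or as a maintenance service): merge runs lazily into
    one stream sorted by (page_id, lsn), preserving per-page LSN order."""
    return heapq.merge(*runs, key=lambda r: (r.page_id, r.lsn))
```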
1702.07935 | 2594098573 | Abstract Low-textured image stitching remains a challenging problem. It is difficult to achieve good alignment of the images, and image structures are often broken due to insufficient and unreliable point correspondences. Moreover, because of the viewpoint variations between multiple images, the stitched images suffer from projective distortions. To solve these problems, this paper presents a line-guided local warping method with a global similarity constraint for image stitching. Line features, which serve well for geometric descriptions and scene constraints, are employed to guide image stitching accurately. On one hand, the line features are integrated into a local warping model through a designed weight function. On the other hand, line features are adopted to impose strong geometric constraints, including line correspondence and line collinearity, to improve the stitching performance through mesh optimization. To mitigate projective distortions, we adopt a global similarity constraint, which is integrated with the projective warps via a designed weight strategy. This constraint causes the final warp to slowly change from a projective to a similarity transformation across the image. Finally, the images undergo a two-stage alignment scheme that provides accurate alignment and reduces projective distortion. We evaluate our method on a series of images and compare it with several other methods. The experimental results demonstrate that the proposed method provides a convincing stitching performance and that it outperforms other state-of-the-art methods. | Numerous studies have been devoted to image stitching; a comprehensive survey can be found in @cite_23 . The global homography model @cite_4 works well for planar scenes or for scenes acquired with parallax-free camera motion, but violation of these assumptions may lead to ghosting artifacts. (A minimal homography-stitching sketch follows this row.) | {
"cite_N": [
"@cite_4",
"@cite_23"
],
"mid": [
"2126060993",
"2165949425"
],
"abstract": [
"This paper concerns the problem of fully automated panoramic image stitching. Though the 1D problem (single axis of rotation) is well studied, 2D or multi-row stitching is more difficult. Previous approaches have used human input or restrictions on the image sequence in order to establish matching images. In this work, we formulate stitching as a multi-image matching problem, and use invariant local features to find matches between all of the images. Because of this our method is insensitive to the ordering, orientation, scale and illumination of the input images. It is also insensitive to noise images that are not part of a panorama, and can recognise multiple panoramas in an unordered image dataset. In addition to providing more detail, this paper extends our previous work in the area (Brown and Lowe, 2003) by introducing gain compensation and automatic straightening steps.",
"This tutorial reviews image alignment and image stitching algorithms. Image alignment algorithms can discover the correspondence relationships among images with varying degrees of overlap. They are ideally suited for applications such as video stabilization, summarization, and the creation of panoramic mosaics. Image stitching algorithms take the alignment estimates produced by such registration algorithms and blend the images in a seamless manner, taking care to deal with potential problems such as blurring or ghosting caused by parallax and scene movement as well as varying image exposures. This tutorial reviews the basic motion models underlying alignment and stitching algorithms, describes effective direct (pixel-based) and feature-based alignment algorithms, and describes blending algorithms used to produce seamless mosaics. It ends with a discussion of open research problems in the area."
]
} |
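
For reference, the global homography model discussed above is commonly implemented from feature matches with RANSAC, e.g., via OpenCV. This is a generic sketch, not the cited papers' code; `pts_src` and `pts_dst` are assumed to be matched keypoint coordinates, and the crude canvas sizing and overwrite-instead-of-blending are deliberate simplifications.

```python
import cv2
import numpy as np

def stitch_with_global_homography(img_src, img_dst, pts_src, pts_dst):
    """Warp img_src onto img_dst's plane with a single 3x3 homography."""
    # pts_src, pts_dst: Nx2 float32 arrays of matched feature coordinates.
    H, inlier_mask = cv2.findHomography(pts_src, pts_dst, cv2.RANSAC, 3.0)
    h, w = img_dst.shape[:2]
    canvas = cv2.warpPerspective(img_src, H, (w * 2, h))  # crude canvas for illustration
    canvas[0:h, 0:w] = img_dst                            # naive overwrite, no blending
    return canvas
```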
1702.07935 | 2594098573 | Abstract Low-textured image stitching remains a challenging problem. It is difficult to achieve good alignment of the images, and image structures are often broken due to insufficient and unreliable point correspondences. Moreover, because of the viewpoint variations between multiple images, the stitched images suffer from projective distortions. To solve these problems, this paper presents a line-guided local warping method with a global similarity constraint for image stitching. Line features, which serve well for geometric descriptions and scene constraints, are employed to guide image stitching accurately. On one hand, the line features are integrated into a local warping model through a designed weight function. On the other hand, line features are adopted to impose strong geometric constraints, including line correspondence and line collinearity, to improve the stitching performance through mesh optimization. To mitigate projective distortions, we adopt a global similarity constraint, which is integrated with the projective warps via a designed weight strategy. This constraint causes the final warp to slowly change from a projective to a similarity transformation across the image. Finally, the images undergo a two-stage alignment scheme that provides accurate alignment and reduces projective distortion. We evaluate our method on a series of images and compare it with several other methods. The experimental results demonstrate that the proposed method provides a convincing stitching performance and that it outperforms other state-of-the-art methods. | In recent years, similarity transformation, which is composed of translation, rotation and scaling, was introduced. Similarity transformation constructs a combined warping with projective transformations to constrain the projective distortions. Chang @cite_36 proposed a shape-preserving half-projective (SPHP) warping for image stitching that adopts projective, transition and similarity transformation to achieve a gradual change from a projective to a similarity transformation across the image. SPHP can significantly reduce the distortions and preserve the image shape; however, it may introduce structural deformations, such as line distortions, when the scene is dominated by line structures. Lin @cite_7 proposed an adaptive as-natural-as-possible (AANAP) warping that linearizes the homography in the non-overlapping regions and combines these homographies with a global similarity transformation using a direct and simple distance-based weight strategy to mitigate perspective distortions. However, some distortions still exist locally when stitching images (Fig. (b)). (A simplified sketch of such a projective-to-similarity blend follows this row.) | {
"cite_N": [
"@cite_36",
"@cite_7"
],
"mid": [
"1983683849",
"1963246386"
],
"abstract": [
"This paper proposes a novel parametric warp which is a spatial combination of a projective transformation and a similarity transformation. Given the projective transformation relating two input images, based on an analysis of the projective transformation, our method smoothly extrapolates the projective transformation of the overlapping regions into the non-overlapping regions and the resultant warp gradually changes from projective to similarity across the image. The proposed warp has the strengths of both projective and similarity warps. It provides good alignment accuracy as projective warps while preserving the perspective of individual image as similarity warps. It can also be combined with more advanced local-warp-based alignment methods such as the as-projective-as-possible warp for better alignment accuracy. With the proposed warp, the field of view can be extended by stitching images with less projective distortion (stretched shapes and enlarged sizes).",
"The goal of image stitching is to create natural-looking mosaics free of artifacts that may occur due to relative camera motion, illumination changes, and optical aberrations. In this paper, we propose a novel stitching method, that uses a smooth stitching field over the entire target image, while accounting for all the local transformation variations. Computing the warp is fully automated and uses a combination of local homography and global similarity transformations, both of which are estimated with respect to the target. We mitigate the perspective distortion in the non-overlapping regions by linearizing the homography and slowly changing it to the global similarity. The proposed method is easily generalized to multiple images, and allows one to automatically obtain the best perspective in the panorama. It is also more robust to parameter selection, and hence more automated compared with state-of-the-art methods. The benefits of the proposed approach are demonstrated using a variety of challenging cases."
]
} |
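
The projective-to-similarity transition shared by SPHP and AANAP above can be illustrated by blending the outputs of the two warps with a distance-based weight. This is a simplified stand-in for the papers' actual constructions; `x_overlap` and `falloff` are hypothetical parameters controlling where and how fast the transition happens.

```python
import numpy as np

def apply_transform(T, pts):
    """Apply a 3x3 transform to Nx2 points via homogeneous coordinates."""
    ph = np.hstack([pts, np.ones((len(pts), 1))]) @ T.T
    return ph[:, :2] / ph[:, 2:3]

def blended_warp(pts, H, S, x_overlap, falloff=100.0):
    """Weight the projective warp H near the overlap and the similarity S away from it.

    H, S: 3x3 projective and similarity transforms; x_overlap: x-coordinate
    of the overlap region; falloff: transition width (both illustrative).
    """
    w = np.clip(1.0 - (pts[:, 0] - x_overlap) / falloff, 0.0, 1.0)[:, None]
    return w * apply_transform(H, pts) + (1.0 - w) * apply_transform(S, pts)
```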
1702.07745 | 2593589299 | Social media is often viewed as a sensor into various societal events such as disease outbreaks, protests, and elections. We describe the use of social media as a crowdsourced sensor to gain insight into ongoing cyber-attacks. Our approach detects a broad range of cyber-attacks (e.g., distributed denial of service (DDoS) attacks, data breaches, and account hijacking) in a weakly supervised manner using just a small set of seed event triggers and requires no training or labeled samples. A new query expansion strategy based on convolution kernels and dependency parses helps model semantic structure and aids in identifying key event characteristics. Through a large-scale analysis over Twitter, we demonstrate that our approach consistently identifies and encodes events, outperforming existing methods. | Cyberattack Detection and Characterization. Detecting and characterizing cyber attacks is highly challenging due to the constantly evolving nature of cyber criminals. Recent proposals cover a large range of different methods, and Table lists representative works in this space. Earlier work primarily focuses on mining network traffic data for intrusion detection. Specific techniques range from classifying malicious network flows @cite_28 to anomaly detection in graphs to detect malicious servers and connections @cite_12 @cite_11 @cite_31 @cite_23 . More recently, researchers have sought to predict cyber attacks before they happen, enabling early notification @cite_3 . For example, some works leverage various network data associated with an organization to look for indicators of attacks @cite_15 @cite_36 . By extracting signals from mis-configured DNS and BGP networks as well as spam and phishing activities, they build classifiers to predict if an organization is (or will be) under attack. Similarly, others apply supervised classifiers to network traffic data to detect vulnerable websites, and predict their chances of turning malicious in the future @cite_32 . (A generic sketch of such a feature-based classifier follows this row.) | {
"cite_N": [
"@cite_15",
"@cite_28",
"@cite_36",
"@cite_32",
"@cite_3",
"@cite_23",
"@cite_31",
"@cite_12",
"@cite_11"
],
"mid": [
"1415990210",
"1583975142",
"",
"1420268584",
"2505207162",
"",
"1993370323",
"2108898793",
"2032280284"
],
"abstract": [
"In this study we characterize the extent to which cyber security incidents, such as those referenced by Verizon in its annual Data Breach Investigations Reports (DBIR), can be predicted based on externally observable properties of an organization's network. We seek to proactively forecast an organization's breaches and to do so without cooperation of the organization itself. To accomplish this goal, we collect 258 externally measurable features about an organization's network from two main categories: mismanagement symptoms, such as misconfigured DNS or BGP within a network, and malicious activity time series, which include spam, phishing, and scanning activity sourced from these organizations. Using these features we train and test a Random Forest (RF) classifier against more than 1,000 incident reports taken from the VERIS community database, Hackmageddon, and the Web Hacking Incidents Database that cover events from mid-2013 to the end of 2014. The resulting classifier is able to achieve a 90 True Positive (TP) rate, a 10 False Positive (FP) rate, and an overall 90 accuracy.",
"In this paper we discuss our research in developing general and systematic methods for intrusion detection. The key ideas are to use data mining techniques to discover consistent and useful patterns of system features that describe program and user behavior, and use the set of relevant system features to compute (inductively learned) classifiers that can recognize anomalies and known intrusions. Using experiments on the sendmail system call data and the network tcpdump data, we demonstrate that we can construct concise and accurate classifiers to detect anomalies. We provide an overview on two general data mining algorithms that we have implemented: the association rules algorithm and the frequent episodes algorithm. These algorithms can be used to compute the intra-and inter-audit record patterns, which are essential in describing program or user behavior. The discovered patterns can guide the audit data gathering process and facilitate feature selection. To meet the challenges of both efficient learning (mining) and real-time detection, we propose an agent-based architecture for intrusion detection systems where the learning agents continuously compute and provide the updated (detection) models to the detection agents.",
"",
"Significant recent research advances have made it possible to design systems that can automatically determine with high accuracy the maliciousness of a target website. While highly useful, such systems are reactive by nature. In this paper, we take a complementary approach, and attempt to design, implement, and evaluate a novel classification system which predicts, whether a given, not yet compromised website will become malicious in the future. We adapt several techniques from data mining and machine learning which are particularly well-suited for this problem. A key aspect of our system is that the set of features it relies on is automatically extracted from the data it acquires; this allows us to be able to detect new attack trends relatively quickly. We evaluate our implementation on a corpus of 444,519 websites, containing a total of 4,916,203 webpages, and show that we manage to achieve good detection accuracy over a one-year horizon; that is, we generally manage to correctly predict that currently benign websites will become compromised within a year.",
"Security researchers can send vulnerability notifications to take proactive measures in securing systems at scale. However, the factors affecting a notification’s efficacy have not been deeply explored. In this paper, we report on an extensive study of notifying thousands of parties of security issues present within their networks, with an aim of illuminating which fundamental aspects of notifications have the greatest impact on efficacy. The vulnerabilities used to drive our study span a range of protocols and considerations: exposure of industrial control systems; apparent firewall omissions for IPv6-based services; and exploitation of local systems in DDoS amplification attacks. We monitored vulnerable systems for several weeks to determine their rate of remediation. By comparing with experimental controls, we analyze the impact of a number of variables: choice of party to contact (WHOIS abuse contacts versus national CERTs versus US-CERT), message verbosity, hosting an information website linked to in the message, and translating the message into the notified party’s local language. We also assess the outcome of the emailing process itself (bounces, automated replies, human replies, silence) and characterize the sentiments and perspectives expressed in both the human replies and an optional anonymous survey that accompanied our notifications. We find that various notification regimens do result in different outcomes. The best observed process was directly notifying WHOIS contacts with detailed information in the message itself. These notifications had a statistically significant impact on improving remediation, and human replies were largely positive. However, the majority of notified contacts did not take action, and even when they did, remediation was often only partial. Repeat notifications did not further patching. These results are promising but ultimately modest, behooving the security community to more deeply investigate ways to improve the effectiveness of vulnerability notifications.",
"",
"Malware remains an important security threat, as miscreants continue to deliver a variety of malicious programs to hosts around the world. At the heart of all the malware delivery techniques are executable files (known as downloader trojans or droppers) that download other malware. Because the act of downloading software components from the Internet is not inherently malicious, benign and malicious downloaders are difficult to distinguish based only on their content and behavior. In this paper, we introduce the downloader-graph abstraction, which captures the download activity on end hosts, and we explore the growth patterns of benign and malicious graphs. Downloader graphs have the potential of exposing large parts of the malware download activity, which may otherwise remain undetected. By combining telemetry from anti-virus and intrusion-prevention systems, we reconstruct and analyze 19 million downloader graphs from 5 million real hosts. We identify several strong indicators of malicious activity, such as the growth rate, the diameter, and the Internet access patterns of downloader graphs. Building on these insights, we implement and evaluate a machine learning system for malware detection. Our system achieves a 96.0 true-positive rate, with a 1.0 false-positive rate, and detects malware an average of 9.24 days earlier than existing anti-virus products. We also perform an external validation by examining a sample of unlabeled files that our system detects as malicious, and we find that 41.41 are blocked by anti-virus products.",
"A reasonable definition of intrusion is: entering a community to which one does not belong. This suggests that in a network, intrusion attempts may be detected by looking for communication that does not respect community boundaries. In this paper, we examine the utility of this concept for identifying malicious network sources. In particular, our goal is to explore whether this concept allows a core-network operator using flow data to augment signature-based systems located at network edges. We show that simple measures of communities can be defined for flow data that allow a remarkably effective level of intrusion detection simply by looking for flows that do not respect those communities. We validate our approach using labeled intrusion attempt data collected at a large number of edge networks. Our results suggest that community-based methods can offer an important additional dimension for intrusion detection systems.",
"Anomaly detection is an area that has received much attention in recent years. It has a wide variety of applications, including fraud detection and network intrusion detection. A good deal of research has been performed in this area, often using strings or attribute-value data as the medium from which anomalies are to be extracted. Little work, however, has focused on anomaly detection in graph-based data. In this paper, we introduce two techniques for graph-based anomaly detection. In addition, we introduce a new method for calculating the regularity of a graph, with applications to anomaly detection. We hypothesize that these methods will prove useful both for finding anomalies, and for determining the likelihood of successful anomaly detection within graph-based data. We provide experimental results using both real-world network intrusion data and artificially-created data."
]
} |
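
As a concrete illustration of the feature-based prediction works cited above, here is a generic scikit-learn sketch in the spirit of the incident-forecasting study (which used 258 externally measurable features and a Random Forest). The data below is random placeholder material, not the cited studies' datasets.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# X: one row per organization; columns could be mismanagement symptoms and
# malicious-activity time-series summaries (placeholder random data here).
rng = np.random.default_rng(0)
X = rng.random((1000, 258))            # 258 features, mirroring the cited study's setup
y = rng.integers(0, 2, size=1000)      # 1 = a reported security incident occurred

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("holdout accuracy:", clf.score(X_te, y_te))   # near 0.5 on random data, by design
```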
1702.07745 | 2593589299 | Social media is often viewed as a sensor into various societal events such as disease outbreaks, protests, and elections. We describe the use of social media as a crowdsourced sensor to gain insight into ongoing cyber-attacks. Our approach detects a broad range of cyber-attacks (e.g., distributed denial of service (DDoS) attacks, data breaches, and account hijacking) in a weakly supervised manner using just a small set of seed event triggers and requires no training or labeled samples. A new query expansion strategy based on convolution kernels and dependency parses helps model semantic structure and aids in identifying key event characteristics. Through a large-scale analysis over Twitter, we demonstrate that our approach consistently identifies and encodes events, outperforming existing methods. | In recent years, online media such as blogs and social networks have become another promising data source of security intelligence @cite_2 @cite_27 . Most existing work focuses on technology blogs and tweets from security professionals to extract useful information @cite_21 . For example, one line of work builds text mining tools to extract key attack identifiers (IPs, MD5 hashes) from security tech blogs @cite_13 . Others leverage Twitter data to estimate the level of interest in existing CVE vulnerabilities, and predict their chance of being exploited in practice @cite_25 . Our work differs from existing literature since we focus on crowdsourced data from the much broader user populations who are likely the victims of security attacks. The most related work to ours is @cite_35 , which uses weakly supervised learning to detect security related tweets. However, this technique is unable to capture the dynamically evolving nature of attacks and is unable to encode characteristics of detected events. (A naive indicator-extraction sketch follows this row.) | {
"cite_N": [
"@cite_35",
"@cite_21",
"@cite_27",
"@cite_2",
"@cite_13",
"@cite_25"
],
"mid": [
"2270414365",
"1552949272",
"2902063112",
"2568683544",
"2538865281",
"1707806712"
],
"abstract": [
"Twitter contains a wealth of timely information, however staying on top of breaking events requires that an information analyst constantly scan many sources, leading to information overload. For example, a user might wish to be made aware whenever an infectious disease outbreak takes place, when a new smartphone is announced or when a distributed Denial of Service (DoS) attack might affect an organization's network connectivity. There are many possible event categories an analyst may wish to track, making it impossible to anticipate all those of interest in advance. We therefore propose a weakly supervised approach, in which extractors for new categories of events are easy to define and train, by specifying a small number of seed examples. We cast seed-based event extraction as a learning problem where only positive and unlabeled data is available. Rather than assuming unlabeled instances are negative, as is common in previous work, we propose a learning objective which regularizes the label distribution towards a user-provided expectation. Our approach greatly outperforms heuristic negatives, used in most previous work, in experiments on real-world data. Significant performance gains are also demonstrated over two novel and competitive baselines: semi-supervised EM and one-class support-vector machines. We investigate three security-related events breaking on Twitter: DoS attacks, data breaches and account hijacking. A demonstration of security events extracted by our system is available at: http: kb1.cse.ohio-state.edu:8123 events hacked",
"Organizations and governments are becoming vulnerable to a wide variety of security breaches against their information infrastructure. The magnitude of this threat is evident from the increasing rate of cyber attacks against computers and critical infrastructure. Weblogs, or blogs, have also rapidly gained in numbers over the past decade. Weblogs may provide up-to-date information on the prevalence and distribution of various cyber security threats as well as terrorism events. In this paper, we analyze weblog posts for various categories of cyber security threats related to the detection of cyber attacks, cyber crime, and terrorism. Existing studies on intelligence analysis have focused on analyzing news or forums for cyber security incidents, but few have looked at weblogs. We use probabilistic latent semantic analysis to detect keywords from cyber security weblogs with respect to certain topics. We then demonstrate how this method can present the blogosphere in terms of topics with measurable keywords, hence tracking popular conversations and topics in the blogosphere. By applying a probabilistic approach, we can improve information retrieval in weblog search and keywords detection, and provide an analytical foundation for the future of security intelligence analysis of weblogs.",
"",
"The volume and frequency of new cyber attacks have exploded in recent years. Such events have very complicated workflows and involve multiple criminal actors and organizations. However, current practices for threat analysis and intelligence discovery are still performed piecemeal in an ad-hoc manner. For example, a modern malware analysis system can dissect a piece of malicious code by itself. But, it cannot automatically identify the criminals who developed it or relate other cyber attack events with it. Consequently, it is imperative to automatically assemble the jigsaw puzzles of cybercrime events by performing threat intelligence fusion on data collected from heterogeneous sources, such as malware, underground social networks, cryptocurrency transaction records, etc. In this paper, we propose an Automated Threat Intelligence fuSion framework (ATIS) that is able to take all sorts of threat sources into account and discover new intelligence by connecting the dots of apparently isolated cyber events. To this end, ATIS consists of 5 planes, namely analysis, collection, controller, data and application planes. We discuss the design choices we made in the function of each plane and the interfaces between two adjacent planes. In addition, we develop two applications on top of ATIS to demonstrate its effectiveness.",
"To adapt to the rapidly evolving landscape of cyber threats, security professionals are actively exchanging Indicators of Compromise (IOC) (e.g., malware signatures, botnet IPs) through public sources (e.g. blogs, forums, tweets, etc.). Such information, often presented in articles, posts, white papers etc., can be converted into a machine-readable OpenIOC format for automatic analysis and quick deployment to various security mechanisms like an intrusion detection system. With hundreds of thousands of sources in the wild, the IOC data are produced at a high volume and velocity today, which becomes increasingly hard to manage by humans. Efforts to automatically gather such information from unstructured text, however, is impeded by the limitations of today's Natural Language Processing (NLP) techniques, which cannot meet the high standard (in terms of accuracy and coverage) expected from the IOCs that could serve as direct input to a defense system. In this paper, we present iACE, an innovation solution for fully automated IOC extraction. Our approach is based upon the observation that the IOCs in technical articles are often described in a predictable way: being connected to a set of context terms (e.g., \"download\") through stable grammatical relations. Leveraging this observation, iACE is designed to automatically locate a putative IOC token (e.g., a zip file) and its context (e.g., \"malware\", \"download\") within the sentences in a technical article, and further analyze their relations through a novel application of graph mining techniques. Once the grammatical connection between the tokens is found to be in line with the way that the IOC is commonly presented, these tokens are extracted to generate an OpenIOC item that describes not only the indicator (e.g., a malicious zip file) but also its context (e.g., download from an external source). Running on 71,000 articles collected from 45 leading technical blogs, this new approach demonstrates a remarkable performance: it generated 900K OpenIOC items with a precision of 95 and a coverage over 90 , which is way beyond what the state-of-the-art NLP technique and industry IOC tool can achieve, at a speed of thousands of articles per hour. Further, by correlating the IOCs mined from the articles published over a 13-year span, our study sheds new light on the links across hundreds of seemingly unrelated attack instances, particularly their shared infrastructure resources, as well as the impacts of such open-source threat intelligence on security protection and evolution of attack strategies.",
"In recent years, the number of software vulnerabilities discovered has grown significantly. This creates a need for prioritizing the response to new disclosures by assessing which vulnerabilities are likely to be exploited and by quickly ruling out the vulnerabilities that are not actually exploited in the real world. We conduct a quantitative and qualitative exploration of the vulnerability-related information disseminated on Twitter. We then describe the design of a Twitter-based exploit detector, and we introduce a threat model specific to our problem. In addition to response prioritization, our detection techniques have applications in risk modeling for cyber-insurance and they highlight the value of information provided by the victims of attacks."
]
} |
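
The indicator-extraction line of work above (pulling IPs and MD5 hashes out of security blog posts) can be approximated, far more crudely than the grammar-based iACE approach, with plain regular expressions. The sample text and indicators below are made up for illustration.

```python
import re

IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")    # candidate IPv4 addresses
MD5_RE = re.compile(r"\b[a-fA-F0-9]{32}\b")           # candidate MD5 hashes

def extract_iocs(text):
    """Naive indicator extraction: returns candidate IPs and MD5 hashes."""
    return {"ips": IP_RE.findall(text), "md5s": MD5_RE.findall(text)}

print(extract_iocs("The dropper (md5 9e107d9d372bb6826bd81d3542a419d6) "
                   "beacons to 203.0.113.42 over port 443."))
```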
1702.07745 | 2593589299 | Social media is often viewed as a sensor into various societal events such as disease outbreaks, protests, and elections. We describe the use of social media as a crowdsourced sensor to gain insight into ongoing cyber-attacks. Our approach detects a broad range of cyber-attacks (e.g., distributed denial of service (DDoS) attacks, data breaches, and account hijacking) in a weakly supervised manner using just a small set of seed event triggers and requires no training or labeled samples. A new query expansion strategy based on convolution kernels and dependency parses helps model semantic structure and aids in identifying key event characteristics. Through a large-scale analysis over Twitter, we demonstrate that our approach consistently identifies and encodes events, outperforming existing methods. | Event Extraction and Forecasting on Twitter. Another body of related work focuses on Twitter to extract various events such as trending news @cite_33 @cite_37 , natural disasters @cite_39 , criminal incidents @cite_6 and population migrations @cite_22 . Common event extraction methods include simple keyword matching and clustering, and topic modeling with temporal and geolocation constraints @cite_10 @cite_30 @cite_24 . Event forecasting, on the other hand, aims to predict future events based on early signals extracted from tweets. Example applications include detecting activity planning @cite_19 and forecasting future events such as civil unrest @cite_41 and upcoming threats to national airports @cite_4 . In our work, we follow a similar intuition to detect signals for major security attacks. The key novelty in our approach, different from these works, is the need for a typed query expansion strategy that provides both focused results and aids in extracting key indicators underlying the cyber-attack. | {
"cite_N": [
"@cite_30",
"@cite_37",
"@cite_4",
"@cite_22",
"@cite_33",
"@cite_41",
"@cite_6",
"@cite_39",
"@cite_24",
"@cite_19",
"@cite_10"
],
"mid": [
"1534625513",
"",
"2604713108",
"2022154907",
"1890727290",
"1999529874",
"144670803",
"2124499489",
"2110676972",
"1977931290",
"2255743372"
],
"abstract": [
"User-contributed messages on social media sites such as Twitter have emerged aspowerful, real-time means of information sharing on the Web. These short messages tend to reflect a variety of events in real time, making Twitter particularly well suited as a source of real-time event content. In this paper, we explore approaches for analyzing the stream of Twitter messages to distinguish between messages about real-world events andnon-event messages. Our approach relies on a rich family of aggregatestatistics of topically similar message clusters. Large-scale experiments over millions of Twitter messages show the effectiveness of our approach for surfacing real-world event content on Twitter.",
"",
"Airports are a prime target for terrorist organizations, drug traffickers, smugglers, and other nefarious groups. Traditional forms of security assessment are not real-time and often do not exist for each airport and port of entry. Thus, homeland security professionals must rely on measures of attractiveness of an airport as a target for attacks. We present an open source indicators approach, using news and social media, to conduct relative threat assessment, i.e., estimating if one airport is under greater threat than another. The three ingredients of our approach are a dynamic query expansion algorithm for tracking emerging threat-related chatter, news-Twitter reciprocity modeling for capturing interactions between social and traditional media, and a ranking scheme to provide an ordered assessment of airport threats. Case studies based on actual aviation incidents are presented.",
"Nowadays, an ever-growing amount of information is being transferred through web-based social media. In particular, Twitter emerged to be an important social medium providing most up-to-date information and comments on current events and topics of any kind. This led to a continuous growth of the interest of various security-related organizations in tools for real-time monitoring of Twitter streams to collect information there from. In this paper we present some initial explorations on how to exploit Twitter for border security-related intelligence gathering. To be more precise, we present techniques for: (a) retrieving and analyzing tweets posted in third countries, in which opinions and information are provided on migration to Europe or related issues (here we experimented with sentiment analysis for improving the retrieval performance), and (b) enhancing the information extracted from online news on border security-related events in third countries with information extracted from Twitter.",
"Twitter is among the fastest-growing microblogging and online social networking services. Messages posted on Twitter tweets have been reporting everything from daily life stories to the latest local and global news and events. Monitoring and analyzing this rich and continuous user-generated content can yield unprecedentedly valuable information, enabling users and organizations to acquire actionable knowledge. This article provides a survey of techniques for event detection from Twitter streams. These techniques aim at finding real-world occurrences that unfold over space and time. In contrast to conventional media, event detection from Twitter streams poses new challenges. Twitter streams contain large amounts of meaningless messages and polluted content, which negatively affect the detection performance. In addition, traditional text mining techniques are not suitable, because of the short length of tweets, the large number of spelling and grammatical errors, and the frequent use of informal and mixed language. Event detection techniques presented in literature address these issues by adapting techniques from various fields to the uniqueness of Twitter. This article classifies these techniques according to the event type, detection task, and detection method and discusses commonly used features. Finally, it highlights the need for public benchmarks to evaluate the performance of different detection approaches and various features.",
"We describe the design, implementation, and evaluation of EMBERS, an automated, 24x7 continuous system for forecasting civil unrest across 10 countries of Latin America using open source indicators such as tweets, news sources, blogs, economic indicators, and other data sources. Unlike retrospective studies, EMBERS has been making forecasts into the future since Nov 2012 which have been (and continue to be) evaluated by an independent T&E team (MITRE). Of note, EMBERS has successfully forecast the June 2013 protests in Brazil and Feb 2014 violent protests in Venezuela. We outline the system architecture of EMBERS, individual models that leverage specific data sources, and a fusion and suppression engine that supports trading off specific evaluation criteria. EMBERS also provides an audit trail interface that enables the investigation of why specific predictions were made along with the data utilized for forecasting. Through numerous evaluations, we demonstrate the superiority of EMBERS over baserate methods and its capability to forecast significant societ al happenings.",
"Prior work on criminal incident prediction has relied primarily on the historical crime record and various geospatial and demographic information sources. Although promising, these models do not take into account the rich and rapidly expanding social media context that surrounds incidents of interest. This paper presents a preliminary investigation of Twitter-based criminal incident prediction. Our approach is based on the automatic semantic analysis and understanding of natural language Twitter posts, combined with dimensionality reduction via latent Dirichlet allocation and prediction via linear modeling. We tested our model on the task of predicting future hit-and-run crimes. Evaluation results indicate that the model comfortably outperforms a baseline model that predicts hit-and-run incidents uniformly across all days.",
"Twitter, a popular microblogging service, has received much attention recently. An important characteristic of Twitter is its real-time nature. For example, when an earthquake occurs, people make many Twitter posts (tweets) related to the earthquake, which enables detection of earthquake occurrence promptly, simply by observing the tweets. As described in this paper, we investigate the real-time interaction of events such as earthquakes in Twitter and propose an algorithm to monitor tweets and to detect a target event. To detect a target event, we devise a classifier of tweets based on features such as the keywords in a tweet, the number of words, and their context. Subsequently, we produce a probabilistic spatiotemporal model for the target event that can find the center and the trajectory of the event location. We consider each Twitter user as a sensor and apply Kalman filtering and particle filtering, which are widely used for location estimation in ubiquitous pervasive computing. The particle filter works better than other comparable methods for estimating the centers of earthquakes and the trajectories of typhoons. As an application, we construct an earthquake reporting system in Japan. Because of the numerous earthquakes and the large number of Twitter users throughout the country, we can detect an earthquake with high probability (96 of earthquakes of Japan Meteorological Agency (JMA) seismic intensity scale 3 or more are detected) merely by monitoring tweets. Our system detects earthquakes promptly and sends e-mails to registered users. Notification is delivered much faster than the announcements that are broadcast by the JMA.",
"In recent years, microblogs have become an important source for reporting real-world events. A real-world occurrence reported in microblogs is also called a social event. Social events may hold critical materials that describe the situations during a crisis. In real applications, such as crisis management and decision making, monitoring the critical events over social streams will enable watch officers to analyze a whole situation that is a composite event, and make the right decision based on the detailed contexts such as what is happening, where an event is happening, and who are involved. Although there has been significant research effort on detecting a target event in social networks based on a single source, in crisis, we often want to analyze the composite events contributed by different social users. So far, the problem of integrating ambiguous views from different users is not well investigated. To address this issue, we propose a novel framework to detect composite social events over streams, which fully exploits the information of social data over multiple dimensions. Specifically, we first propose a graphical model called location-time constrained topic (LTT) to capture the content, time, and location of social messages. Using LTT, a social message is represented as a probability distribution over a set of topics by inference, and the similarity between two messages is measured by the distance between their distributions. Then, the events are identified by conducting efficient similarity joins over social media streams. To accelerate the similarity join, we also propose a variable dimensional extendible hash over social streams. We have conducted extensive experiments to prove the high effectiveness and efficiency of the proposed approach.",
"User-contributed Web data contains rich and diverse information about a variety of events in the physical world, such as shows, festivals, conferences and more. This information ranges from known event features (e.g., title, time, location) posted on event aggregation platforms (e.g., Last.fm events, EventBrite, Facebook events) to discussions and reactions related to events shared on different social media sites (e.g., Twitter, YouTube, Flickr). In this paper, we focus on the challenge of automatically identifying user-contributed content for events that are planned and, therefore, known in advance, across different social media sites. We mine event aggregation platforms to extract event features, which are often noisy or missing. We use these features to develop query formulation strategies for retrieving content associated with an event on different social media sites. Further, we explore ways in which event content identified on one social media site can be used to retrieve additional relevant event content on other social media sites. We apply our strategies to a large set of user-contributed events, and analyze their effectiveness in retrieving relevant event content from Twitter, YouTube, and Flickr.",
"We describe a simple IR approach for linking news about events, detected by an event extraction system, to messages from Twitter (tweets). In particular, we explore several methods for creating event-specific queries for Twitter and provide a quantitative and qualitative evaluation of the relevance and usefulness of the information obtained from the tweets. We showed that methods based on utilization of word co-occurrence clustering, domain-specific keywords and named entity recognition improve the performance with respect to a basic approach."
]
} |
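The record above detects cyber-attacks from a small set of seed event triggers plus query expansion. The sketch below is a minimal Python illustration of that general idea, using naive co-occurrence counts over a toy tweet list; the seed terms, corpus, and scoring are assumptions for illustration and stand in for the paper's convolution-kernel, dependency-parse-based expansion.

```python
from collections import Counter

# Toy corpus standing in for a tweet stream (illustrative only).
tweets = [
    "massive ddos attack takes down bank website",
    "data breach exposes millions of user accounts",
    "my account was hacked last night, beware",
    "great weather for a picnic today",
    "new ddos botnet floods servers with traffic",
]

seeds = {"ddos", "breach", "hacked"}  # assumed seed event triggers

def expand_queries(tweets, seeds, top_k=5):
    """Rank non-seed terms by how often they co-occur with seed terms."""
    cooc = Counter()
    for tweet in tweets:
        tokens = set(tweet.split())
        if tokens & seeds:  # tweet mentions at least one seed trigger
            for tok in tokens - seeds:
                cooc[tok] += 1
    return [term for term, _ in cooc.most_common(top_k)]

def detect(tweets, terms):
    """Flag tweets containing any trigger term (seed or expanded)."""
    return [t for t in tweets if set(t.split()) & set(terms)]

expanded = set(expand_queries(tweets, seeds)) | seeds
print("expanded triggers:", sorted(expanded))
print("flagged tweets:", detect(tweets, expanded))
```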
1702.07983 | 2593383075 | Despite the successes in capturing continuous distributions, the application of generative adversarial networks (GANs) to discrete settings, like natural language tasks, is rather restricted. The fundamental reason is the difficulty of back-propagation through discrete random variables combined with the inherent instability of the GAN training objective. To address these problems, we propose Maximum-Likelihood Augmented Discrete Generative Adversarial Networks. Instead of directly optimizing the GAN objective, we derive a novel and low-variance objective using the discriminator's output that corresponds to the log-likelihood. Compared with the original, the new objective is proved to be consistent in theory and beneficial in practice. The experimental results on various discrete datasets demonstrate the effectiveness of the proposed approach. | To improve the performance of discrete auto-regressive models, some researchers aim to tackle the exposure bias problem, which is discussed in detail in @cite_14 @cite_0 @cite_17 . The problem occurs because the training algorithm never exposes models to their own predictions during training. The second issue is the discrepancy between the objective during training and the evaluation metric during testing, which has been analyzed in prior work and summarized as the loss-evaluation mismatch. Typically, the objectives in training auto-regressive models are to maximize the word-level probabilities, while at test time, we often evaluate the models using sequence-level metrics, such as BLEU @cite_37 . To alleviate these two issues, the most straightforward way is to add the evaluation metrics into the objective in the training phase. Because these metrics are often discrete and cannot be optimized through standard back-propagation, researchers generally seek help from reinforcement learning. One line of work exploits the REINFORCE algorithm @cite_24 and proposes several model variants to situate the algorithm well in text generation applications. Another shares a similar idea and directly optimizes image caption metrics through policy gradient methods @cite_11 . There exists a third issue, namely the label bias problem, especially in the sequence-to-sequence learning framework, which prevents MLE-trained models from being optimized globally @cite_32 @cite_17 | {
"cite_N": [
"@cite_37",
"@cite_14",
"@cite_11",
"@cite_32",
"@cite_0",
"@cite_24",
"@cite_17"
],
"mid": [
"2101105183",
"2176263492",
"2155027007",
"2311132329",
"2951580200",
"",
""
],
"abstract": [
"Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that can not be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges which substitutes for them when there is need for quick or frequent evaluations.",
"Many natural language processing applications use language models to generate text. These models are typically trained to predict the next word in a sequence, given the previous words and some context such as an image. However, at test time the model is expected to generate the entire sequence from scratch. This discrepancy makes generation brittle, as errors may accumulate along the way. We address this issue by proposing a novel sequence level training algorithm that directly optimizes the metric used at test time, such as BLEU or ROUGE. On three different tasks, our approach outperforms several strong baselines for greedy generation. The method is also competitive when these baselines employ beam search, while being several times faster.",
"Function approximation is essential to reinforcement learning, but the standard approach of approximating a value function and determining a policy from it has so far proven theoretically intractable. In this paper we explore an alternative approach in which the policy is explicitly represented by its own function approximator, independent of the value function, and is updated according to the gradient of expected reward with respect to the policy parameters. Williams's REINFORCE method and actor-critic methods are examples of this approach. Our main new result is to show that the gradient can be written in a form suitable for estimation from experience aided by an approximate action-value or advantage function. Using this result, we prove for the first time that a version of policy iteration with arbitrary differentiable function approximation is convergent to a locally optimal policy.",
"We introduce a globally normalized transition-based neural network model that achieves state-of-the-art part-of-speech tagging, dependency parsing and sentence compression results. Our model is a simple feed-forward neural network that operates on a task-specific transition system, yet achieves comparable or better accuracies than recurrent models. We discuss the importance of global as opposed to local normalization: a key insight is that the label bias problem implies that globally normalized models can be strictly more expressive than locally normalized models.",
"We present a novel response generation system that can be trained end to end on large quantities of unstructured Twitter conversations. A neural network architecture is used to address sparsity issues that arise when integrating contextual information into classic statistical models, allowing the system to take into account previous dialog utterances. Our dynamic-context generative models show consistent gains over both context-sensitive and non-context-sensitive Machine Translation and Information Retrieval baselines.",
"",
""
]
} |
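The related-work paragraph of the preceding record contrasts word-level MLE training with sequence-level metrics such as BLEU optimized via REINFORCE. Below is a minimal numpy sketch of the REINFORCE estimator with a moving-average baseline on a toy sequence task; the token-overlap reward is an assumed stand-in for a discrete metric like BLEU, not any cited system.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, seq_len = 5, 4
target = np.array([1, 3, 0, 2])          # assumed "reference" sequence
logits = np.zeros((seq_len, vocab))       # per-position policy parameters

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def reward(seq):
    # Sequence-level reward: token overlap with the reference
    # (a toy stand-in for a discrete metric like BLEU).
    return float(np.sum(seq == target))

lr, baseline = 0.1, 0.0
for step in range(500):
    probs = softmax(logits)
    seq = np.array([rng.choice(vocab, p=probs[t]) for t in range(seq_len)])
    r = reward(seq)
    baseline = 0.9 * baseline + 0.1 * r   # moving-average baseline lowers variance
    # REINFORCE update: grad log pi(a_t) * (r - baseline) for a softmax policy,
    # where d log softmax / d logits = onehot(a_t) - probs.
    for t in range(seq_len):
        grad = -probs[t]
        grad[seq[t]] += 1.0
        logits[t] += lr * (r - baseline) * grad

print("greedy decode:", softmax(logits).argmax(axis=-1), "target:", target)
```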
1702.07983 | 2593383075 | Despite the successes in capturing continuous distributions, the application of generative adversarial networks (GANs) to discrete settings, like natural language tasks, is rather restricted. The fundamental reason is the difficulty of back-propagation through discrete random variables combined with the inherent instability of the GAN training objective. To address these problems, we propose Maximum-Likelihood Augmented Discrete Generative Adversarial Networks. Instead of directly optimizing the GAN objective, we derive a novel and low-variance objective using the discriminator's output that corresponds to the log-likelihood. Compared with the original, the new objective is proved to be consistent in theory and beneficial in practice. The experimental results on various discrete datasets demonstrate the effectiveness of the proposed approach. | To address the aforementioned issues in training auto-regressive models, we propose to formulate the problem under the setting of generative adversarial networks. Initially proposed by Goodfellow et al., the generative adversarial network (GAN) has attracted a lot of attention because it provides a powerful framework to generate promising samples through a min-max game. Researchers have successfully applied GAN to generate promising images conditionally @cite_36 @cite_23 @cite_21 and unconditionally @cite_33 @cite_18 , to realize image manipulation and super-resolution @cite_29 @cite_10 @cite_35 , and to produce video sequences @cite_12 @cite_22 @cite_2 . Despite these successes, the feasibility and advantages of applying GANs to text generation remain only partially explored, yet noteworthy. | {
"cite_N": [
"@cite_35",
"@cite_18",
"@cite_33",
"@cite_22",
"@cite_36",
"@cite_29",
"@cite_21",
"@cite_23",
"@cite_2",
"@cite_10",
"@cite_12"
],
"mid": [
"2523714292",
"2951140085",
"2173520492",
"2951536054",
"2125389028",
"2951021768",
"2564591810",
"",
"2742479045",
"2950560720",
"2248556341"
],
"abstract": [
"Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.",
"Generating high-resolution, photo-realistic images has been a long-standing goal in machine learning. Recently, (2016) showed one interesting way to synthesize novel images by performing gradient ascent in the latent space of a generator network to maximize the activations of one or multiple neurons in a separate classifier network. In this paper we extend this method by introducing an additional prior on the latent code, improving both sample quality and sample diversity, leading to a state-of-the-art generative model that produces high quality images at higher resolutions (227x227) than previous generative models, and does so for all 1000 ImageNet categories. In addition, we provide a unified probabilistic interpretation of related activation maximization methods and call the general class of models \"Plug and Play Generative Networks\". PPGNs are composed of 1) a generator network G that is capable of drawing a wide range of image types and 2) a replaceable \"condition\" network C that tells the generator what to draw. We demonstrate the generation of images conditioned on a class (when C is an ImageNet or MIT Places classification network) and also conditioned on a caption (when C is an image captioning network). Our method also improves the state of the art of Multifaceted Feature Visualization, which generates the set of synthetic inputs that activate a neuron in order to better understand how deep neural networks operate. Finally, we show that our model performs reasonably well at the task of image inpainting. While image models are used in this paper, the approach is modality-agnostic and can be applied to many types of data.",
"In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.",
"Based on life-long observations of physical, chemical, and biologic phenomena in the natural world, humans can often easily picture in their minds what an object will look like in the future. But, what about computers? In this paper, we learn computational models of object transformations from time-lapse videos. In particular, we explore the use of generative models to create depictions of objects at future times. These models explore several different prediction tasks: generating a future state given a single depiction of an object, generating a future state given two depictions of an object at different times, and generating future states recursively in a recurrent framework. We provide both qualitative and quantitative evaluations of the generated results, and also conduct a human evaluation to compare variations of our models.",
"Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels.",
"Realistic image manipulation is challenging because it requires modifying the image appearance in a user-controlled way, while preserving the realism of the result. Unless the user has considerable artistic skill, it is easy to \"fall off\" the manifold of natural images while editing. In this paper, we propose to learn the natural image manifold directly from data using a generative adversarial neural network. We then define a class of image editing operations, and constrain their output to lie on that learned manifold at all times. The model automatically adjusts the output keeping all edits as realistic as possible. All our manipulations are expressed in terms of constrained optimization and are applied in near-real time. We evaluate our algorithm on the task of realistic photo manipulation of shape and color. The presented method can further be used for changing one image to look like the other, as well as generating novel imagery from scratch based on user's scribbles.",
"Synthesizing high-quality images from text descriptions is a challenging problem in computer vision and has many practical applications. Samples generated by existing text-to-image approaches can roughly reflect the meaning of the given descriptions, but they fail to contain necessary details and vivid object parts. In this paper, we propose Stacked Generative Adversarial Networks (StackGAN) to generate 256x256 photo-realistic images conditioned on text descriptions. We decompose the hard problem into more manageable sub-problems through a sketch-refinement process. The Stage-I GAN sketches the primitive shape and colors of the object based on the given text description, yielding Stage-I low-resolution images. The Stage-II GAN takes Stage-I results and text descriptions as inputs, and generates high-resolution images with photo-realistic details. It is able to rectify defects in Stage-I results and add compelling details with the refinement process. To improve the diversity of the synthesized images and stabilize the training of the conditional-GAN, we introduce a novel Conditioning Augmentation technique that encourages smoothness in the latent conditioning manifold. Extensive experiments and comparisons with state-of-the-arts on benchmark datasets demonstrate that the proposed method achieves significant improvements on generating photo-realistic images conditioned on text descriptions.",
"",
"In this paper, we propose a generative model, Temporal Generative Adversarial Nets (TGAN), which can learn a semantic representation of unlabeled videos, and is capable of generating videos. Unlike existing Generative Adversarial Nets (GAN)-based methods that generate videos with a single generator consisting of 3D deconvolutional layers, our model exploits two different types of generators: a temporal generator and an image generator. The temporal generator takes a single latent variable as input and outputs a set of latent variables, each of which corresponds to an image frame in a video. The image generator transforms a set of such latent variables into a video. To deal with instability in training of GAN with such advanced networks, we adopt a recently proposed model, Wasserstein GAN, and propose a novel method to train it stably in an end-to-end manner. The experimental results demonstrate the effectiveness of our methods.",
"Image super-resolution (SR) is an underdetermined inverse problem, where a large number of plausible high-resolution images can explain the same downsampled image. Most current single image SR methods use empirical risk minimisation, often with a pixel-wise mean squared error (MSE) loss. However, the outputs from such methods tend to be blurry, over-smoothed and generally appear implausible. A more desirable approach would employ Maximum a Posteriori (MAP) inference, preferring solutions that always have a high probability under the image prior, and thus appear more plausible. Direct MAP estimation for SR is non-trivial, as it requires us to build a model for the image prior from samples. Furthermore, MAP inference is often performed via optimisation-based iterative algorithms which don't compare well with the efficiency of neural-network-based alternatives. Here we introduce new methods for amortised MAP inference whereby we calculate the MAP estimate directly using a convolutional neural network. We first introduce a novel neural network architecture that performs a projection to the affine subspace of valid SR solutions ensuring that the high resolution output of the network is always consistent with the low resolution input. We show that, using this architecture, the amortised MAP inference problem reduces to minimising the cross-entropy between two distributions, similar to training generative models. We propose three methods to solve this optimisation problem: (1) Generative Adversarial Networks (GAN) (2) denoiser-guided SR which backpropagates gradient-estimates from denoising to train the network, and (3) a baseline method using a maximum-likelihood-trained image prior. Our experiments show that the GAN based approach performs best on real image data. Lastly, we establish a connection between GANs and amortised variational inference as in e.g. variational autoencoders.",
"Learning to predict future images from a video sequence involves the construction of an internal representation that models the image evolution accurately, and therefore, to some degree, its content and dynamics. This is why pixel-space video prediction may be viewed as a promising avenue for unsupervised feature learning. In addition, while optical flow has been a very studied problem in computer vision for a long time, future frame prediction is rarely approached. Still, many vision applications could benefit from the knowledge of the next frames of videos, that does not require the complexity of tracking every pixel trajectories. In this work, we train a convolutional network to generate future frames given an input sequence. To deal with the inherently blurry predictions obtained from the standard Mean Squared Error (MSE) loss function, we propose three different and complementary feature learning strategies: a multi-scale architecture, an adversarial training method, and an image gradient difference loss function. We compare our predictions to different published results based on recurrent neural networks on the UCF101 dataset"
]
} |
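The preceding record surveys applications of the GAN min-max game. As a reference point, here is a minimal sketch of the standard alternating GAN training loop on 1-D toy data, assuming PyTorch; the network sizes, data distribution, and hyperparameters are illustrative choices only.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))   # generator
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))   # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 2.0        # toy "data" distribution N(2, 0.5)
    fake = G(torch.randn(64, 8))

    # Discriminator step: distinguish real from generated samples.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: fool the discriminator (non-saturating objective).
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print("generated mean/std:", fake.mean().item(), fake.std().item())
```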
1702.07983 | 2593383075 | Despite the successes in capturing continuous distributions, the application of generative adversarial networks (GANs) to discrete settings, like natural language tasks, is rather restricted. The fundamental reason is the difficulty of back-propagation through discrete random variables combined with the inherent instability of the GAN training objective. To address these problems, we propose Maximum-Likelihood Augmented Discrete Generative Adversarial Networks. Instead of directly optimizing the GAN objective, we derive a novel and low-variance objective using the discriminator's output that corresponds to the log-likelihood. Compared with the original, the new objective is proved to be consistent in theory and beneficial in practice. The experimental results on various discrete datasets demonstrate the effectiveness of the proposed approach. | It is appealing to generate discrete sequences using GAN as discussed above. Generative models are able to utilize the discriminator's output to make up for information about their own distribution, which is inaccessible if trained by teacher forcing @cite_13 @cite_14 . However, it is nontrivial to train GAN on discrete data due to its discontinuous nature. The instability inherent in GAN training makes things even worse @cite_3 @cite_19 @cite_6 @cite_20 . Professor forcing exploits adversarial domain adaptation to regularize the training of recurrent neural networks. SeqGAN applies GAN to discrete sequence generation by directly optimizing the discrete discriminator's rewards, adopting the Monte Carlo tree search technique @cite_4 . A similar technique has been employed to improve response generation through adversarial learning. | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_6",
"@cite_3",
"@cite_19",
"@cite_13",
"@cite_20"
],
"mid": [
"2176263492",
"2257979135",
"",
"2432004435",
"2585630030",
"2016589492",
""
],
"abstract": [
"Many natural language processing applications use language models to generate text. These models are typically trained to predict the next word in a sequence, given the previous words and some context such as an image. However, at test time the model is expected to generate the entire sequence from scratch. This discrepancy makes generation brittle, as errors may accumulate along the way. We address this issue by proposing a novel sequence level training algorithm that directly optimizes the metric used at test time, such as BLEU or ROUGE. On three different tasks, our approach outperforms several strong baselines for greedy generation. The method is also competitive when these baselines employ beam search, while being several times faster.",
"The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. Here we introduce a new approach to computer Go that uses ‘value networks’ to evaluate board positions and ‘policy networks’ to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of stateof-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play. We also introduce a new search algorithm that combines Monte Carlo simulation with value and policy networks. Using this search algorithm, our program AlphaGo achieved a 99.8 winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away.",
"",
"We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework. We focus on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic. Unlike most work on generative models, our primary goal is not to train a model that assigns high likelihood to test data, nor do we require the model to be able to learn well without using any labels. Using our new techniques, we achieve state-of-the-art results in semi-supervised classification on MNIST, CIFAR-10 and SVHN. The generated images are of high quality as confirmed by a visual Turing test: our model generates MNIST samples that humans cannot distinguish from real data, and CIFAR-10 samples that yield a human error rate of 21.3 . We also present ImageNet samples with unprecedented resolution and show that our methods enable the model to learn recognizable features of ImageNet classes.",
"Although Generative Adversarial Networks achieve state-of-the-art results on a variety of generative tasks, they are regarded as highly unstable and prone to miss modes. We argue that these bad behaviors of GANs are due to the very particular functional shape of the trained discriminators in high dimensional spaces, which can easily make training stuck or push probability mass in the wrong direction, towards that of higher concentration than that of the data generating distribution. We introduce several ways of regularizing the objective, which can dramatically stabilize the training of GAN models. We also show that our regularizers can help the fair distribution of probability mass across the modes of the data generating distribution, during the early phases of training and thus providing a unified solution to the missing modes problem.",
"The exact form of a gradient-following learning algorithm for completely recurrent networks running in continually sampled time is derived and used as the basis for practical algorithms for temporal supervised learning tasks. These algorithms have (1) the advantage that they do not require a precisely defined training interval, operating while the network runs; and (2) the disadvantage that they require nonlocal communication in the network being trained and are computationally expensive. These algorithms allow networks having recurrent connections to learn complex tasks that require the retention of information over time periods having either fixed or indefinite length.",
""
]
} |
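The record above mentions assigning discriminator rewards to partial discrete sequences via Monte Carlo (tree) search rollouts. The sketch below shows only that reward-estimation step in Python; the rollout policy and discriminator are placeholder functions, not the cited models.

```python
import random

random.seed(0)
VOCAB = list(range(5))

def rollout_policy(prefix):
    """Placeholder rollout policy: complete a prefix with random tokens."""
    return prefix + [random.choice(VOCAB) for _ in range(8 - len(prefix))]

def discriminator(seq):
    """Placeholder discriminator: probability a complete sequence is 'real'.
    Here it simply rewards sequences with many even tokens (toy stand-in)."""
    return sum(1 for t in seq if t % 2 == 0) / len(seq)

def mc_reward(prefix, n_rollouts=16):
    """Estimate the value of a partial sequence by averaging discriminator
    scores over completed Monte Carlo rollouts."""
    return sum(discriminator(rollout_policy(prefix))
               for _ in range(n_rollouts)) / n_rollouts

print(mc_reward([0, 2, 4]))   # prefix of even tokens -> higher estimated reward
print(mc_reward([1, 3, 1]))
```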
1702.07956 | 2593021375 | We propose a new active learning by query synthesis approach using Generative Adversarial Networks (GAN). Different from regular active learning, the resulting algorithm adaptively synthesizes training instances for querying to increase learning speed. We generate queries according to the uncertainty principle, but our idea can work with other active learning principles. We report results from various numerical experiments to demonstrate the effectiveness of the proposed approach. In some settings, the proposed algorithm outperforms traditional pool-based approaches. To the best of our knowledge, this is the first active learning work using GAN. | The popular SVM @math algorithm from @cite_16 is an efficient pool-based active learning scheme for SVM. Their scheme is a special instance of the uncertainty sampling principle, which we also employ. @cite_44 reduces the exhaustive scanning through the database employed by SVM @math . Our algorithm shares the same advantage of not needing to test every sample in the database at each iteration of active learning, although we achieve this by not using a pool at all rather than through a clever indexing trick. @cite_53 proposed active transfer learning, which is reminiscent of our experiments. However, we do not consider collecting new labeled data in target domains of transfer learning. | {
"cite_N": [
"@cite_44",
"@cite_53",
"@cite_16"
],
"mid": [
"2157575532",
"",
"2426031434"
],
"abstract": [
"We consider the problem of retrieving the database points nearest to a given hyperplane query without exhaustively scanning the entire database. For this problem, we propose two hashing-based solutions. Our first approach maps the data to 2-bit binary keys that are locality sensitive for the angle between the hyperplane normal and a database point. Our second approach embeds the data into a vector space where the euclidean norm reflects the desired distance between the original points and hyperplane query. Both use hashing to retrieve near points in sublinear time. Our first method's preprocessing stage is more efficient, while the second has stronger accuracy guarantees. We apply both to pool-based active learning: Taking the current hyperplane classifier as a query, our algorithm identifies those points (approximately) satisfying the well-known minimal distance-to-hyperplane selection criterion. We empirically demonstrate our methods' tradeoffs and show that they make it practical to perform active selection with millions of unlabeled points.",
"",
"Support vector machines have met with significant success in numerous real-world learning tasks. However, like most machine learning algorithms, they are generally applied using a randomly selected training set classified in advance. In many settings, we also have the option of using pool-based active learning. Instead of using a randomly selected training set, the learner has access to a pool of unlabeled instances and can request the labels for some number of them. We introduce a new algorithm for performing active learning with support vector machines, i.e., an algorithm for choosing which instances to request next. We provide a theoretical motivation for the algorithm using the notion of a version space. We present experimental results showing that employing our active learning method can significantly reduce the need for labeled training instances in both the standard inductive and transductive settings."
]
} |
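The record above builds on SVM active learning, where the pool point closest to the current decision hyperplane is queried next. A small scikit-learn sketch of that uncertainty-sampling rule on synthetic data follows; the data, seed labels, and query budget are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_pool = rng.normal(size=(200, 2))
y_pool = (X_pool[:, 0] + X_pool[:, 1] > 0).astype(int)   # hidden oracle labels

# Seed the labeled set with one example of each class.
labeled = [int(np.where(y_pool == 0)[0][0]), int(np.where(y_pool == 1)[0][0])]

for _ in range(20):
    clf = SVC(kernel="linear").fit(X_pool[labeled], y_pool[labeled])
    margins = np.abs(clf.decision_function(X_pool))
    margins[labeled] = np.inf                 # never re-query labeled points
    query = int(np.argmin(margins))           # point closest to the hyperplane
    labeled.append(query)                     # oracle reveals y_pool[query]

print("accuracy on pool:", clf.score(X_pool, y_pool))
```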
1702.07956 | 2593021375 | We propose a new active learning by query synthesis approach using Generative Adversarial Networks (GAN). Different from regular active learning, the resulting algorithm adaptively synthesizes training instances for querying to increase learning speed. We generate queries according to the uncertainty principle, but our idea can work with other active learning principles. We report results from various numerical experiments to demonstrate the effectiveness of the proposed approach. In some settings, the proposed algorithm outperforms traditional pool-based approaches. To the best of our knowledge, this is the first active learning work using GAN. | In a way, the proposed approach can be viewed as an adversarial training procedure @cite_35 , where the classifier is iteratively trained on adversarial examples generated by the algorithm through solving an optimization problem. @cite_35 focuses on adversarial examples generated by perturbing the original datasets within a small epsilon-ball, whereas we seek to produce examples using an active learning criterion. | {
"cite_N": [
"@cite_35"
],
"mid": [
"1945616565"
],
"abstract": [
"Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset."
]
} |
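The record above contrasts the proposed query synthesis with adversarial training on epsilon-ball perturbations @cite_35 . Below is a minimal PyTorch sketch of the fast gradient sign method from that line of work; the toy model, input, and epsilon are assumptions for illustration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))  # toy classifier
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 4, requires_grad=True)   # original input
y = torch.tensor([1])                       # its true label

loss = loss_fn(model(x), y)
loss.backward()

eps = 0.1                                    # radius of the epsilon-ball (assumed)
x_adv = x + eps * x.grad.sign()              # FGSM: worst-case linear perturbation
print("clean pred:", model(x).argmax().item(),
      "| adversarial pred:", model(x_adv).argmax().item())
```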
1702.07956 | 2593021375 | We propose a new active learning by query synthesis approach using Generative Adversarial Networks (GAN). Different from regular active learning, the resulting algorithm adaptively synthesizes training instances for querying to increase learning speed. We generate queries according to the uncertainty principle, but our idea can work with other active learning principles. We report results from various numerical experiments to demonstrate the effectiveness of the proposed approach. In some settings, the proposed algorithm outperforms traditional pool-based approaches. To the best of our knowledge, this is the first active learning work using GAN. | To the best of our knowledge, the only previous mention of using GAN for active learning is in the appendix of @cite_37 . The authors discussed therein three attempts to reduce the number of queries. In the third attempt, they generated synthetic samples and sorted them by information content, whereas we adaptively generate new queries by solving an optimization problem. There were no reported active learning numerical results in that work. | {
"cite_N": [
"@cite_37"
],
"mid": [
"2950602864"
],
"abstract": [
"Some machine learning applications involve training data that is sensitive, such as the medical histories of patients in a clinical trial. A model may inadvertently and implicitly store some of its training data; careful analysis of the model may therefore reveal sensitive information. To address this problem, we demonstrate a generally applicable approach to providing strong privacy guarantees for training data: Private Aggregation of Teacher Ensembles (PATE). The approach combines, in a black-box fashion, multiple models trained with disjoint datasets, such as records from different subsets of users. Because they rely directly on sensitive data, these models are not published, but instead used as \"teachers\" for a \"student\" model. The student learns to predict an output chosen by noisy voting among all of the teachers, and cannot directly access an individual teacher or the underlying data or parameters. The student's privacy properties can be understood both intuitively (since no single teacher and thus no single dataset dictates the student's training) and formally, in terms of differential privacy. These properties hold even if an adversary can not only query the student but also inspect its internal workings. Compared with previous work, the approach imposes only weak assumptions on how teachers are trained: it applies to any model, including non-convex models like DNNs. We achieve state-of-the-art privacy utility trade-offs on MNIST and SVHN thanks to an improved privacy analysis and semi-supervised learning."
]
} |
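The record above synthesizes active-learning queries with a GAN. The sketch below illustrates one plausible reading of that idea, searching a generator's latent space for a sample on which the current classifier is maximally uncertain; the untrained placeholder generator, linear classifier, and objective are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 4))   # placeholder generator
f = nn.Linear(4, 1)                                                # current linear classifier

z = torch.randn(1, 8, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.05)

for _ in range(200):
    # Uncertainty objective: push the synthesized sample G(z) toward the
    # classifier's decision boundary, i.e. drive |f(x)| toward zero.
    loss = f(G(z)).abs().sum()
    opt.zero_grad(); loss.backward(); opt.step()

query = G(z).detach()        # synthetic instance to send to the labeling oracle
print("classifier score at synthesized query:", f(query).item())
```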
1702.08013 | 2125043701 | The present paper introduces the initial implementation of a software exploration tool targeting graphical user interface (GUI) driven applications. GUITracer facilitates the comprehension of GUI-driven applications by starting from their most conspicuous artefact - the user interface itself. The current implementation of the tool can be used with any Java-based target application that employs one of the AWT, Swing or SWT toolkits. The tool transparently instruments the target application and provides real time information about the GUI events fired. For each event, call relations within the application are displayed at method, class or package level, together with detailed coverage information. The tool facilitates feature location, program comprehension as well as GUI test creation by revealing the link between the application's GUI and its underlying code. As such, GUITracer is intended for software practitioners developing or maintaining GUI-driven applications. We believe our tool to be especially useful for entry-level practitioners as well as students seeking to understand complex GUI-driven software systems. The present paper details the rationale as well as the technical implementation of the tool. As a proof-of-concept implementation, we also discuss further development that can lead to our tool's integration into a software development workflow. | One of the key challenges of implementing our tool was accurately capturing GUI event information. As this is a software tracing task, we studied previous efforts targeting Java, such as the JMonitor library developed by Karaorman and Freeman @cite_9 . JMonitor provides event monitoring for Java by specifying event patterns and event monitors. Patterns are used to describe interesting events, and monitors act as handlers that are called once the events have taken place. The proposed library provides a generic implementation for lowest-level events such as setting the value of a class field or a method call. Another notable example is JRapture @cite_14 , a tool for capturing and replaying Java program executions by recording interactions between the program itself and the system, using accurately reproduced input sequences. Profiling can then be added to study the application during replay. JETracer differentiates itself from these tools by working at a higher abstraction level and being developed to record GUI events. This allows capturing additional information such as application screenshots as well as event listener information. | {
"cite_N": [
"@cite_9",
"@cite_14"
],
"mid": [
"2041252769",
"2160329567"
],
"abstract": [
"jMonitor is a pure Java library and runtime utility for specifying event patterns and associating them with user provided event monitors that get called when the specified runtime events occur during the execution of legacy Java applications. jMonitor APIs define an event specification abstraction layer allowing programmers to design event patterns to monitor runtime execution of legacy Java applications. jMonitor instrumentation works at the Java bytecode level and does not require the presence of source code for the Java application that is being monitored. jMonitor overloads the dynamic class loader and takes the event specification and monitors (in the form of Java class files) as additional arguments when launching the target Java application. The class bytecodes of the monitored Java program are instrumented on the fly by the jMonitor class loader according to the needs of the externally specified jMonitor event patterns and event monitors.",
"We describe the design of jRapture: a tool for capturing and replaying Java program executions in the field. jRapture works with Java binaries (byte code) and any compliant implementation of the Java virtual machine. It employs a lightweight, transparent capture process that permits unobtrusive capture of a Java programs executions. jRapture captures interactions between a Java program and the system, including GUI, file, and console inputs, among other types, and on replay it presents each thread with exactly the same input sequence it saw during capture. In addition, jRapture has a profiling interface that permits a Java program to be instrumented for profiling o after its executions have been captured. Using an XML-based profiling specification language a tester can specify various forms of profiling to be carried out during replay."
]
} |
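JMonitor and JRapture, discussed above, intercept runtime events at the Java bytecode level. As a language-neutral analogy of the event pattern/monitor idea only (not the tools' actual mechanism), the Python sketch below installs a tracing callback that fires on function calls.

```python
import sys

def monitor(frame, event, arg):
    """Tracing callback: fires on runtime events (cf. jMonitor's event
    patterns and monitors); here it logs every Python-level function call."""
    if event == "call":
        print(f"call: {frame.f_code.co_name} at line {frame.f_lineno}")
    return monitor

def on_button_click():
    print("handling click")

sys.settrace(monitor)     # install the monitor
on_button_click()         # traced execution
sys.settrace(None)        # remove the monitor
```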
1702.07784 | 2952569145 | Over the past few years, online aggression and abusive behaviors have occurred in many different forms and on a variety of platforms. In extreme cases, these incidents have evolved into hate, discrimination, and bullying, and even materialized into real-world threats and attacks against individuals or groups. In this paper, we study the Gamergate controversy. Started in August 2014 in the online gaming world, it quickly spread across various social networking platforms, ultimately leading to many incidents of cyberbullying and cyberaggression. We focus on Twitter, presenting a measurement study of a dataset of 340k unique users and 1.6M tweets to study the properties of these users, the content they post, and how they differ from random Twitter users. We find that users involved in this "Twitter war" tend to have more friends and followers, are generally more engaged and post tweets with negative sentiment, less joy, and more hate than random users. We also perform preliminary measurements on how the Twitter suspension mechanism deals with such abusive behaviors. While we focus on Gamergate, our methodology to collect and analyze tweets related to aggressive and bullying activities is of independent interest. | Detecting abusive behavior. @cite_28 aim to detect offensive content and potential offensive users by analyzing YouTube comments. Then, @cite_17 @cite_27 turn to cyberbullying on Instagram and Ask.fm. Specifically, in @cite_27 , besides considering available text information, they also try to associate the topic of an image (e.g., drugs, celebrity, sports, etc.) with possible cyberbullying events, concluding that drugs are highly associated with cyberbullying. Also, in an effort to create a suitable dataset for their analysis, the authors first collected a large number of media sessions -- i.e., videos and images along with comments -- from Instagram public profiles, with a subset selected for labeling. To ensure that an adequate number of cyberbullying instances would be present in the dataset, they selected media sessions with at least one profanity word. Finally, they relied on the CrowdFlower crowdsourcing platform to determine whether or not such sessions are related to cyberbullying or cyberaggression. In @cite_17 , the authors leveraged both likes and comments to identify negative behavior on the Ask.fm social network. Here, their dataset was created from publicly accessible profile data, e.g., questions, answers, and likes. | {
"cite_N": [
"@cite_28",
"@cite_27",
"@cite_17"
],
"mid": [
"2160685721",
"2216854803",
"2949557692"
],
"abstract": [
"Since the textual contents on online social media are highly unstructured, informal, and often misspelled, existing research on message-level offensive language detection cannot accurately detect offensive content. Meanwhile, user-level offensiveness detection seems a more feasible approach but it is an under researched area. To bridge this gap, we propose the Lexical Syntactic Feature (LSF) architecture to detect offensive content and identify potential offensive users in social media. We distinguish the contribution of pejoratives profanities and obscenities in determining offensive content, and introduce hand-authoring syntactic rules in identifying name-calling harassments. In particular, we incorporate a user's writing style, structure and specific cyber bullying content as features to predict the user's potentiality to send out offensive content. Results from experiments showed that our LSF framework performed significantly better than existing methods in offensive content detection. It achieves precision of 98.24 and recall of 94.34 in sentence offensive detection, as well as precision of 77.9 and recall of 77.8 in user offensive detection. Meanwhile, the processing speed of LSF is approximately 10msec per sentence, suggesting the potential for effective deployment in social media.",
"Cyberbullying is a growing problem affecting more than half of all American teens. The main goal of this paper is to study labeled cyberbullying incidents in the Instagram social network. In this work, we have collected a sample data set consisting of Instagram images and their associated comments. We then designed a labeling study and employed human contributors at the crowd-sourced CrowdFlower website to label these media sessions for cyberbullying. A detailed analysis of the labeled data is then presented, including a study of relationships between cyberbullying and a host of features such as cyberaggression, profanity, social graph features, temporal commenting behavior, linguistic content, and image content.",
"Cyberbullying has emerged as an important and growing social problem, wherein people use online social networks and mobile phones to bully victims with offensive text, images, audio and video on a 247 basis. This paper studies negative user behavior in the Ask.fm social network, a popular new site that has led to many cases of cyberbullying, some leading to suicidal behavior.We examine the occurrence of negative words in Ask.fms question+answer profiles along with the social network of likes of questions+answers. We also examine properties of users with cutting behavior in this social network."
]
} |
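As described in the record above, @cite_27 selected media sessions containing at least one profanity word to enrich the labeling set with likely cyberbullying instances. A minimal Python sketch of that filtering step follows; the lexicon and sessions are illustrative, not the study's actual data.

```python
PROFANITY = {"idiot", "loser", "stupid"}   # illustrative lexicon only

sessions = [
    {"id": 1, "comments": ["nice photo!", "love this"]},
    {"id": 2, "comments": ["you are such a loser", "stupid post"]},
]

def candidates_for_labeling(sessions):
    """Keep sessions with at least one profane comment, raising the share
    of potential cyberbullying instances sent to human annotators."""
    out = []
    for s in sessions:
        words = {w.strip(".,!?").lower()
                 for c in s["comments"] for w in c.split()}
        if words & PROFANITY:
            out.append(s["id"])
    return out

print(candidates_for_labeling(sessions))   # -> [2]
```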
1702.07784 | 2952569145 | Over the past few years, online aggression and abusive behaviors have occurred in many different forms and on a variety of platforms. In extreme cases, these incidents have evolved into hate, discrimination, and bullying, and even materialized into real-world threats and attacks against individuals or groups. In this paper, we study the Gamergate controversy. Started in August 2014 in the online gaming world, it quickly spread across various social networking platforms, ultimately leading to many incidents of cyberbullying and cyberaggression. We focus on Twitter, presenting a measurement study of a dataset of 340k unique users and 1.6M tweets to study the properties of these users, the content they post, and how they differ from random Twitter users. We find that users involved in this "Twitter war" tend to have more friends and followers, are generally more engaged and post tweets with negative sentiment, less joy, and more hate than random users. We also perform preliminary measurements on how the Twitter suspension mechanism deals with such abusive behaviors. While we focus on Gamergate, our methodology to collect and analyze tweets related to aggressive and bullying activities is of independent interest. | @cite_25 focus on a Community-based Question-Answering (CQA) site, Yahoo Answers, finding that users' flags of abusive content are overwhelmingly correct, while in @cite_24 , the problem of cyberbullying is further decomposed into sensitive topics related to race and culture, sexuality, and intelligence, using YouTube comments extracted from controversial videos. @cite_21 also study specific types of cyberbullying, e.g., threats and insults, in Dutch posts extracted from the Ask.fm social network. They also highlight three main user roles: harasser, victim, and bystander -- the latter being either a bystander-defender, who supports the victim, or a bystander-assistant, who supports the harasser. Their dataset was created by crawling a number of seed sites from Ask.fm, with a limited number of cyberbullying instances. They complement the data with more cyberbullying-related content by: (i) launching a campaign where people reported personal cases of cyberbullying taking place on different platforms, i.e., Facebook, message board posts and chats, and (ii) designing a role-playing game involving a cyberbullying simulation on Facebook. Then, they ask manual annotators to characterize content as being part of a cyberbullying event, and to indicate the author's role in such an event, i.e., victim, harasser, bystander-defender, or bystander-assistant. | {
"cite_N": [
"@cite_24",
"@cite_21",
"@cite_25"
],
"mid": [
"2209227144",
"2283668614",
""
],
"abstract": [
"The scourge of cyberbullying has assumed alarming proportions with an ever-increasing number of adolescents admitting to having dealt with it either as a victim or as a bystander. Anonymity and the lack of meaningful supervision in the electronic medium are two factors that have exacerbated this social menace. Comments or posts involving sensitive topics that are personal to an individual are more likely to be internalized by a victim, often resulting in tragic outcomes. We decompose the overall detection problem into detection of sensitive topics, lending itself into text classification sub-problems. We experiment with a corpus of 4500 YouTube comments, applying a range of binary and multiclass classifiers. We find that binary classifiers for individual labels outperform multiclass classifiers. Our findings show that the detection of textual cyberbullying can be tackled by building individual topic-sensitive classifiers.",
"The recent development of social media poses new challenges to the research community in analyzing online interactions between people. Social networking sites offer great opportunities for connecting with others, but also increase the vulnerability of young people to undesirable phenomena, such as cybervictimization. Recent research reports that on average, 20 to 40 of all teenagers have been victimized online. In this paper, we focus on cyberbullying as a particular form of cybervictimization. Successful prevention depends on the adequate detection of potentially harmful messages. However, given the massive information overload on the Web, there is a need for intelligent systems to identify potential risks automatically. We present the construction and annotation of a corpus of Dutch social media posts annotated with fine-grained cyberbullying-related text categories, such as insults and threats. Also, the specific participants (harasser, victim or bystander) in a cyberbullying conversation are identified to enhance the analysis of human interactions involving cyberbullying. Apart from describing our dataset construction and annotation, we present proof-of-concept experiments on the automatic identification of cyberbullying events and fine-grained cyberbullying categories.",
""
]
} |
1702.07544 | 2593056981 | We propose distributed online open loop planning (DOOLP), a general framework for online multiagent coordination and decision making under uncertainty. DOOLP is based on online heuristic search in the space defined by a generative model of the domain dynamics, which is exploited by agents to simulate and evaluate the consequences of their potential choices. We also propose distributed online Thompson sampling (DOTS) as an effective instantiation of the DOOLP framework. DOTS models sequences of agent choices by concatenating a number of multiarmed bandits for each agent and uses Thompson sampling for dealing with action value uncertainty. The Bayesian approach underlying Thompson sampling allows to effectively model and estimate uncertainty about (a) own action values and (b) other agents' behavior. This approach yields a principled and statistically sound solution to the exploration-exploitation dilemma when exploring large search spaces with limited resources. We implemented DOTS in a smart factory case study with positive empirical results. We observed effective, robust and scalable planning and coordination capabilities even when only searching a fraction of the potential search space. | Online planning, or local search, repeatedly interleaves a planning loop with an execution loop @cite_1 @cite_8 . An online planning agent requires a probabilistic generative model of its domain dynamics, such as a transition distribution of a DecMDP. Online planning consists of two loops with different frequencies. A high-frequency planning loop samples a plan from a stochastic policy, simulates its consequences w.r.t. some optimization objective (e.g. the expected cumulative reward) and updates the policy in order to increase the probability of generating useful plans w.r.t. the given notion of utility. A low-frequency execution loop consists of sensing the current state of the environment, planning by repeating the inner loop multiple times, and executing the currently most viable action determined by the planning loop. Online planning with its two loops is informally shown in Figure . | {
"cite_N": [
"@cite_1",
"@cite_8"
],
"mid": [
"2197494948",
"2406044115"
],
"abstract": [
"We focus on effective sample-based planning in the face of underactuation, high-dimensionality, drift, discrete system changes, and stochasticity. These are hallmark challenges for important problems, such as humanoid locomotion. In order to ensure broad applicability, we assume domain expertise is minimal and limited to a generative model. In order to make the method responsive, computational costs that scale linearly with the amount of samples taken from the generative model are required. We bring to bear a concrete method that satisfies all these requirements; it is a receding-horizon open-loop planner that employs cross-entropy optimization for policy construction. In simulation, we empirically demonstrate near-optimal decisions in a small domain and effective locomotion in several challenging humanoid control tasks.",
"This paper proposes the OnPlan framework for modeling autonomous systems operating in domains with large probabilistic state spaces and high branching factors. The framework defines components for acting and deliberation, and specifies their interactions. It comprises a mathematical specification of requirements for autonomous systems. We discuss the role of such a specification in the context of simulation-based online planning. We also consider two instantiations of the framework: Monte Carlo Tree Search for discrete domains, and Cross Entropy Open Loop Planning for continuous state and action spaces. The framework's ability to provide system autonomy is illustrated empirically on a robotic rescue example."
]
} |
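
The two-loop structure described above is easy to make concrete. The sketch below is a minimal illustration, not the DOOLP implementation: `simulate` stands in for the generative model of the domain dynamics, the plan distribution is left uniform instead of being updated inside the loop, and all names, horizons, and budgets are hypothetical.

```python
import random

def online_planning_step(state, actions, simulate, horizon=5, budget=200):
    """One pass of the low-frequency execution loop: run the high-frequency
    planning loop `budget` times against the generative model `simulate`,
    then return the currently most viable first action."""
    totals = {a: 0.0 for a in actions}
    counts = {a: 0 for a in actions}
    for _ in range(budget):                                      # planning loop
        plan = [random.choice(actions) for _ in range(horizon)]  # sample a plan
        ret, s = 0.0, state
        for a in plan:                                           # simulate consequences
            s, r = simulate(s, a)                                # one generative-model step
            ret += r                                             # cumulative reward
        totals[plan[0]] += ret
        counts[plan[0]] += 1
    # Execute the first action with the best average simulated return.
    return max(actions, key=lambda a: totals[a] / max(counts[a], 1))
```

In a full online planner this step is repeated after every state observation, and the uniform plan sampling would be replaced by a stochastic policy that is updated between simulations.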
1702.07544 | 2593056981 | We propose distributed online open loop planning (DOOLP), a general framework for online multiagent coordination and decision making under uncertainty. DOOLP is based on online heuristic search in the space defined by a generative model of the domain dynamics, which is exploited by agents to simulate and evaluate the consequences of their potential choices. We also propose distributed online Thompson sampling (DOTS) as an effective instantiation of the DOOLP framework. DOTS models sequences of agent choices by concatenating a number of multiarmed bandits for each agent and uses Thompson sampling for dealing with action value uncertainty. The Bayesian approach underlying Thompson sampling allows to effectively model and estimate uncertainty about (a) own action values and (b) other agents' behavior. This approach yields a principled and statistically sound solution to the exploration-exploitation dilemma when exploring large search spaces with limited resources. We implemented DOTS in a smart factory case study with positive empirical results. We observed effective, robust and scalable planning and coordination capabilities even when only searching a fraction of the potential search space. | Open loop planning is an approach to determining policies optimized w.r.t. some objective without storing information about the intermediate states encountered while simulating policy execution for evaluation purposes @cite_7 @cite_1 . Given a set of actions @math , we are only interested in finding a plan @math , and we keep information only about the action sequences in order to guide the planning process. That is, we reformulate the policy as @math , now being a distribution over sequences of length @math given a current state @math . | {
"cite_N": [
"@cite_1",
"@cite_7"
],
"mid": [
"2197494948",
"2169511307"
],
"abstract": [
"We focus on effective sample-based planning in the face of underactuation, high-dimensionality, drift, discrete system changes, and stochasticity. These are hallmark challenges for important problems, such as humanoid locomotion. In order to ensure broad applicability, we assume domain expertise is minimal and limited to a generative model. In order to make the method responsive, computational costs that scale linearly with the amount of samples taken from the generative model are required. We bring to bear a concrete method that satisfies all these requirements; it is a receding-horizon open-loop planner that employs cross-entropy optimization for policy construction. In simulation, we empirically demonstrate near-optimal decisions in a small domain and effective locomotion in several challenging humanoid control tasks.",
"We consider the problem of planning in a stochastic and discounted environment with a limited numerical budget. More precisely, we investigate strategies exploring the set of possible sequences of actions, so that, once all available numerical resources (e.g. CPU time, number of calls to a generative model) have been used, one returns a recommendation on the best possible immediate action to follow based on this exploration. The performance of a strategy is assessed in terms of its simple regret, that is the loss in performance resulting from choosing the recommended action instead of an optimal one. We first provide a minimax lower bound for this problem, and show that a uniform planning strategy matches this minimax rate (up to a logarithmic factor). Then we propose a UCB (Upper Confidence Bounds)-based planning algorithm, called OLOP (Open-Loop Optimistic Planning), which is also minimax optimal, and prove that it enjoys much faster rates when there is a small proportion of near-optimal sequences of actions. Finally, we compare our results with the regret bounds one can derive for our setting with bandits algorithms designed for an infinite number of arms."
]
} |
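
A cross-entropy-style sketch of this idea, loosely in the spirit of @cite_1: the distribution over action sequences is factored into one categorical distribution per depth and refitted to the highest-return sampled plans, with no bookkeeping for intermediate states. All parameter values are illustrative assumptions, not settings from the paper.

```python
import random

def open_loop_plan(state, actions, simulate, horizon=4, iters=50,
                   samples=30, elite=5):
    """Open-loop planning: maintain one categorical distribution over
    actions per depth (no per-state statistics) and sharpen it toward
    the elite, i.e. highest-return, sampled sequences."""
    probs = [{a: 1.0 / len(actions) for a in actions} for _ in range(horizon)]
    for _ in range(iters):
        scored = []
        for _ in range(samples):
            plan = [random.choices(list(p), weights=list(p.values()))[0]
                    for p in probs]
            ret, s = 0.0, state
            for a in plan:
                s, r = simulate(s, a)          # generative model of the dynamics
                ret += r
            scored.append((ret, plan))
        scored.sort(key=lambda t: t[0], reverse=True)
        for d in range(horizon):               # refit each depth to the elite plans
            for a in actions:
                hits = sum(1 for _, p in scored[:elite] if p[d] == a)
                probs[d][a] = (hits + 1e-3) / (elite + 1e-3 * len(actions))
    return [max(p, key=p.get) for p in probs]  # most probable sequence
```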
1702.07544 | 2593056981 | We propose distributed online open loop planning (DOOLP), a general framework for online multiagent coordination and decision making under uncertainty. DOOLP is based on online heuristic search in the space defined by a generative model of the domain dynamics, which is exploited by agents to simulate and evaluate the consequences of their potential choices. We also propose distributed online Thompson sampling (DOTS) as an effective instantiation of the DOOLP framework. DOTS models sequences of agent choices by concatenating a number of multiarmed bandits for each agent and uses Thompson sampling for dealing with action value uncertainty. The Bayesian approach underlying Thompson sampling allows to effectively model and estimate uncertainty about (a) own action values and (b) other agents' behavior. This approach yields a principled and statistically sound solution to the exploration-exploitation dilemma when exploring large search spaces with limited resources. We implemented DOTS in a smart factory case study with positive empirical results. We observed effective, robust and scalable planning and coordination capabilities even when only searching a fraction of the potential search space. | An MAB can be interpreted as a simple Markov decision process with a single state. In their basic formulation, MABs already provide a clear framework for studying the exploration-exploitation tradeoff inherent in decision making under uncertainty: Should the agent select the arm that has previously proved most promising? Or should it keep exploring other options? For a recent survey of MABs and their variants, see @cite_6 . | {
"cite_N": [
"@cite_6"
],
"mid": [
"1544426622"
],
"abstract": [
"Although many algorithms for the multi-armed bandit problem are well-understood theoretically, empirical confirmation of their effectiveness is generally scarce. This paper presents a thorough empirical study of the most popular multi-armed bandit algorithms. Three important observations can be made from our results. Firstly, simple heuristics such as epsilon-greedy and Boltzmann exploration outperform theoretically sound algorithms on most settings by a significant margin. Secondly, the performance of most algorithms varies dramatically with the parameters of the bandit problem. Our study identifies for each algorithm the settings where it performs well, and the settings where it performs poorly. Thirdly, the algorithms' performance relative each to other is affected only by the number of bandit arms and the variance of the rewards. This finding may guide the design of subsequent empirical evaluations. In the second part of the paper, we turn our attention to an important area of application of bandit algorithms: clinical trials. Although the design of clinical trials has been one of the principal practical problems motivating research on multi-armed bandits, bandit algorithms have never been evaluated as potential treatment allocation strategies. Using data from a real study, we simulate the outcome that a 2001-2002 clinical trial would have had if bandit algorithms had been used to allocate patients to treatments. We find that an adaptive trial would have successfully treated at least 50 more patients, while significantly reducing the number of adverse effects and increasing patient retention. At the end of the trial, the best treatment could have still been identified with a high level of statistical confidence. Our findings demonstrate that bandit algorithms are attractive alternatives to current adaptive treatment allocation strategies."
]
} |
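
Thompson sampling makes the exploration-exploitation handling described above concrete in a few lines: each arm keeps a Beta posterior over its success probability, one value is sampled per arm, and the arm with the highest sample is pulled. The class below is a standard Beta-Bernoulli sketch; the names and the toy arm probabilities are ours, not taken from the surveyed work.

```python
import random

class BetaBernoulliBandit:
    """Thompson sampling for Bernoulli rewards: posterior width drives
    exploration, posterior mean drives exploitation."""
    def __init__(self, n_arms):
        self.alpha = [1.0] * n_arms   # prior successes + 1 (uniform prior)
        self.beta = [1.0] * n_arms    # prior failures + 1

    def select(self):
        draws = [random.betavariate(a, b)
                 for a, b in zip(self.alpha, self.beta)]
        return max(range(len(draws)), key=draws.__getitem__)

    def update(self, arm, reward):    # reward in {0, 1}
        self.alpha[arm] += reward
        self.beta[arm] += 1 - reward

# Toy usage with made-up arm probabilities.
bandit, true_p = BetaBernoulliBandit(3), [0.2, 0.5, 0.8]
for _ in range(1000):
    arm = bandit.select()
    bandit.update(arm, int(random.random() < true_p[arm]))
```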
1702.07285 | 2951883087 | Emojis are ideograms which are naturally combined with plain text to visually complement or condense the meaning of a message. Despite being widely used in social media, their underlying semantics have received little attention from a Natural Language Processing standpoint. In this paper, we investigate the relation between words and emojis, studying the novel task of predicting which emojis are evoked by text-based tweet messages. We train several models based on Long Short-Term Memory networks (LSTMs) in this task. Our experimental results show that our neural model outperforms two baselines as well as humans solving the same task, suggesting that computational models are able to better capture the underlying semantics of emojis. | The most similar model is by , which attempts to predict the hashtag given a tweet. This is a similar task with a similar motivation towards better language understanding. They present a character composition model for tweets. The main difference with their work is that they encode the entire tweet as a sequence of characters with bidirectional GRUs @cite_21 , while we encode each token as a sequence of characters with bidirectional LSTMs to obtain token embeddings, and each tweet as a bidirectional sequence of word embeddings (again with LSTMs), which avoids sparsity and can potentially be applied to longer sequences. Note that our model can also be used to obtain pretrained tweet embeddings, by extracting the learned vectors from our bidirectional LSTMs. | {
"cite_N": [
"@cite_21"
],
"mid": [
"1924770834"
],
"abstract": [
"In this paper we compare different types of recurrent units in recurrent neural networks (RNNs). Especially, we focus on more sophisticated units that implement a gating mechanism, such as a long short-term memory (LSTM) unit and a recently proposed gated recurrent unit (GRU). We evaluate these recurrent units on the tasks of polyphonic music modeling and speech signal modeling. Our experiments revealed that these advanced recurrent units are indeed better than more traditional recurrent units such as tanh units. Also, we found GRU to be comparable to LSTM."
]
} |
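
The hierarchical encoding described above (a character-level bidirectional LSTM producing each token embedding, a word-level bidirectional LSTM producing the tweet vector) can be sketched in PyTorch. This is a schematic single-tweet version with hypothetical dimensions, not the authors' exact architecture or training setup.

```python
import torch
import torch.nn as nn

class HierarchicalTweetEncoder(nn.Module):
    def __init__(self, n_chars, n_emojis, char_dim=30, char_hid=50, word_hid=100):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim, padding_idx=0)  # 0 = pad
        self.char_lstm = nn.LSTM(char_dim, char_hid,
                                 bidirectional=True, batch_first=True)
        self.word_lstm = nn.LSTM(2 * char_hid, word_hid,
                                 bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * word_hid, n_emojis)

    def forward(self, char_ids):                  # (n_words, max_chars), one tweet
        c = self.char_emb(char_ids)               # (n_words, max_chars, char_dim)
        _, (h, _) = self.char_lstm(c)             # h: (2, n_words, char_hid)
        words = torch.cat([h[0], h[1]], dim=-1)   # token embeddings built from chars
        _, (h, _) = self.word_lstm(words.unsqueeze(0))
        tweet = torch.cat([h[0], h[1]], dim=-1)   # (1, 2*word_hid) tweet embedding
        return self.out(tweet)                    # emoji logits
```

As noted in the text, the intermediate `tweet` vector can also be extracted and reused as a pretrained tweet embedding.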
1702.07445 | 2950032996 | Many data mining approaches aim at modelling and predicting human behaviour. An important quantity of interest is the quality of model-based predictions, e.g. for finding a competition winner with best prediction performance. In real life, human beings meet their decisions with considerable uncertainty. Its assessment and resulting implications for statistically evident evaluation of predictive models are in the main focus of this contribution. We identify relevant sources of uncertainty as well as the limited ability of its accurate measurement, propose an uncertainty-aware methodology for more evident evaluations of data mining approaches, and discuss its implications for existing quality assessment strategies. Specifically, our approach switches from common point-paradigm to more appropriate distribution-paradigm. This is exemplified in the context of recommender systems and their established metrics of prediction quality. The discussion is substantiated by comprehensive experiments with real users, large-scale simulations, and discussion of prior evaluation campaigns (i.a. Netflix Prize) in the light of human uncertainty aspects. | In the context of this paper, we exemplify our approach by scenarios from the field of recommender systems as summarised in @cite_4 and focus specifically on comparative evaluation metrics. Recommender systems were initially based on demographic, content-based and collaborative filtering. An overview of these techniques is given in @cite_19 . As collaborative filtering has recently turned out to be one of the most successful techniques, it rapidly moved into the centre of further research. A roadmap to collaborative filtering, as well as a thorough discussion of its predictive performance, is provided by @cite_3 . | {
"cite_N": [
"@cite_19",
"@cite_4",
"@cite_3"
],
"mid": [
"2025605741",
"1690919088",
"2100235918"
],
"abstract": [
"Recommender systems have developed in parallel with the web. They were initially based on demographic, content-based and collaborative filtering. Currently, these systems are incorporating social information. In the future, they will use implicit, local and personal information from the Internet of things. This article provides an overview of recommender systems as well as collaborative filtering methods and algorithms; it also explains their evolution, provides an original classification for these systems, identifies areas of future implementation and develops certain areas selected for past, present or future importance.",
"The explosive growth of e-commerce and online environments has made the issue of information search and selection increasingly serious; users are overloaded by options to consider and they may not have the time or knowledge to personally evaluate these options. Recommender systems have proven to be a valuable way for online users to cope with the information overload and have become one of the most powerful and popular tools in electronic commerce. Correspondingly, various techniques for recommendation generation have been proposed. During the last decade, many of them have also been successfully deployed in commercial environments. Recommender Systems Handbook, an edited volume, is a multi-disciplinary effort that involves world-wide experts from diverse fields, such as artificial intelligence, human computer interaction, information technology, data mining, statistics, adaptive user interfaces, decision support systems, marketing, and consumer behavior. Theoreticians and practitioners from these fields continually seek techniques for more efficient, cost-effective and accurate recommender systems. This handbook aims to impose a degree of order on this diversity, by presenting a coherent and unified repository of recommender systems major concepts, theories, methodologies, trends, challenges and applications. Extensive artificial applications, a variety of real-world applications, and detailed case studies are included. Recommender Systems Handbook illustrates how this technology can support the user in decision-making, planning and purchasing processes. It works for well known corporations such as Amazon, Google, Microsoft and AT&T. This handbook is suitable for researchers and advanced-level students in computer science as a reference.",
"As one of the most successful approaches to building recommender systems, collaborative filtering (CF) uses the known preferences of a group of users to make recommendations or predictions of the unknown preferences for other users. In this paper, we first introduce CF tasks and their main challenges, such as data sparsity, scalability, synonymy, gray sheep, shilling attacks, privacy protection, etc., and their possible solutions. We then present three main categories of CF techniques: memory-based, modelbased, and hybrid CF algorithms (that combine CF with other recommendation techniques), with examples for representative algorithms of each category, and analysis of their predictive performance and their ability to address the challenges. From basic techniques to the state-of-the-art, we attempt to present a comprehensive survey for CF techniques, which can be served as a roadmap for research and practice in this area."
]
} |
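
To recall how memory-based collaborative filtering turns the known preferences of similar users into predictions, here is a minimal user-based sketch. Cosine similarity over raw rating rows, the neighborhood size k, and the convention that 0 means "unrated" are all illustrative assumptions, not choices from the cited surveys.

```python
import numpy as np

def predict_rating(R, user, item, k=2):
    """Predict R[user, item] from the k most similar users who rated the item.
    R: ratings matrix with 0 meaning 'unrated'."""
    rated = [u for u in range(R.shape[0]) if u != user and R[u, item] > 0]
    def cos(u, v):
        den = np.linalg.norm(R[u]) * np.linalg.norm(R[v])
        return float(np.dot(R[u], R[v])) / den if den else 0.0
    sims = sorted(((cos(user, u), u) for u in rated), reverse=True)[:k]
    num = sum(s * R[u, item] for s, u in sims)
    den = sum(abs(s) for s, _ in sims)
    return num / den if den else 0.0
```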
1702.07445 | 2950032996 | Many data mining approaches aim at modelling and predicting human behaviour. An important quantity of interest is the quality of model-based predictions, e.g. for finding a competition winner with best prediction performance. In real life, human beings meet their decisions with considerable uncertainty. Its assessment and resulting implications for statistically evident evaluation of predictive models are in the main focus of this contribution. We identify relevant sources of uncertainty as well as the limited ability of its accurate measurement, propose an uncertainty-aware methodology for more evident evaluations of data mining approaches, and discuss its implications for existing quality assessment strategies. Specifically, our approach switches from common point-paradigm to more appropriate distribution-paradigm. This is exemplified in the context of recommender systems and their established metrics of prediction quality. The discussion is substantiated by comprehensive experiments with real users, large-scale simulations, and discussion of prior evaluation campaigns (i.a. Netflix Prize) in the light of human uncertainty aspects. | The complexity of human perception and cognition can be addressed by means of latent distributions (see @cite_13 ), resulting in varying observations. This idea is widely used in cognitive science and in statistical models for ordinal data. For example, so-called CUB models for ordinal data @cite_5 assume the Gaussian as a latent response model underlying the observations. We adopt the idea of modelling user uncertainty by means of individual Gaussians following the argumentation in @cite_5 for constructing our own response models. | {
"cite_N": [
"@cite_5",
"@cite_13"
],
"mid": [
"2060074015",
"2167797302"
],
"abstract": [
"In this article we introduce a probability distribution generated by a mixture of discrete random variables to capture uncertainty, feeling, and overdispersion, possibly present in ordinal data surveys. The choice of the components of the new model is motivated by a study on the data generating process. Inferential issues concerning the maximum likelihood estimates and the validation steps are presented; then, some empirical analyses are given to support the usefulness of the approach. Discussion on further extensions of the model ends the article.",
"I describe a cognitive latent variable model, a combination of a cognitive model and a latent variable model that can be used to aggregate information regarding cognitive parameters across participants and tasks. The model is ideally suited for uncovering relationships between latent task abilities as they are expressed in experimental paradigms, but can also be used as data fusion tools to connect latent abilities with external covariates from entirely different data sources. An example application deals with the structure of cognitive abilities underlying an executive functioning task and its relation to personality traits. © 2014 Elsevier Inc."
]
} |
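
A minimal sketch of the latent-Gaussian response idea adopted here: a user's attitude toward an item is a latent value mu, and every observed rating is a fresh Gaussian draw with individual uncertainty sigma, discretized onto the ordinal scale. The numbers are purely illustrative; this mimics the modelling assumption, not the CUB estimation machinery.

```python
import random

def observed_rating(mu, sigma, scale=(1, 5)):
    """One ordinal rating drawn from a latent Gaussian response model:
    perturb the latent value, then round and clip onto the rating scale."""
    lo, hi = scale
    return min(hi, max(lo, round(random.gauss(mu, sigma))))

# Asking the same user about the same item repeatedly yields a
# distribution of ratings rather than a single point.
samples = [observed_rating(mu=3.6, sigma=0.8) for _ in range(10)]
```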
1702.06709 | 2953001446 | Fine-grained entity type classification (FETC) is the task of classifying an entity mention to a broad set of types. The distant supervision paradigm is extensively used to generate training data for this task. However, the generated training data assigns the same set of labels to every mention of an entity without considering its local context. Existing FETC systems have two major drawbacks: assuming the training data to be noise-free and the use of hand-crafted features. Our work overcomes both drawbacks. We propose a neural network model that jointly learns entity mentions and their context representation to eliminate the use of hand-crafted features. Our model treats training data as noisy and uses a non-parametric variant of the hinge loss function. Experiments show that the proposed model outperforms previous state-of-the-art methods on two publicly available datasets, namely FIGER (GOLD) and BBN, with an average relative improvement of 2.69% in micro-F1 score. Knowledge learnt by our model on one dataset can be transferred to other datasets while using the same model or other FETC systems. These approaches of transferring knowledge further improve the performance of the respective models. | Transfer learning has been successfully applied to many NLP applications, such as cross-domain document classification @cite_28 , multi-lingual word clustering @cite_2 , and sentiment classification @cite_26 . Initialization of word vectors with pre-trained word vectors in neural network models can be considered one of the best examples of transfer learning in NLP. provide a broad overview of transfer learning techniques used for language processing. | {
"cite_N": [
"@cite_28",
"@cite_26",
"@cite_2"
],
"mid": [
"2148861942",
"591148856",
"2250539671"
],
"abstract": [
"In this paper, we introduce a method that automatically builds text classifiers in a new language by training on already labeled data in another language. Our method transfers the classification knowledge across languages by translating the model features and by using an Expectation Maximization (EM) algorithm that naturally takes into account the ambiguity associated with the translation of a word. We further exploit the readily available unlabeled data in the target language via semi-supervised learning, and adapt the translated model to better fit the data distribution of the target language.",
"Distilling knowledge from a well-trained cumbersome network to a small one has recently become a new research topic, as lightweight neural networks with high performance are particularly in need in various resource-restricted systems. This paper addresses the problem of distilling word embeddings for NLP tasks. We propose an encoding approach to distill task-specific knowledge from a set of high-dimensional embeddings, so that we can reduce model complexity by a large margin as well as retain high accuracy, achieving a good compromise between efficiency and performance. Experiments reveal the phenomenon that distilling knowledge from cumbersome embeddings is better than directly training neural networks with small embeddings.",
"Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global logbilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75 on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition."
]
} |
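
The "best example" mentioned above, initializing word vectors with pretrained embeddings, amounts to copying vectors into an embedding layer before fine-tuning on the target task. A hedged PyTorch sketch; the `vocab` and `pretrained` lookup-table formats are assumptions for illustration.

```python
import torch
import torch.nn as nn

def init_embeddings(vocab, pretrained, dim):
    """Transfer learning by initialization: rows for words found in the
    pretrained table are copied in; the rest keep a small random init.
    vocab: dict word -> index; pretrained: dict word -> vector."""
    weight = torch.randn(len(vocab), dim) * 0.01
    for word, idx in vocab.items():
        if word in pretrained:
            weight[idx] = torch.as_tensor(pretrained[word], dtype=torch.float32)
    emb = nn.Embedding(len(vocab), dim)
    emb.weight = nn.Parameter(weight)   # fine-tuned further during training
    return emb
```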
1702.06973 | 2592328946 | This paper introduces the Java Software Evolution Tracker, a visualization and analysis tool that provides practitioners the means to examine the evolution of a software system from a top to bottom perspective, starting with changes in the graphical user interface all the way to source code modifications. | A more advanced approach was undertaken in @cite_22 where the author presents a call graph comparison tool that ranks differences according to their importance. The same paper also introduces a browser application for navigating call graphs, similar to JAnalyzer. | {
"cite_N": [
"@cite_22"
],
"mid": [
"2095802649"
],
"abstract": [
"Comparing program analysis results from different static and dynamic analysis tools is difficult and therefore too rare, especially when it comes to qualitative comparison. Analysis results can be strongly affected by specific details of programs being analyzed, so quantitative evaluation should be supplemented by qualitative identification of those details. Our general aim is to develop tools to reduce the difficulty of qualitative comparison. In this paper, we focus on comparison of call graphs in particular. We present two complementary tools for comparing call graphs. Our main contribution is a call graph difference search tool that ranks call graph edges by their likelihood of causing large differences in the call graphs. This is complemented by a simple interactive call graph viewer that highlights specific differences between call graphs, and allows a user to browse through them. In a search for the causes of call graph differences, a user first uses the search tool to identify which of the thousands of spurious edges to look at more closely, and then uses the interactive viewer to determine in detail the root cause of a difference. We present the ranking algorithm used in the difference search tool. We also report on a case study using the comparison tools to determine the most important sources of imprecision in a typical static call graph by comparing it to a dynamic call graph of the same benchmark."
]
} |
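
To make the idea of ranking call-graph differences concrete, the toy sketch below compares two call graphs given as edge sets and orders the differing edges by the caller's out-degree in the newer graph, a crude stand-in for an importance score. This is only an illustration, not the ranking algorithm of @cite_22.

```python
from collections import Counter

def rank_call_graph_diff(g_old, g_new):
    """g_old, g_new: sets of (caller, callee) edges. Returns added and
    removed edges, most 'important' (highest caller out-degree) first."""
    added, removed = g_new - g_old, g_old - g_new
    out_deg = Counter(caller for caller, _ in g_new)
    diff = [("added", e) for e in added] + [("removed", e) for e in removed]
    return sorted(diff, key=lambda d: out_deg[d[1][0]], reverse=True)
```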
1702.06973 | 2592328946 | This paper introduces the Java Software Evolution Tracker, a visualization and analysis tool that provides practitioners the means to examine the evolution of a software system from a top to bottom perspective, starting with changes in the graphical user interface all the way to source code modifications. | More recent approaches have attempted to enrich IDE software with visualization capabilities. One of these approaches is Code Bubbles, developed by @cite_25 . Code Bubbles proposes a unitary view of a program's sources, increasing developer productivity and minimizing overhead. Although not a software visualizer per se, Code Bubbles proposes a tight integration of visualization tools with modern IDEs for maximum efficiency. Building on this effort, Microsoft Research integrated Code Canvas @cite_23 into Visual Studio 2010. Code Canvas provides a unified view of the source code together with all related information for easy synthesis of information. | {
"cite_N": [
"@cite_25",
"@cite_23"
],
"mid": [
"2148389674",
"1982889224"
],
"abstract": [
"Developers spend significant time reading and navigating code fragments spread across multiple locations. The file-based nature of contemporary IDEs makes it prohibitively difficult to create and maintain a simultaneous view of such fragments. We propose a novel user interface metaphor for code understanding based on collections of lightweight, editable fragments called bubbles, which form concurrently visible working sets. We present the results of a qualitative usability evaluation, and the results of a quantitative study which indicates Code Bubbles significantly improved code understanding time, while reducing navigation interactions over a widely-used IDE, for two controlled tasks.",
"Jane is a developer who has been on her team so long that everyone calls her the team historian. Since the product just shipped a few weeks ago, Jane is finally getting around to some code cleanup she had planned for ages—namely, dropping a dependency on a library that is no longer supported. Jane uses her development environment to search for all the places where her product uses the unsupported library. She clicks through the results one by one and reads the code to understand how it uses the library. As she jumps around the code base, she sketches a class diagram on a notepad to capture the architectural dependencies she discovers. Partway through this code-understanding task, there’s a knock at the door. It’s Joe, the newest member of the team. He is working on a bug and is confused about how one of the product’s features is implemented. As the team historian, Jane is used to this type of question. They start the conversation by looking at an architectural diagram tacked to the wall near Jane’s computer. To get into specifics, Jane draws a version of the diagram on the whiteboard, sketching only the relevant parts of the architecture but in more detail than the printed diagram. As she talks Joe through a use case, she overlays the diagram with arrows to show how different parts of the system interact. From time to time, she brings up relevant code in her development environment to relate the diagram back to the code. After several minutes, Joe feels confident he understands the design and heads back to his office. Jane goes back to her own work. Between exploring the search results and answering Joe’s questions, Jane’s development environment now has dozens of open documents. Jane tries to resume her task but cannot find where she left off in all the clutter. She closes all open documents, reissues her original search, finds her place in the search results, and carries on exploring the dependency on the unsupported library."
]
} |
1702.06973 | 2592328946 | This paper introduces the Java Software Evolution Tracker, a visualization and analysis tool that provides practitioners the means to examine the evolution of a software system from a top to bottom perspective, starting with changes in the graphical user interface all the way to source code modifications. | For the interested reader, a detailed evaluation concerning software visualizers that takes into account effectiveness and presentation techniques is available in @cite_3 . | {
"cite_N": [
"@cite_3"
],
"mid": [
"2014707001"
],
"abstract": [
"We provide an evaluation of 15 software visualization tools applicable to corrective maintenance. The tasks supported as well as the techniques used are presented and graded based on the support level. By analyzing user acceptation of current tools, we aim to help developers to select what to consider, avoid or improve in their next releases. Tool users can also recognize what to broadly expect (and what not) from such tools, thereby supporting an informed choice for the tools evaluated here and for similar tools."
]
} |
1702.06877 | 2950720316 | In recent years, bullying and aggression against users on social media have grown significantly, causing serious consequences to victims of all demographics. In particular, cyberbullying affects more than half of young social media users worldwide, and has also led to teenage suicides, prompted by prolonged and/or coordinated digital harassment. Nonetheless, tools and technologies for understanding and mitigating it are scarce and mostly ineffective. In this paper, we present a principled and scalable approach to detect bullying and aggressive behavior on Twitter. We propose a robust methodology for extracting text, user, and network-based attributes, studying the properties of cyberbullies and aggressors, and what features distinguish them from regular users. We find that bully users post less, participate in fewer online communities, and are less popular than normal users, while aggressors are quite popular and tend to include more negativity in their posts. We evaluate our methodology using a corpus of 1.6M tweets posted over 3 months, and show that machine learning classification algorithms can accurately detect users exhibiting bullying and aggressive behavior, achieving over 90% AUC. | Over the past few years, several techniques have been proposed to measure and detect offensive or abusive content and behavior on platforms like Instagram @cite_53 , YouTube @cite_55 , 4chan @cite_1 , Yahoo Finance @cite_0 , and Yahoo Answers @cite_48 . @cite_55 use both textual and structural features (e.g., the ratio of imperative sentences, adjectives and adverbs used as offensive words) to predict a user's propensity to produce offensive content in YouTube comments, while @cite_0 rely on word embeddings to distinguish abusive comments on Yahoo Finance. @cite_3 perform hate speech detection on Yahoo Finance and News data, using supervised learning classification. @cite_48 find that users tend to flag abusive content posted on Yahoo Answers in an overwhelmingly correct way (as confirmed by human annotators). Also, some users significantly deviate from community norms, posting a large amount of content that is flagged as abusive. Through careful feature extraction, they also show it is possible to use machine learning methods to predict which users will be suspended. | {
"cite_N": [
"@cite_48",
"@cite_55",
"@cite_53",
"@cite_1",
"@cite_3",
"@cite_0"
],
"mid": [
"",
"2160685721",
"1964767137",
"2164777277",
"",
"1071251684"
],
"abstract": [
"",
"Since the textual contents on online social media are highly unstructured, informal, and often misspelled, existing research on message-level offensive language detection cannot accurately detect offensive content. Meanwhile, user-level offensiveness detection seems a more feasible approach but it is an under researched area. To bridge this gap, we propose the Lexical Syntactic Feature (LSF) architecture to detect offensive content and identify potential offensive users in social media. We distinguish the contribution of pejoratives profanities and obscenities in determining offensive content, and introduce hand-authoring syntactic rules in identifying name-calling harassments. In particular, we incorporate a user's writing style, structure and specific cyber bullying content as features to predict the user's potentiality to send out offensive content. Results from experiments showed that our LSF framework performed significantly better than existing methods in offensive content detection. It achieves precision of 98.24 and recall of 94.34 in sentence offensive detection, as well as precision of 77.9 and recall of 77.8 in user offensive detection. Meanwhile, the processing speed of LSF is approximately 10msec per sentence, suggesting the potential for effective deployment in social media.",
"Online gaming is a multi-billion dollar industry that entertains a large, global population. One unfortunate phenomenon, however, poisons the competition and the fun: cheating. The costs of cheating span from industry-supported expenditures to detect and limit cheating, to victims' monetary losses due to cyber crime. This paper studies cheaters in the Steam Community, an online social network built on top of the world's dominant digital game delivery platform. We collected information about more than 12 million gamers connected in a global social network, of which more than 700 thousand have their profiles flagged as cheaters. We also collected in-game interaction data of over 10 thousand players from a popular multiplayer gaming server. We show that cheaters are well embedded in the social and interaction networks: their network position is largely indistinguishable from that of fair players. We observe that the cheating behavior appears to spread through a social mechanism: the presence and the number of cheater friends of a fair player is correlated with the likelihood of her becoming a cheater in the future. Also, we observe that there is a social penalty involved with being labeled as a cheater: cheaters are likely to switch to more restrictive privacy settings once they are tagged and they lose more friends than fair players. Finally, we observe that the number of cheaters is not correlated with the geographical, real-world population density, or with the local popularity of the Steam Community.",
"Abstract This paper presents a general statistical methodology for the analysis of multivariate categorical data arising from observer reliability studies. The procedure essentially involves the construction of functions of the observed proportions which are directed at the extent to which the observers agree among themselves and the construction of test statistics for hypotheses involving these functions. Tests for interobserver bias are presented in terms of first-order marginal homogeneity and measures of interobserver agreement are developed as generalized kappa-type statistics. These procedures are illustrated with a clinical diagnosis example from the epidemiological literature.",
"",
"We address the problem of hate speech detection in online user comments. Hate speech, defined as an \"abusive speech targeting specific group characteristics, such as ethnicity, religion, or gender\", is an important problem plaguing websites that allow users to leave feedback, having a negative impact on their online business and overall user experience. We propose to learn distributed low-dimensional representations of comments using recently proposed neural language models, that can then be fed as inputs to a classification algorithm. Our approach addresses issues of high-dimensionality and sparsity that impact the current state-of-the-art, resulting in highly efficient and effective hate speech detectors."
]
} |
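
The word-embedding route attributed to @cite_0 can be caricatured in a few lines: represent each comment as the mean of its word vectors and feed that to a linear classifier. This sketch makes obvious simplifications (whitespace tokenization, a plain `word_vecs` lookup table, averaging instead of a learned comment embedding), so it illustrates the feature idea rather than the paper's method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def comment_vector(tokens, word_vecs, dim=100):
    """Mean of the word embeddings of a comment; unknown words are skipped."""
    vecs = [word_vecs[t] for t in tokens if t in word_vecs]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def train_abuse_classifier(comments, labels, word_vecs):
    # word_vecs: dict token -> np.ndarray of length 100 (assumed format).
    X = np.stack([comment_vector(c.split(), word_vecs) for c in comments])
    return LogisticRegression(max_iter=1000).fit(X, labels)
```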
1702.06877 | 2950720316 | In recent years, bullying and aggression against users on social media have grown significantly, causing serious consequences to victims of all demographics. In particular, cyberbullying affects more than half of young social media users worldwide, and has also led to teenage suicides, prompted by prolonged and/or coordinated digital harassment. Nonetheless, tools and technologies for understanding and mitigating it are scarce and mostly ineffective. In this paper, we present a principled and scalable approach to detect bullying and aggressive behavior on Twitter. We propose a robust methodology for extracting text, user, and network-based attributes, studying the properties of cyberbullies and aggressors, and what features distinguish them from regular users. We find that bully users post less, participate in fewer online communities, and are less popular than normal users, while aggressors are quite popular and tend to include more negativity in their posts. We evaluate our methodology using a corpus of 1.6M tweets posted over 3 months, and show that machine learning classification algorithms can accurately detect users exhibiting bullying and aggressive behavior, achieving over 90% AUC. | @cite_47 detect cyberbullying by decomposing it into detection of sensitive topics. They collect YouTube comments from controversial videos, use manual annotation to characterize them, and perform a bag-of-words driven text classification. @cite_42 study linguistic characteristics in cyberbullying-related content extracted from Ask.fm, aiming to detect fine-grained types of cyberbullying, such as threats and insults. Besides the victim and harasser, they also identify bystander-defenders and bystander-assistants, who support, respectively, the victim or the harasser. @cite_53 study images posted on Instagram and their associated comments to detect and distinguish between cyberaggression and cyberbullying. Finally, the authors in @cite_5 present an approach for detecting bullying words in tweets, as well as for inferring demographics of bullies, such as their age and gender. | {
"cite_N": [
"@cite_5",
"@cite_47",
"@cite_42",
"@cite_53"
],
"mid": [
"",
"2209227144",
"2283668614",
"1964767137"
],
"abstract": [
"",
"The scourge of cyberbullying has assumed alarming proportions with an ever-increasing number of adolescents admitting to having dealt with it either as a victim or as a bystander. Anonymity and the lack of meaningful supervision in the electronic medium are two factors that have exacerbated this social menace. Comments or posts involving sensitive topics that are personal to an individual are more likely to be internalized by a victim, often resulting in tragic outcomes. We decompose the overall detection problem into detection of sensitive topics, lending itself into text classification sub-problems. We experiment with a corpus of 4500 YouTube comments, applying a range of binary and multiclass classifiers. We find that binary classifiers for individual labels outperform multiclass classifiers. Our findings show that the detection of textual cyberbullying can be tackled by building individual topic-sensitive classifiers.",
"The recent development of social media poses new challenges to the research community in analyzing online interactions between people. Social networking sites offer great opportunities for connecting with others, but also increase the vulnerability of young people to undesirable phenomena, such as cybervictimization. Recent research reports that on average, 20 to 40 of all teenagers have been victimized online. In this paper, we focus on cyberbullying as a particular form of cybervictimization. Successful prevention depends on the adequate detection of potentially harmful messages. However, given the massive information overload on the Web, there is a need for intelligent systems to identify potential risks automatically. We present the construction and annotation of a corpus of Dutch social media posts annotated with fine-grained cyberbullying-related text categories, such as insults and threats. Also, the specific participants (harasser, victim or bystander) in a cyberbullying conversation are identified to enhance the analysis of human interactions involving cyberbullying. Apart from describing our dataset construction and annotation, we present proof-of-concept experiments on the automatic identification of cyberbullying events and fine-grained cyberbullying categories.",
"Online gaming is a multi-billion dollar industry that entertains a large, global population. One unfortunate phenomenon, however, poisons the competition and the fun: cheating. The costs of cheating span from industry-supported expenditures to detect and limit cheating, to victims' monetary losses due to cyber crime. This paper studies cheaters in the Steam Community, an online social network built on top of the world's dominant digital game delivery platform. We collected information about more than 12 million gamers connected in a global social network, of which more than 700 thousand have their profiles flagged as cheaters. We also collected in-game interaction data of over 10 thousand players from a popular multiplayer gaming server. We show that cheaters are well embedded in the social and interaction networks: their network position is largely indistinguishable from that of fair players. We observe that the cheating behavior appears to spread through a social mechanism: the presence and the number of cheater friends of a fair player is correlated with the likelihood of her becoming a cheater in the future. Also, we observe that there is a social penalty involved with being labeled as a cheater: cheaters are likely to switch to more restrictive privacy settings once they are tagged and they lose more friends than fair players. Finally, we observe that the number of cheaters is not correlated with the geographical, real-world population density, or with the local popularity of the Steam Community."
]
} |
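
The decomposition used by @cite_47 can be sketched as one binary bag-of-words classifier per sensitive topic, echoing their finding that per-label binary classifiers outperform a single multiclass one. Vectorizer settings and names below are illustrative assumptions.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

def train_topic_classifiers(comments, topic_labels):
    """comments: list of strings; topic_labels: dict topic -> list of 0/1
    labels aligned with comments. Returns the shared vectorizer and one
    binary classifier per sensitive topic."""
    vec = CountVectorizer(min_df=2)
    X = vec.fit_transform(comments)
    return vec, {t: LinearSVC().fit(X, y) for t, y in topic_labels.items()}
```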
1702.06877 | 2950720316 | In recent years, bullying and aggression against users on social media have grown significantly, causing serious consequences to victims of all demographics. In particular, cyberbullying affects more than half of young social media users worldwide, and has also led to teenage suicides, prompted by prolonged and/or coordinated digital harassment. Nonetheless, tools and technologies for understanding and mitigating it are scarce and mostly ineffective. In this paper, we present a principled and scalable approach to detect bullying and aggressive behavior on Twitter. We propose a robust methodology for extracting text, user, and network-based attributes, studying the properties of cyberbullies and aggressors, and what features distinguish them from regular users. We find that bully users post less, participate in fewer online communities, and are less popular than normal users, while aggressors are quite popular and tend to include more negativity in their posts. We evaluate our methodology using a corpus of 1.6M tweets posted over 3 months, and show that machine learning classification algorithms can accurately detect users exhibiting bullying and aggressive behavior, achieving over 90% AUC. | Previous work often used features such as punctuation, URLs, part-of-speech, n-grams, Bag of Words (BoW), as well as lexical features relying on dictionaries of offensive words, and user-based features such as a user's membership duration and activity, number of friends and followers, etc. Different supervised approaches have been used for detection: @cite_3 uses a regression model, whereas @cite_54 @cite_47 @cite_42 rely on other methods like Naive Bayes, Support Vector Machines (SVM), and Decision Trees (J48). By contrast, @cite_38 use a graph-based approach based on likes and comments to build bipartite graphs and identify negative behavior. A similar graph-based approach is also used in @cite_53 . | {
"cite_N": [
"@cite_38",
"@cite_54",
"@cite_42",
"@cite_53",
"@cite_3",
"@cite_47"
],
"mid": [
"2949557692",
"1823790170",
"2283668614",
"1964767137",
"",
"2209227144"
],
"abstract": [
"Cyberbullying has emerged as an important and growing social problem, wherein people use online social networks and mobile phones to bully victims with offensive text, images, audio and video on a 247 basis. This paper studies negative user behavior in the Ask.fm social network, a popular new site that has led to many cases of cyberbullying, some leading to suicidal behavior.We examine the occurrence of negative words in Ask.fms question+answer profiles along with the social network of likes of questions+answers. We also examine properties of users with cutting behavior in this social network.",
"Cyberbullying is becoming a major concern in online environments with troubling consequences. However, most of the technical studies have focused on the detection of cyberbullying through identifying harassing comments rather than preventing the incidents by detecting the bullies. In this work we study the automatic detection of bully users on YouTube. We compare three types of automatic detection: an expert system, supervised machine learning models, and a hybrid type combining the two. All these systems assign a score indicating the level of “bulliness” of online bullies. We demonstrate that the expert system outperforms the machine learning models. The hybrid classifier shows an even better performance.",
"The recent development of social media poses new challenges to the research community in analyzing online interactions between people. Social networking sites offer great opportunities for connecting with others, but also increase the vulnerability of young people to undesirable phenomena, such as cybervictimization. Recent research reports that on average, 20 to 40 of all teenagers have been victimized online. In this paper, we focus on cyberbullying as a particular form of cybervictimization. Successful prevention depends on the adequate detection of potentially harmful messages. However, given the massive information overload on the Web, there is a need for intelligent systems to identify potential risks automatically. We present the construction and annotation of a corpus of Dutch social media posts annotated with fine-grained cyberbullying-related text categories, such as insults and threats. Also, the specific participants (harasser, victim or bystander) in a cyberbullying conversation are identified to enhance the analysis of human interactions involving cyberbullying. Apart from describing our dataset construction and annotation, we present proof-of-concept experiments on the automatic identification of cyberbullying events and fine-grained cyberbullying categories.",
"Online gaming is a multi-billion dollar industry that entertains a large, global population. One unfortunate phenomenon, however, poisons the competition and the fun: cheating. The costs of cheating span from industry-supported expenditures to detect and limit cheating, to victims' monetary losses due to cyber crime. This paper studies cheaters in the Steam Community, an online social network built on top of the world's dominant digital game delivery platform. We collected information about more than 12 million gamers connected in a global social network, of which more than 700 thousand have their profiles flagged as cheaters. We also collected in-game interaction data of over 10 thousand players from a popular multiplayer gaming server. We show that cheaters are well embedded in the social and interaction networks: their network position is largely indistinguishable from that of fair players. We observe that the cheating behavior appears to spread through a social mechanism: the presence and the number of cheater friends of a fair player is correlated with the likelihood of her becoming a cheater in the future. Also, we observe that there is a social penalty involved with being labeled as a cheater: cheaters are likely to switch to more restrictive privacy settings once they are tagged and they lose more friends than fair players. Finally, we observe that the number of cheaters is not correlated with the geographical, real-world population density, or with the local popularity of the Steam Community.",
"",
"The scourge of cyberbullying has assumed alarming proportions with an ever-increasing number of adolescents admitting to having dealt with it either as a victim or as a bystander. Anonymity and the lack of meaningful supervision in the electronic medium are two factors that have exacerbated this social menace. Comments or posts involving sensitive topics that are personal to an individual are more likely to be internalized by a victim, often resulting in tragic outcomes. We decompose the overall detection problem into detection of sensitive topics, lending itself into text classification sub-problems. We experiment with a corpus of 4500 YouTube comments, applying a range of binary and multiclass classifiers. We find that binary classifiers for individual labels outperform multiclass classifiers. Our findings show that the detection of textual cyberbullying can be tackled by building individual topic-sensitive classifiers."
]
} |
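
A minimal harness for the supervised setups listed above: given a numeric feature matrix X (punctuation, n-gram, lexical, and user-based features stacked by the caller) and binary labels y, train a classifier and report ROC AUC. The model choice and split are arbitrary illustrative defaults, not a setup from any of the cited papers.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def evaluate_detector(X, y):
    """Hold-out evaluation of a bully/aggressor detector via ROC AUC
    (assumes binary labels)."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_tr, y_tr)
    return roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
```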
1702.06877 | 2950720316 | In recent years, bullying and aggression against users on social media have grown significantly, causing serious consequences to victims of all demographics. In particular, cyberbullying affects more than half of young social media users worldwide, and has also led to teenage suicides, prompted by prolonged and/or coordinated digital harassment. Nonetheless, tools and technologies for understanding and mitigating it are scarce and mostly ineffective. In this paper, we present a principled and scalable approach to detect bullying and aggressive behavior on Twitter. We propose a robust methodology for extracting text, user, and network-based attributes, studying the properties of cyberbullies and aggressors, and what features distinguish them from regular users. We find that bully users post less, participate in fewer online communities, and are less popular than normal users, while aggressors are quite popular and tend to include more negativity in their posts. We evaluate our methodology using a corpus of 1.6M tweets posted over 3 months, and show that machine learning classification algorithms can accurately detect users exhibiting bullying and aggressive behavior, achieving over 90% AUC. | Sentiment analysis of text can also contribute useful features in detecting offensive or abusive content. For instance, @cite_18 use sentiment scores of data collected from Kongregate (an online gaming site), Slashdot, and MySpace. They use a probabilistic sentiment analysis approach to distinguish between bullies and non-bullies, and rank the most influential users based on a predator-victim graph built from exchanged messages. @cite_10 rely on sentiment to identify victims on Twitter who pose a high risk to themselves or others. Apart from using positive and negative sentiments, they consider specific emotions such as anger, embarrassment, and sadness. Finally, Patch @cite_43 studies the presence of such emotions (anger, sadness, fear) in bullying instances on Twitter. | {
"cite_N": [
"@cite_43",
"@cite_18",
"@cite_10"
],
"mid": [
"2182205285",
"2229588878",
"2102892431"
],
"abstract": [
"",
"The rapid growth of social networking and gaming sites is associated with an increase of online bullying activities which, in the worst scenario, result in suicidal attempts by the victims. In this paper, we propose an effective technique to detect and rank the most influential persons (predators and victims). It simplifies the network communication problem through a proposed detection graph model. The experimental results indicate that this technique is highly accurate.",
"Bullying is a serious national health issue among adolescents. Social media offers a new opportunity to study bullying in both physical and cyber worlds. Sentiment analysis has the potential to identify victims who pose high risk to themselves or others, and to enhance the scientific understanding of bullying overall. We identify seven emotions common in bullying. While some of the emotions are well-studied before, others are non-standard in the sentiment analysis literature. We propose a fast training procedure to recognize these emotions without explicitly producing a conventional labeled training dataset. We apply our procedure to social media posts on bullying and discuss our findings."
]
} |
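
Lexicon-based emotion scores of the kind used as features above are trivial to compute. The mini-lexicons below are made-up placeholders; real systems rely on curated resources such as the NRC emotion lexicon.

```python
# Hypothetical mini-lexicons, for illustration only.
EMOTION_LEXICON = {
    "anger":   {"hate", "angry", "stupid"},
    "sadness": {"sad", "alone", "cry"},
    "fear":    {"scared", "afraid", "threat"},
}

def emotion_scores(tweet):
    """Normalized per-emotion token counts for one tweet, usable as
    classifier features or for flagging high-risk users."""
    tokens = tweet.lower().split()
    n = max(len(tokens), 1)
    return {emo: sum(t in words for t in tokens) / n
            for emo, words in EMOTION_LEXICON.items()}
```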
1702.06822 | 2590642499 | We investigate different mean-field-like approximations for stochastic dynamics on graphs, within the framework of a cluster-variational approach. In analogy with its equilibrium counterpart, this approach allows one to give a unified view of various (previously known) approximation schemes, and suggests quite a systematic way to improve the level of accuracy. We compare the different approximations with Monte Carlo simulations on a reversible (susceptible-infected-susceptible) discrete-time epidemic-spreading model on random graphs. | Another special case of our reference model, which nonetheless includes several models of physical relevance (such as the kinetic Ising model, or the ordinary Voter model), is that in which the elementary transition probability for a given site @math (namely, @math ) does not depend on the current-time configuration of the site itself @math (so that we can denote it as @math ). In other words, the next-time configuration of each site is conditioned only by the current-time configurations of its neighbors. For this reason, we shall denote this case as a purely neighbor-conditioned dynamics. As mentioned in the introduction, the star and diamond approximations of reference @cite_1 refer to this kind of dynamics. Actually, it turns out that, for a generic purely-neighbor-conditioned dynamics, the PQ and P approximations degenerate into each other, and both coincide with the star approximation of reference @cite_1 . Similarly, the M and PQR approximations degenerate into each other, and coincide with the diamond approximation. All these connections are detailed in and . As a further consequence, we can state that in this case the diamond approximation is itself an instance of Kikuchi's path probability method. | {
"cite_N": [
"@cite_1"
],
"mid": [
"2019228304"
],
"abstract": [
"We introduce a new variational approach to the stationary state of kinetic Ising-like models. The approach is based on the cluster expansion of the entropy term appearing in a functional which is minimized by the system history. We rederive a known mean-field theory and propose a new method, here called diamond approximation, which turns out to be more accurate and faster than other methods of comparable computational complexity. Copyright EDP Sciences, SIF, Springer-Verlag Berlin Heidelberg 2013"
]
} |
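As a concrete point of reference for the neighbor-conditioned dynamics discussed above, here is the simplest (fully factorized) mean-field closure for the discrete-time SIS model mentioned in the abstract. This is a generic textbook approximation given for orientation only, not the paper's PQ/P/M/PQR or star/diamond schemes.

```latex
% Fully factorized mean-field update for discrete-time SIS on a graph:
% p_i(t) = probability that site i is infected at time t,
% \lambda = per-contact infection probability, \mu = recovery probability,
% \partial i = set of neighbors of site i.
p_i(t+1) = (1-\mu)\,p_i(t)
         + \bigl(1 - p_i(t)\bigr)\Bigl[1 - \prod_{j\in\partial i}\bigl(1 - \lambda\,p_j(t)\bigr)\Bigr]
```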
1702.07124 | 2591627172 | We present a novel proof-of-concept attack named Trojan of Things (ToT), which aims to attack NFC-enabled mobile devices such as smartphones. The key idea of ToT attacks is to covertly embed maliciously programmed NFC tags into common objects routinely encountered in daily life such as banknotes, clothing, or furniture, which are not considered as NFC touchpoints. To fully explore the threat of ToT, we develop two striking techniques named ToT device and Phantom touch generator. These techniques enable an attacker to carry out various severe and sophisticated attacks unbeknownst to the device owner who unintentionally puts the device close to a ToT. We discuss the feasibility of the attack as well as the possible countermeasures against the threats of ToT attacks. | There have been many studies on side-channel attacks on touchscreens (LCDs); @cite_20 used smudges left on the screen to infer a graphical password, @cite_1 used the data collected from a surveillance camera to recognize keystrokes of a victim, and @cite_10 used electromagnetic emanation to reconstruct a victim's tablet display. To the best of our knowledge, while these attacks passively steal data from the touchscreen, our Phantom touch generator is the first attack that actively radiates signals toward the touchscreen to cause targeted malfunctions. | {
"cite_N": [
"@cite_10",
"@cite_1",
"@cite_20"
],
"mid": [
"2074960873",
"2038093357",
"1626992774"
],
"abstract": [
"The use of tablet PCs is spreading rapidly, and accordingly users browsing and inputting personal information in public spaces can often be seen by third parties. Unlike conventional mobile phones and notebook PCs equipped with distinct input devices (e.g., keyboards), tablet PCs have touchscreen keyboards for data input. Such integration of display and input device increases the potential for harm when the display is captured by malicious attackers. This paper presents the description of reconstructing tablet PC displays via measurement of electromagnetic (EM) emanation. In conventional studies, such EM display capture has been achieved by using non-portable setups. Those studies also assumed that a large amount of time was available in advance of capture to obtain the electrical parameters of the target display. In contrast, this paper demonstrates that such EM display capture is feasible in real time by a setup that fits in an attache case. The screen image reconstruction is achieved by performing a prior course profiling and a complemental signal processing instead of the conventional fine parameter tuning. Such complemental processing can eliminate the differences of leakage parameters among individuals and therefore correct the distortions of images. The attack distance, 2 m, makes this method a practical threat to general tablet PCs in public places. This paper discusses possible attack scenarios based on the setup described above. In addition, we describe a mechanism of EM emanation from tablet PCs and a countermeasure against such EM display capture.",
"The pervasiveness of mobile devices increases the risk of exposing sensitive information on the go. In this paper, we arise this concern by presenting an automatic attack against modern touchscreen keyboards. We demonstrate the attack against the Apple iPhone — 2010's most popular touchscreen device — although it can be adapted to other devices (e.g., Android) that employ similar key-magnifying keyboards. Our attack processes the stream of frames from a video camera (e.g., surveillance or portable camera) and recognizes keystrokes online, in a fraction of the time needed to perform the same task by direct observation or offline analysis of a recorded video, which can be unfeasible for large amount of data. Our attack detects, tracks, and rectifies the target touchscreen, thus following the device or camera's movements and eliminating possible perspective distortions and rotations In real-world settings, our attack can automatically recognize up to 97.07 percent of the keystrokes (91.03 on average), with 1.15 percent of errors (3.16 on average) at a speed ranging from 37 to 51 keystrokes per minute.",
"Touch screens are an increasingly common feature on personal computing devices, especially smartphones, where size and user interface advantages accrue from consolidating multiple hardware components (keyboard, number pad, etc.) into a single software definable user interface. Oily residues, or smudges, on the touch screen surface, are one side effect of touches from which frequently used patterns such as a graphical password might be inferred. In this paper we examine the feasibility of such smudge attacks on touch screens for smartphones, and focus our analysis on the Android password pattern. We first investigate the conditions (e.g., lighting and camera orientation) under which smudges are easily extracted. In the vast majority of settings, partial or complete patterns are easily retrieved. We also emulate usage situations that interfere with pattern identification, and show that pattern smudges continue to be recognizable. Finally, we provide a preliminary analysis of applying the information learned in a smudge attack to guessing an Android password pattern."
]
} |
1702.06728 | 2589759987 | Inspired by the recent advances of image super-resolution using convolutional neural network (CNN), we propose a CNN-based block up-sampling scheme for intra frame coding. A block can be down-sampled before being compressed by normal intra coding, and then up-sampled to its original resolution. Different from previous studies on down/up-sampling-based coding, the up-sampling methods in our scheme have been designed by training CNN instead of hand-crafted. We explore a new CNN structure for up-sampling, which features deconvolution of feature maps, multi-scale fusion, and residue learning, making the network both compact and efficient. We also design different networks for the up-sampling of luma and chroma components, respectively, where the chroma up-sampling CNN utilizes the luma information to boost its performance. In addition, we design a two-stage up-sampling process, the first stage being within the block-by-block coding loop, and the second stage being performed on the entire frame, so as to refine block boundaries. We also empirically study how to set the coding parameters of down-sampled blocks for pursuing the frame-level rate-distortion optimization. Our proposed scheme is implemented into the high-efficiency video coding (HEVC) reference software, and a comprehensive set of experiments have been performed to evaluate our methods. Experimental results show that our scheme achieves significant bits saving compared with the HEVC anchor, especially at low bit rates, leading to on average 5.5% BD-rate reduction on common test sequences and on average 9.0% BD-rate reduction on ultrahigh definition test sequences. | Down-sampling before encoding and up-sampling after decoding is a well-known strategy for image and video coding in scenarios where the transmission bandwidth is limited. Much research on this topic has focused on developing efficient up-sampling methods. For example, the down/up-sampling-based video coding scheme in @cite_5 adopts the video SR method proposed in @cite_2 , which is specifically designed for compressed videos by incorporating information like motion vectors into the SR task using a Bayesian framework. The scheme proposed by Shen @cite_3 adopts another up-sampling method, which belongs to learning-based SR methods, and imposes constraints on the nearest-neighbor searching region and rectifies the "unreal" pixels using inter-resolution and inter-frame correlations. Another scheme proposed by Barreto @cite_36 takes into account the locally variant image characteristics, and performs region-based SR to improve the reconstruction quality. The segmentation of regions is performed at the encoder side, and the segmentation map is signaled as side information to the decoder to guide the SR process. | {
"cite_N": [
"@cite_36",
"@cite_5",
"@cite_3",
"@cite_2"
],
"mid": [
"1973777106",
"2034564742",
"2161112041",
"2110548691"
],
"abstract": [
"Every user of multimedia technology expects good image and video visual quality independently of the particular characteristics of the receiver or the communication networks employed. Unfortunately, due to factors like processing power limitations and channel capabilities, images or video sequences are often downsampled and or transmitted or stored at low bitrates, resulting in a degradation of their final visual quality. In this paper, we propose a region-based framework for intentionally introducing downsampling of the high resolution (HR) image sequences before compression and then utilizing super resolution (SR) techniques for generating an HR video sequence at the decoder. Segmentation is performed at the encoder on groups of images to classify their blocks into three different types according to their motion and texture. The obtained segmentation is used to define the downsampling process at the encoder and it is encoded and provided to the decoder as side information in order to guide the SR process. All the components of the proposed framework are analyzed in detail. A particular implementation of it is described and tested experimentally. The experimental results validate the usefulness of the proposed method.",
"The term super-resolution is typically used in the literature to describe the process of obtaining a high resolution (HR) image or a sequence of HR images from a set of low resolution (LR) observations. This term has been applied primarily to spatial and temporal resolution enhancement. However, intentional pre-processing and downsampling can be applied during encoding and super-resolution techniques to upsample the image can be applied during decoding when video compression is the main objective. In this paper we consider the following three video compression models. The first one simply compresses the sequence using any of the available standard compression methods, the second one pre-processes (without downsampling) the image sequence before compression, so that post-processing (without upsampling) is applied to the compressed sequence. The third model includes downsampling in the pre-processing stage and the application of a super resolution technique during decoding. In this paper we describe these three models but concentrate on the application of super-resolution techniques as a way to post-process and upsample a compressed video sequences. Experimental results are provided on a wide range of bitrates for two very important applications: format conversion between different platforms and scalable video coding.",
"It has been reported that oversampling a still image before compression does not guarantee a good image quality. Similarly, down-sampling before video compression in low bit rate video coding may alleviate the blocking effect and improve peak signal-to-noise ratio of the decoded frames. When the number of discrete cosine transform coefficients is reduced in such a down-sampling based coding (DBC), the bit budget of each coefficient will increase, thus reduce the quantization error. A DBC video coding scheme is proposed in this paper, where a super-resolution technique is employed to restore the down-sampled frames to their original resolutions. The performance improvement of the proposed DBC scheme is analyzed at low bit rates, and verified by experiments.",
"Super-resolution algorithms recover high-frequency information from a sequence of low-resolution observations. In this paper, we consider the impact of video compression on the super-resolution task. Hybrid motion-compensation and transform coding schemes are the focus, as these methods provide observations of the underlying displacement values as well as a variable noise process. We utilize the Bayesian framework to incorporate this information and fuse the super-resolution and post-processing problems. A tractable solution is defined, and relationships between algorithm parameters and information in the compressed bitstream are established. The association between resolution recovery and compression ratio is also explored. Simulations illustrate the performance of the procedure with both synthetic and nonsynthetic sequences."
]
} |
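The down/up-sampling strategy described in this row can be sketched end to end in a few lines. In the sketch below, JPEG stands in for the "normal intra coding" stage and bicubic interpolation for the up-sampler; both are illustrative substitutes for the HEVC and SR components the cited works actually use.

```python
# Sketch of down/up-sampling-based coding: down-sample before compression,
# compress, then up-sample after decoding. JPEG and bicubic interpolation
# are stand-ins for a real intra codec and a real SR method.
import cv2

def downup_codec(img_bgr, scale=0.5, jpeg_quality=30):
    h, w = img_bgr.shape[:2]
    small = cv2.resize(img_bgr, (int(w * scale), int(h * scale)),
                       interpolation=cv2.INTER_AREA)      # down-sample
    ok, bitstream = cv2.imencode(".jpg", small,
                                 [cv2.IMWRITE_JPEG_QUALITY, jpeg_quality])
    assert ok
    decoded = cv2.imdecode(bitstream, cv2.IMREAD_COLOR)   # "normal" decoding
    rec = cv2.resize(decoded, (w, h),
                     interpolation=cv2.INTER_CUBIC)       # up-sample (SR step)
    return rec, len(bitstream)

# Usage: rec, nbytes = downup_codec(cv2.imread("frame.png"))
```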
1702.06728 | 2589759987 | Inspired by the recent advances of image super-resolution using convolutional neural network (CNN), we propose a CNN-based block up-sampling scheme for intra frame coding. A block can be down-sampled before being compressed by normal intra coding, and then up-sampled to its original resolution. Different from previous studies on down/up-sampling-based coding, the up-sampling methods in our scheme have been designed by training CNN instead of hand-crafted. We explore a new CNN structure for up-sampling, which features deconvolution of feature maps, multi-scale fusion, and residue learning, making the network both compact and efficient. We also design different networks for the up-sampling of luma and chroma components, respectively, where the chroma up-sampling CNN utilizes the luma information to boost its performance. In addition, we design a two-stage up-sampling process, the first stage being within the block-by-block coding loop, and the second stage being performed on the entire frame, so as to refine block boundaries. We also empirically study how to set the coding parameters of down-sampled blocks for pursuing the frame-level rate-distortion optimization. Our proposed scheme is implemented into the high-efficiency video coding (HEVC) reference software, and a comprehensive set of experiments have been performed to evaluate our methods. Experimental results show that our scheme achieves significant bits saving compared with the HEVC anchor, especially at low bit rates, leading to on average 5.5% BD-rate reduction on common test sequences and on average 9.0% BD-rate reduction on ultrahigh definition test sequences. | The abovementioned studies all perform down-sampling of the entire image frame. However, a uniform down-sampling rate cannot suit all the different image regions, which have varying characteristics. Locally adaptive down-sampling rates have thus been proposed. In @cite_38 , the appropriate down-sampling rates have been derived through theoretical analyses. In @cite_30 , compliant with block-based coding, down-sampling rates are made adaptive for each block and selected from @math , @math , @math , and @math . | {
"cite_N": [
"@cite_30",
"@cite_38"
],
"mid": [
"2152818797",
"2100460507"
],
"abstract": [
"At low bit rates, better coding quality can be achieved by downsampling the image prior to compression and estimating the missing portion after decompression. This paper presents a new algorithm in such a paradigm, based on the adaptive decision of appropriate downsampling directions ratios and quantization steps, in order to achieve higher coding quality with low bit rates with the consideration of local visual significance. The full-resolution image can be restored from the DCT coefficients of the downsampled pixels so that the spatial interpolation required otherwise is avoided. The proposed algorithm significantly raises the critical bit rate to approximately 1.2 bpp, from 0.15-0.41 bpp in the existing downsample-prior-to-JPEG schemes and, therefore, outperforms the standard JPEG method in a much wider bit-rate scope. The experiments have demonstrated better PSNR improvement over the existing techniques before the critical bit rate. In addition, the adaptive mode decision not only makes the critical bit rate less image-independent, but also automates the switching coders in variable bit-rate applications, since the algorithm turns to the standard JPEG method whenever it is necessary at higher bit rates",
"To transmit video contents over limited bandwidth network, video bitstreams may need to reduce the bit rate by encoding with coarse quantization parameters at the expense of degrading quality. At low bit rates, better coding quality can be achieved by downsampling the video prior to compression and upsampling later after decompression. In this paper, we present an adaptive downsampling upsampling video coding scheme in order to achieve better video quality at low bit rates in terms of both measure and visual quality. In particular, appropriate downsampling directions ratios and quantization step sizes are adaptively decided for encoding different regions of video frame with the consideration of local contents. Experimental results have shown the better performance of the proposed scheme over the regular coding and downsampling-based coding scheme with fixed downscaling ratio. In addition, the proposed scheme significantly raises the critical bit rate below which a downsampling-based coding scheme outperforms the regular coding."
]
} |
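A crude sketch of the block-adaptive idea from @cite_30 follows: try each candidate down-sampling ratio on a block and keep the one minimizing a rate-distortion-style cost. The candidate set, the bits proxy, and the Lagrangian weight below are all illustrative assumptions, not the cited scheme's actual decision rule.

```python
# Block-adaptive down-sampling rate selection (illustrative): pick the ratio
# minimizing MSE plus a crude bits proxy proportional to coded samples.
import numpy as np
import cv2

def best_rate_for_block(block, rates=(1.0, 0.5, 0.25), lam=50.0):
    h, w = block.shape[:2]
    best_rate, best_cost = None, np.inf
    for r in rates:
        small = cv2.resize(block, (max(1, int(w * r)), max(1, int(h * r))),
                           interpolation=cv2.INTER_AREA)
        rec = cv2.resize(small, (w, h), interpolation=cv2.INTER_CUBIC)
        mse = np.mean((block.astype(np.float64) - rec.astype(np.float64)) ** 2)
        cost = mse + lam * (r * r)  # distortion + proxy for number of coded samples
        if cost < best_cost:
            best_rate, best_cost = r, cost
    return best_rate
```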
1702.06728 | 2589759987 | Inspired by the recent advances of image super-resolution using convolutional neural network (CNN), we propose a CNN-based block up-sampling scheme for intra frame coding. A block can be down-sampled before being compressed by normal intra coding, and then up-sampled to its original resolution. Different from previous studies on down/up-sampling-based coding, the up-sampling methods in our scheme have been designed by training CNN instead of hand-crafted. We explore a new CNN structure for up-sampling, which features deconvolution of feature maps, multi-scale fusion, and residue learning, making the network both compact and efficient. We also design different networks for the up-sampling of luma and chroma components, respectively, where the chroma up-sampling CNN utilizes the luma information to boost its performance. In addition, we design a two-stage up-sampling process, the first stage being within the block-by-block coding loop, and the second stage being performed on the entire frame, so as to refine block boundaries. We also empirically study how to set the coding parameters of down-sampled blocks for pursuing the frame-level rate-distortion optimization. Our proposed scheme is implemented into the high-efficiency video coding (HEVC) reference software, and a comprehensive set of experiments have been performed to evaluate our methods. Experimental results show that our scheme achieves significant bits saving compared with the HEVC anchor, especially at low bit rates, leading to on average 5.5% BD-rate reduction on common test sequences and on average 9.0% BD-rate reduction on ultrahigh definition test sequences. | Super-resolution or resolution enhancement aims at reconstructing a high-resolution (HR) signal from a low-resolution (LR) observation, which has been studied extensively in the literature. Existing image SR methods can be categorized into interpolation-based, reconstruction-based, and learning-based ones @cite_1 . Recently, inspired by the success of deep learning, researchers have paid more attention to learning-based SR using CNN. | {
"cite_N": [
"@cite_1"
],
"mid": [
"2121058967"
],
"abstract": [
"This paper presents a new approach to single-image superresolution, based upon sparse signal representation. Research on image statistics suggests that image patches can be well-represented as a sparse linear combination of elements from an appropriately chosen over-complete dictionary. Inspired by this observation, we seek a sparse representation for each patch of the low-resolution input, and then use the coefficients of this representation to generate the high-resolution output. Theoretical results from compressed sensing suggest that under mild conditions, the sparse representation can be correctly recovered from the downsampled signals. By jointly training two dictionaries for the low- and high-resolution image patches, we can enforce the similarity of sparse representations between the low-resolution and high-resolution image patch pair with respect to their own dictionaries. Therefore, the sparse representation of a low-resolution image patch can be applied with the high-resolution image patch dictionary to generate a high-resolution image patch. The learned dictionary pair is a more compact representation of the patch pairs, compared to previous approaches, which simply sample a large amount of image patch pairs , reducing the computational cost substantially. The effectiveness of such a sparsity prior is demonstrated for both general image super-resolution (SR) and the special case of face hallucination. In both cases, our algorithm generates high-resolution images that are competitive or even superior in quality to images produced by other similar SR methods. In addition, the local sparse modeling of our approach is naturally robust to noise, and therefore the proposed algorithm can handle SR with noisy inputs in a more unified framework."
]
} |
1702.06728 | 2589759987 | Inspired by the recent advances of image super-resolution using convolutional neural network (CNN), we propose a CNN-based block up-sampling scheme for intra frame coding. A block can be down-sampled before being compressed by normal intra coding, and then up-sampled to its original resolution. Different from previous studies on down/up-sampling-based coding, the up-sampling methods in our scheme have been designed by training CNN instead of hand-crafted. We explore a new CNN structure for up-sampling, which features deconvolution of feature maps, multi-scale fusion, and residue learning, making the network both compact and efficient. We also design different networks for the up-sampling of luma and chroma components, respectively, where the chroma up-sampling CNN utilizes the luma information to boost its performance. In addition, we design a two-stage up-sampling process, the first stage being within the block-by-block coding loop, and the second stage being performed on the entire frame, so as to refine block boundaries. We also empirically study how to set the coding parameters of down-sampled blocks for pursuing the frame-level rate-distortion optimization. Our proposed scheme is implemented into the high-efficiency video coding (HEVC) reference software, and a comprehensive set of experiments have been performed to evaluate our methods. Experimental results show that our scheme achieves significant bits saving compared with the HEVC anchor, especially at low bit rates, leading to on average 5.5% BD-rate reduction on common test sequences and on average 9.0% BD-rate reduction on ultrahigh definition test sequences. | Dong first proposed a CNN-based method for single image SR, termed SRCNN @cite_34 , which has a simple network structure but demonstrated excellent performance. Later on, several studies have been conducted to improve upon SRCNN in several aspects. First, deeper networks have been explored to enhance the performance, such as the very deep network known as VDSR @cite_0 . Second, it is observed that the training of SRCNN converges too slowly, and residue learning @cite_12 , i.e., learning the difference between LR and HR images rather than directly learning the HR images, is adopted to accelerate the training and also improve the reconstruction quality @cite_0 . Third, the input to SRCNN is an interpolated version of the LR image, which is to be enhanced by the network. The fixed interpolation filters before the network may not be optimal. Thus, an end-to-end learning strategy, i.e., directly learning the LR-to-HR mapping by embedding the resolution change into the network, is observed to perform better @cite_11 . | {
"cite_N": [
"@cite_0",
"@cite_34",
"@cite_12",
"@cite_11"
],
"mid": [
"2951997238",
"54257720",
"2949650786",
"2505593925"
],
"abstract": [
"We present a highly accurate single-image super-resolution (SR) method. Our method uses a very deep convolutional network inspired by VGG-net used for ImageNet classification simonyan2015very . We find increasing our network depth shows a significant improvement in accuracy. Our final model uses 20 weight layers. By cascading small filters many times in a deep network structure, contextual information over large image regions is exploited in an efficient way. With very deep networks, however, convergence speed becomes a critical issue during training. We propose a simple yet effective training procedure. We learn residuals only and use extremely high learning rates ( @math times higher than SRCNN dong2015image ) enabled by adjustable gradient clipping. Our proposed method performs better than existing methods in accuracy and visual improvements in our results are easily noticeable.",
"We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) [15] that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage.",
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.",
"One impressive advantage of convolutional neural networks (CNNs) is their ability to automatically learn feature representation from raw pixels, eliminating the need for hand-designed procedures. However, recent methods for single image super-resolution (SR) fail to maintain this advantage. They utilize CNNs in two decoupled steps, i.e., first upsampling the low resolution (LR) image to the high resolution (HR) size with hand-designed techniques (e.g., bicubic interpolation), and then applying CNNs on the upsampled LR image to reconstruct HR results. In this paper, we seek an alternative and propose a new image SR method, which jointly learns the feature extraction, upsampling and HR reconstruction modules, yielding a completely end-to-end trainable deep CNN. As opposed to existing approaches, the proposed method conducts upsampling in the latent feature space with filters that are optimized for the task of image SR. In addition, the HR reconstruction is performed in a multi-scale manner to simultaneously incorporate both short- and long-range contextual information, ensuring more accurate restoration of HR images. To facilitate network training, a new training approach is designed, which jointly trains the proposed deep network with a relatively shallow network, leading to faster convergence and more superior performance. The proposed method is extensively evaluated on widely adopted data sets and improves the performance of state-of-the-art methods with a considerable margin. Moreover, in-depth ablation studies are conducted to verify the contribution of different network designs to image SR, providing additional insights for future research."
]
} |
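To make the residue-learning idea concrete, below is a minimal PyTorch sketch of a VDSR-style network: it takes a bicubic-interpolated LR image and learns only the residual to the HR target, added back via a global skip connection. Depth, width, and single-channel input are illustrative choices, not the exact VDSR configuration (which uses 20 weight layers).

```python
# Minimal VDSR-style residual SR network: predict the LR-to-HR residual and
# add it back to the interpolated input via a global skip connection.
import torch
import torch.nn as nn

class TinyVDSR(nn.Module):
    def __init__(self, depth=8, width=64):
        super().__init__()
        layers = [nn.Conv2d(1, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(width, 1, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):        # x: bicubic-interpolated LR, shape (N,1,H,W)
        return x + self.body(x)  # global residual connection

# Usage: y = TinyVDSR()(torch.randn(1, 1, 64, 64))
```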