aid stringlengths 9 15 | mid stringlengths 7 10 | abstract stringlengths 78 2.56k | related_work stringlengths 92 1.77k | ref_abstract dict |
|---|---|---|---|---|
1503.06450 | 2952649191 | Open domain relation extraction systems identify relation and argument phrases in a sentence without relying on any underlying schema. However, current state-of-the-art relation extraction systems are available only for English because of their heavy reliance on linguistic tools such as part-of-speech taggers and dependency parsers. We present a cross-lingual annotation projection method for language-independent relation extraction. We evaluate our method on a manually annotated test set and present results on three typologically different languages. We release these manual annotations and extracted relations in 61 languages from Wikipedia. | Cross-lingual projection has been used for the transfer of syntactic @cite_1 @cite_0 and semantic information @cite_3 @cite_11 . There has been growing interest in RE for languages other than English. One line of work presents a dependency-parser-based open RE system for Spanish, Portuguese and Galician. RE systems for Korean have been developed for both open-domain @cite_18 and closed-domain @cite_17 @cite_2 settings using annotation projection. These approaches use a Korean-English parallel corpus to project relations extracted in English onto Korean. Following projection, a Korean POS tagger and a dependency parser are employed to learn an RE system for Korean. | {
"cite_N": [
"@cite_18",
"@cite_1",
"@cite_17",
"@cite_3",
"@cite_0",
"@cite_2",
"@cite_11"
],
"mid": [
"2168419780",
"2079442239",
"2153199128",
"2172167844",
"2143954309",
"",
"2115057736"
],
"abstract": [
"Open information extraction (IE) is a weakly supervised IE paradigm that aims to extract relation-independent information from large-scale natural language documents without significant annotation efforts. A key challenge for Open IE is to achieve self-supervision, in which the training examples are automatically obtained. Although the feasibility of Open IE systems has been demonstrated for English, utilizing such techniques to build the systems for other languages is problematic because previous self-supervision approaches require language-specific knowledge. To improve the cross-language portability of Open IE systems, this paper presents a self-supervision approach that exploits parallel corpora to obtain training examples for the target language by projecting the annotations onto the source language. The merit of our method is demonstrated using a Korean Open IE system developed without any language-specific knowledge.",
"This paper investigates the potential for projecting linguistic annotations including part-of-speech tags and base noun phrase bracketings from one language to another via automatically word-aligned parallel corpora. First, experiments assess the accuracy of unmodified direct transfer of tags and brackets from the source language English to the target languages French and Chinese, both for noisy machine-aligned sentences and for clean hand-aligned sentences. Performance is then substantially boosted over both of these baselines by using training techniques optimized for very noisy data, yielding 94-96% core French part-of-speech tag accuracy and 90% French bracketing F-measure for stand-alone monolingual tools trained without the need for any human-annotated data in the given language.",
"Although researchers have conducted extensive studies on relation extraction in the last decade, supervised approaches are still limited because they require large amounts of training data to achieve high performances. To build a relation extractor without significant annotation effort, we can exploit cross-lingual annotation projection, which leverages parallel corpora as external resources for supervision. This paper proposes a novel graph-based projection approach and demonstrates the merits of it by using a Korean relation extraction system based on projected dataset from an English-Korean parallel corpus.",
"Information extraction (IE) systems are costly to build because they require development texts, parsing tools, and specialized dictionaries for each application domain and each natural language that needs to be processed. We present a novel method for rapidly creating IE systems for new languages by exploiting existing IE systems via cross-language projection. Given an IE system for a source language (e.g., English), we can transfer its annotations to corresponding texts in a target language (e.g., French) and learn information extraction rules for the new language automatically. In this paper, we explore several ways of realizing both the transfer and learning processes using off-the-shelf machine translation systems, induced word alignment, attribute projection, and transformation-based learning. We present a variety of experiments that show how an English IE system for a plane crash domain can be leveraged to automatically create a French IE system for the same domain.",
"Broad coverage, high quality parsers are available for only a handful of languages. A prerequisite for developing broad coverage parsers for more languages is the annotation of text with the desired linguistic representations (also known as “treebanking”). However, syntactic annotation is a labor intensive and time-consuming process, and it is difficult to find linguistically annotated text in sufficient quantities. In this article, we explore using parallel text to help solving the problem of creating syntactic annotation in more languages. The central idea is to annotate the English side of a parallel corpus, project the analysis to the second language, and then train a stochastic analyzer on the resulting noisy annotations. We discuss our background assumptions, describe an initial study on the “projectability” of syntactic relations, and then present two experiments in which stochastic parsers are developed with minimal human intervention via projection from English.",
"",
"This article considers the task of automatically inducing role-semantic annotations in the FrameNet paradigm for new languages. We propose a general framework that is based on annotation projection, phrased as a graph optimization problem. It is relatively inexpensive and has the potential to reduce the human effort involved in creating role-semantic resources. Within this framework, we present projection models that exploit lexical and syntactic information. We provide an experimental evaluation on an English-German parallel corpus which demonstrates the feasibility of inducing high-precision German semantic role annotation both for manually and automatically annotated English data."
]
} |
1503.06499 | 2949482333 | With the advent of GPS-enabled smartphones, an increasing number of users is actively sharing their location through a variety of applications and services. Along with the continuing growth of Location-Based Social Networks (LBSNs), security experts have increasingly warned the public of the dangers of exposing sensitive information such as personal location data. Most importantly, in addition to the geographical coordinates of the user's location, LBSNs allow easy access to an additional set of characteristics of that location, such as the venue type or popularity. In this paper, we investigate the role of location semantics in the identification of LBSN users. We simulate a scenario in which the attacker's goal is to reveal the identity of a set of LBSN users by observing their check-in activity. We then propose to answer the following question: what are the types of venues that a malicious user has to monitor to maximize the probability of success? Conversely, when should a user decide whether to make his or her check-in to a location public or not? We perform our study on more than 1 million check-ins distributed over 17 urban regions of the United States. Our analysis shows that different types of venues display different discriminative power in terms of user identity, with most of the venues in the "Residence" category providing the highest re-identification success across the urban regions. Interestingly, we also find that users with a high entropy of their check-ins distribution are not necessarily the hardest to identify, suggesting that it is the collective behaviour of the users' population that determines the complexity of the identification task, rather than the individual behaviour. | Location privacy has been a very active area of research in recent years. 
The importance of protecting information concerning a person's home location is highlighted, for example, in @cite_13 , where the authors show how data on the home/work location pair can be used to carry out inference attacks that reveal the identity of a user from an anonymized GPS trace. On the other hand, Krumm @cite_11 studies the inverse problem and shows that it is possible to infer the home location of a user participating in a database of GPS traces. More recently, the privacy of users making or receiving mobile phone calls or text messages has been measured @cite_8 : very few spatio-temporal points from a location trace are needed to uniquely identify the entire trace and thus the individual. | {
"cite_N": [
"@cite_8",
"@cite_13",
"@cite_11"
],
"mid": [
"2115240023",
"1536564267",
"2141854027"
],
"abstract": [
"We study fifteen months of human mobility data for one and a half million individuals and find that human mobility traces are highly unique. In fact, in a dataset where the location of an individual is specified hourly, and with a spatial resolution equal to that given by the carrier's antennas, four spatio-temporal points are enough to uniquely identify 95% of the individuals. We coarsen the data spatially and temporally to find a formula for the uniqueness of human mobility traces given their resolution and the available outside information. This formula shows that the uniqueness of mobility traces decays approximately as the 1/10 power of their resolution. Hence, even coarse datasets provide little anonymity. These findings represent fundamental constraints to an individual's privacy and have important implications for the design of frameworks and institutions dedicated to protect the privacy of individuals.",
"Many applications benefit from user location data, but location data raises privacy concerns. Anonymization can protect privacy, but identities can sometimes be inferred from supposedly anonymous data. This paper studies a new attack on the anonymity of location data. We show that if the approximate locations of an individual's home and workplace can both be deduced from a location trace, then the median size of the individual's anonymity set in the U.S. working population is 1, 21 and 34,980, for locations known at the granularity of a census block, census tract and county respectively. The location data of people who live and work in different regions can be re-identified even more easily. Our results show that the threat of re-identification for location data is much greater when the individual's home and work locations can both be deduced from the data. To preserve anonymity, we offer guidance for obfuscating location traces before they are disclosed.",
"Although the privacy threats and countermeasures associated with location data are well known, there has not been a thorough experiment to assess the effectiveness of either. We examine location data gathered from volunteer subjects to quantify how well four different algorithms can identify the subjects' home locations and then their identities using a freely available, programmable Web search engine. Our procedure can identify at least a small fraction of the subjects and a larger fraction of their home addresses. We then apply three different obscuration countermeasures designed to foil the privacy attacks: spatial cloaking, noise, and rounding. We show how much obscuration is necessary to maintain the privacy of all the subjects."
]
} |
1503.06499 | 2949482333 | With the advent of GPS-enabled smartphones, an increasing number of users is actively sharing their location through a variety of applications and services. Along with the continuing growth of Location-Based Social Networks (LBSNs), security experts have increasingly warned the public of the dangers of exposing sensitive information such as personal location data. Most importantly, in addition to the geographical coordinates of the user's location, LBSNs allow easy access to an additional set of characteristics of that location, such as the venue type or popularity. In this paper, we investigate the role of location semantics in the identification of LBSN users. We simulate a scenario in which the attacker's goal is to reveal the identity of a set of LBSN users by observing their check-in activity. We then propose to answer the following question: what are the types of venues that a malicious user has to monitor to maximize the probability of success? Conversely, when should a user decide whether to make his or her check-in to a location public or not? We perform our study on more than 1 million check-ins distributed over 17 urban regions of the United States. Our analysis shows that different types of venues display different discriminative power in terms of user identity, with most of the venues in the "Residence" category providing the highest re-identification success across the urban regions. Interestingly, we also find that users with a high entropy of their check-ins distribution are not necessarily the hardest to identify, suggesting that it is the collective behaviour of the users' population that determines the complexity of the identification task, rather than the individual behaviour. | With respect to traditional Location-Based Services (LBS), the additional social dimension of LBSNs works as an incentive for people to share their location data on the social network. 
Researchers have used LBSN data to study the spatio-temporal patterns of user activity @cite_3 and to build a model of human urban mobility in an attempt to predict the next visited location @cite_14 . Others collect a dataset of Foursquare check-ins over the cities of Cardiff and Cambridge in the UK, and measure the regularity and predictability of users' check-ins @cite_22 . They find that check-ins are more regular at "Home" and "Work" venues, as opposed to "Outdoors" venues, where check-ins are less predictable. | {
"cite_N": [
"@cite_14",
"@cite_22",
"@cite_3"
],
"mid": [
"",
"2061731042",
"7143572"
],
"abstract": [
"",
"Location-sharing services such as Foursquare provide a rich source of information about the visits of users to locations. In the case of Foursquare, users voluntarily ‘check in’ to places they visit using a mobile application. An analysis of these data may reveal differences in users' personality in terms of their mobility habits, preferred places, and action and location patterns. This knowledge about user behaviour can be used, in addition to information about their preferences, to improve current recommendation systems for mobile platforms.",
"We present a large-scale study of user behavior in Foursquare, conducted on a dataset of about 700 thousand users that spans a period of more than 100 days. We analyze user checkin dynamics, demonstrating how it reveals meaningful spatio-temporal patterns and offers the opportunity to study both user mobility and urban spaces. Our aim is to inform on how scientific researchers could utilise data generated in Location-based Social Networks to attain a deeper understanding of human mobility and how developers may take advantage of such systems to enhance applications such as recommender systems."
]
} |
1503.06497 | 2219115269 | The popularity of online social media platforms provides an unprecedented opportunity to study real-world complex networks of interactions. However, releasing this data to researchers and the public comes at the cost of potentially exposing private and sensitive user information. It has been shown that a naive anonymization of a network by removing the identity of the nodes is not sufficient to preserve users' privacy. In order to deal with malicious attacks, k-anonymity solutions have been proposed to partially obfuscate topological information that can be used to infer nodes' identity. In this paper, we study the problem of ensuring k-anonymity in time-varying graphs, i.e., graphs with a structure that changes over time, and multi-layer graphs, i.e., graphs with multiple types of links. More specifically, we examine the case in which the attacker has access to the degree of the nodes. The goal is to generate a new graph where, given the degree of a node in each (temporal) layer of the graph, such a node remains indistinguishable from other k-1 nodes in the graph. In order to achieve this, we find the optimal partitioning of the graph nodes such that the cost of anonymizing the degree information within each group is minimum. We show that this reduces to a special case of a Generalized Assignment Problem, and we propose a simple yet effective algorithm to solve it. Finally, we introduce an iterated linear programming approach to enforce the realizability of the anonymized degree sequences. The efficacy of the method is assessed through an extensive set of experiments on synthetic and real-world graphs. | The concept of @math -anonymity in the graph domain was introduced in @cite_29 @cite_19 , but it is only with Liu and Terzi @cite_4 that a first algorithm to construct a @math -anonymous graph is proposed. 
Their algorithm, however, is designed to work on static graphs: if applied to the temporal slices of a time-varying graph, it fails to take into account the additional information contained in the temporal dimension, i.e., the size of the anonymity groups in the temporal graph will be smaller than that of the individual slices. Moreover, their technique generally requires repeated anonymizations of the graph under increasing levels of structural noise, which is not computationally feasible when dealing with large time-varying graphs. A number of subsequent works proposed heuristics to reduce the total running time, thus making it feasible to anonymize large static social networks @cite_32 @cite_31 @cite_27 . | {
"cite_N": [
"@cite_4",
"@cite_29",
"@cite_32",
"@cite_19",
"@cite_27",
"@cite_31"
],
"mid": [
"1998091733",
"2119404697",
"5087905",
"",
"2952951121",
"2079289205"
],
"abstract": [
"The proliferation of network data in various application domains has raised privacy concerns for the individuals involved. Recent studies show that simply removing the identities of the nodes before publishing the graph social network data does not guarantee privacy. The structure of the graph itself, and in its basic form the degree of the nodes, can reveal the identities of individuals. To address this issue, we study a specific graph-anonymization problem. We call a graph k-degree anonymous if for every node v, there exist at least k-1 other nodes in the graph with the same degree as v. This definition of anonymity prevents the re-identification of individuals by adversaries with a priori knowledge of the degree of certain nodes. We formally define the graph-anonymization problem that, given a graph G, asks for the k-degree anonymous graph that stems from G with the minimum number of graph-modification operations. We devise simple and efficient algorithms for solving this problem. Our algorithms are based on principles related to the realizability of degree sequences. We apply our methods to a large spectrum of synthetic and real datasets and demonstrate their efficiency and practical utility.",
"Advances in technology have made it possible to collect data about individuals and the connections between them, such as email correspondence and friendships. Agencies and researchers who have collected such social network data often have a compelling interest in allowing others to analyze the data. However, in many cases the data describes relationships that are private (e.g., email correspondence) and sharing the data in full can result in unacceptable disclosures. In this paper, we present a framework for assessing the privacy risk of sharing anonymized network data. This includes a model of adversary knowledge, for which we consider several variants and make connections to known graph theoretical results. On several real-world social networks, we show that simple anonymization techniques are inadequate, resulting in substantial breaches of privacy for even modestly informed adversaries. We propose a novel anonymization technique based on perturbing the network and demonstrate empirically that it leads to substantial reduction of the privacy threat. We also analyze the eect that anonymizing the network has on the utility of the data for social network analysis.",
"Liu and Terzi proposed the notion of k-degree anonymity to address the problem of identity anonymization in graphs. A graph is k-degree anonymous if and only if each of its vertices has the same degree as that of, at least, k-1 other vertices. The anonymization problem is to transform a non-k-degree anonymous graph into a k-degree anonymous graph by adding or deleting a minimum number of edges.",
"",
"Motivated by a strongly growing interest in anonymizing social network data, we investigate the NP-hard Degree Anonymization problem: given an undirected graph, the task is to add a minimum number of edges such that the graph becomes k-anonymous. That is, for each vertex there have to be at least k-1 other vertices of exactly the same degree. The model of degree anonymization has been introduced by Liu and Terzi [ACM SIGMOD'08], who also proposed and evaluated a two-phase heuristic. We present an enhancement of this heuristic, including new algorithms for each phase which significantly improve on the previously known theoretical and practical running times. Moreover, our algorithms are optimized for large-scale social networks and provide upper and lower bounds for the optimal solution. Notably, on about 26% of the real-world data we provide (provably) optimal solutions; whereas in the other cases our upper bounds significantly improve on known heuristic solutions.",
"In this paper, we consider the problem of anonymization on large networks. There are some anonymization methods for networks, but most of them can not be applied on large networks because of their complexity. We present an algorithm for k-degree anonymity on large networks. Given a network G, we construct a k-degree anonymous network, G, by the minimum number of edge modifications. We devise a simple and efficient algorithm for solving this problem on large networks. Our algorithm uses univariate micro-aggregation to anonymize the degree sequence, and then it modifies the graph structure to meet the k-degree anonymous sequence. We apply our algorithm to a different large real datasets and demonstrate their efficiency and practical utility."
]
} |
1503.06497 | 2219115269 | The popularity of online social media platforms provides an unprecedented opportunity to study real-world complex networks of interactions. However, releasing this data to researchers and the public comes at the cost of potentially exposing private and sensitive user information. It has been shown that a naive anonymization of a network by removing the identity of the nodes is not sufficient to preserve users' privacy. In order to deal with malicious attacks, k-anonymity solutions have been proposed to partially obfuscate topological information that can be used to infer nodes' identity. In this paper, we study the problem of ensuring k-anonymity in time-varying graphs, i.e., graphs with a structure that changes over time, and multi-layer graphs, i.e., graphs with multiple types of links. More specifically, we examine the case in which the attacker has access to the degree of the nodes. The goal is to generate a new graph where, given the degree of a node in each (temporal) layer of the graph, such a node remains indistinguishable from other k-1 nodes in the graph. In order to achieve this, we find the optimal partitioning of the graph nodes such that the cost of anonymizing the degree information within each group is minimum. We show that this reduces to a special case of a Generalized Assignment Problem, and we propose a simple yet effective algorithm to solve it. Finally, we introduce an iterated linear programming approach to enforce the realizability of the anonymized degree sequences. The efficacy of the method is assessed through an extensive set of experiments on synthetic and real-world graphs. | @cite_7 considered a scenario in which the level of privacy concern varies across the nodes of a network, i.e., only a subset of the nodes of the network is anonymized. 
Other researchers, on the other hand, focused on stricter definitions of @math -anonymity, where the amount of structural information available to the attacker ranges from the immediate neighborhood of a node to the whole graph structure @cite_19 @cite_33 @cite_10 @cite_25 @cite_12 . However, it is worth noting that the more structural information we take into account during the anonymization process, the more noise we need to add to the original graph, and the less informative the resulting anonymized graph will be. | {
"cite_N": [
"@cite_33",
"@cite_7",
"@cite_19",
"@cite_10",
"@cite_25",
"@cite_12"
],
"mid": [
"2096296626",
"2116787228",
"",
"2128248866",
"2032186932",
"2153689444"
],
"abstract": [
"Recently, as more and more social network data has been published in one way or another, preserving privacy in publishing social network data becomes an important concern. With some local knowledge about individuals in a social network, an adversary may attack the privacy of some victims easily. Unfortunately, most of the previous studies on privacy preservation can deal with relational data only, and cannot be applied to social network data. In this paper, we take an initiative towards preserving privacy in social network data. We identify an essential type of privacy attacks: neighborhood attacks. If an adversary has some knowledge about the neighbors of a target victim and the relationship among the neighbors, the victim may be re-identified from a social network even if the victim's identity is preserved using the conventional anonymization techniques. We show that the problem is challenging, and present a practical solution to battle neighborhood attacks. The empirical study indicates that anonymized social networks generated by our method can still be used to answer aggregate network queries with high accuracy.",
"In recent years, concerns of privacy have become more prominent for social networks. Anonymizing a graph meaningfully is a challenging problem, as the original graph properties must be preserved as well as possible. We introduce a generalization of the degree anonymization problem posed by Liu and Terzi. In this problem, our goal is to anonymize a given subset of nodes while adding the fewest possible number of edges. The main contribution of this paper is an efficient algorithm for this problem by exploring its connection with the degree-constrained subgraph problem. Our experimental results show that our algorithm performs very well on many instances of social network data.",
"",
"The growing popularity of social networks has generated interesting data management and data mining problems. An important concern in the release of these data for study is their privacy, since social networks usually contain personal information. Simply removing all identifiable personal information (such as names and social security number) before releasing the data is insufficient. It is easy for an attacker to identify the target by performing different structural queries. In this paper we propose k-automorphism to protect against multiple structural attacks and develop an algorithm (called KM) that ensures k-automorphism. We also discuss an extension of KM to handle \"dynamic\" releases of the data. Extensive experiments show that the algorithm performs well in terms of protection it provides.",
"Serious concerns on privacy protection in social networks have been raised in recent years; however, research in this area is still in its infancy. The problem is challenging due to the diversity and complexity of graph data, on which an adversary can use many types of background knowledge to conduct an attack. One popular type of attacks as studied by pioneer work [2] is the use of embedding subgraphs. We follow this line of work and identify two realistic targets of attacks, namely, NodeInfo and LinkInfo. Our investigations show that k-isomorphism, or anonymization by forming k pairwise isomorphic subgraphs, is both sufficient and necessary for the protection. The problem is shown to be NP-hard. We devise a number of techniques to enhance the anonymization efficiency while retaining the data utility. A compound vertex ID mechanism is also introduced for privacy preservation over multiple data releases. The satisfactory performance on a number of real datasets, including HEP-Th, EUemail and LiveJournal, illustrates that the high symmetry of social networks is very helpful in mitigating the difficulty of the problem.",
"Recently, more and more social network data have been published in one way or another. Preserving privacy in publishing social network data becomes an important concern. With some local knowledge about individuals in a social network, an adversary may attack the privacy of some victims easily. Unfortunately, most of the previous studies on privacy preservation data publishing can deal with relational data only, and cannot be applied to social network data. In this paper, we take an initiative toward preserving privacy in social network data. Specifically, we identify an essential type of privacy attacks: neighborhood attacks. If an adversary has some knowledge about the neighbors of a target victim and the relationship among the neighbors, the victim may be re-identified from a social network even if the victim’s identity is preserved using the conventional anonymization techniques. To protect privacy against neighborhood attacks, we extend the conventional k-anonymity and l-diversity models from relational data to social network data. We show that the problems of computing optimal k-anonymous and l-diverse social networks are NP-hard. We develop practical solutions to the problems. The empirical study indicates that the anonymized social network data by our methods can still be used to answer aggregate network queries with high accuracy."
]
} |
1503.06489 | 2098519941 | We study student behavior and performance in two Massive Open Online Courses (MOOCs). In doing so, we present two frameworks by which video-watching clickstreams can be represented: one based on the sequence of events created, and another on the sequence of positions visited. With the event-based framework, we extract recurring subsequences of student behavior, which contain fundamental characteristics such as reflecting (i.e., repeatedly playing and pausing) and revising (i.e., plays and skip backs). We find that some of these behaviors are significantly associated with whether a user will be Correct on First Attempt (CFA) or not in answering quiz questions. With the position-based framework, we then devise models for performance. In evaluating these through CFA prediction, we find that three of them can substantially improve prediction quality in terms of accuracy and F1, which underlines the ability to relate behavior to performance. Since our prediction considers videos individually, these benefits also suggest that our models are useful in situations where there is limited training data, e.g., for early detection or in short courses. | MOOC studies. With the proliferation of MOOCs in recent years, there have been a number of analytical studies on these platforms. Some have focused on a more general analysis across all learning modes: @cite_0 @cite_10 studied learner engagement variation over time and across courses. Others have focused on specific modes; for forums, @cite_2 analyzed the decline in participation over 73 courses. Our work is fundamentally different from these works in that it explores the interplay between behavior in two modes: video and assessment. | {
"cite_N": [
"@cite_0",
"@cite_10",
"@cite_2"
],
"mid": [
"2051963836",
"",
"2122588206"
],
"abstract": [
"The Web has enabled one of the most visible recent developments in education---the deployment of massive open online courses. With their global reach and often staggering enrollments, MOOCs have the potential to become a major new mechanism for learning. Despite this early promise, however, MOOCs are still relatively unexplored and poorly understood. In a MOOC, each student's complete interaction with the course materials takes place on the Web, thus providing a record of learner activity of unprecedented scale and resolution. In this work, we use such trace data to develop a conceptual framework for understanding how users currently engage with MOOCs. We develop a taxonomy of individual behavior, examine the different behavioral patterns of high- and low-achieving students, and investigate how forum participation relates to other parts of the course. We also report on a large-scale deployment of badges as incentives for engagement in a MOOC, including randomized experiments in which the presentation of badges was varied across sub-populations. We find that making badges more salient produced increases in forum engagement.",
"",
"In massive open online courses (MOOCs), peer grading serves as a critical tool for scaling the grading of complex, open-ended assignments to courses with tens or hundreds of thousands of students. But despite promising initial trials, it does not always deliver accurate results compared to human experts. In this paper, we develop algorithms for estimating and correcting for grader biases and reliabilities, showing significant improvement in peer grading accuracy on real data with 63,199 peer grades from Coursera's HCI course offerings --- the largest peer grading networks analysed to date. We relate grader biases and reliabilities to other student factors such as student engagement, performance as well as commenting style. We also show that our model can lead to more intelligent assignment of graders to gradees."
]
} |
1503.06115 | 1536141561 | This paper presents Riposte, a new system for anonymous broadcast messaging. Riposte is the first such system, to our knowledge, that simultaneously protects against traffic-analysis attacks, prevents anonymous denial-of-service by malicious clients, and scales to million-user anonymity sets. To achieve these properties, Riposte makes novel use of techniques used in systems for private information retrieval and secure multi-party computation. For latency-tolerant workloads with many more readers than writers (e.g. Twitter, Wikileaks), we demonstrate that a three-server Riposte cluster can build an anonymity set of 2,895,216 users in 32 hours. | Aqua @cite_24 , Crowds @cite_26 , LAP @cite_28 , ShadowWalker @cite_68 , Tarzan @cite_33 , and Tor @cite_34 belong to the first category of systems: they provide an anonymous proxy for real-time Web browsing, but they do not protect against an adversary who controls the network, many of the clients, and some of the nodes on a victim's path through the network. Even providing a formal definition of anonymity for low-latency systems is challenging @cite_54 and such definitions typically do not capture the need to protect against timing attacks. | {
"cite_N": [
"@cite_26",
"@cite_33",
"@cite_28",
"@cite_54",
"@cite_34",
"@cite_24",
"@cite_68"
],
"mid": [
"1978884755",
"2163674653",
"2011441851",
"2280235587",
"1655958391",
"1975016298",
"2052518690"
],
"abstract": [
"In this paper we introduce a system called Crowds for protecting users' anonymity on the world-wide-web. Crowds, named for the notion of “blending into a crowd,” operates by grouping users into a large and geographically diverse group (crowd) that collectively issues requests on behalf of its members. Web servers are unable to learn the true source of a request because it is equally likely to have originated from any member of the crowd, and even collaborating crowd members cannot distinguish the originator of a request from a member who is merely forwarding the request on behalf of another. We describe the design, implementation, security, performance, and scalability of our system. Our security analysis introduces degrees of anonymity as an important tool for describing and proving anonymity properties.",
"Tarzan is a peer-to-peer anonymous IP network overlay. Because it provides IP service, Tarzan is general-purpose and transparent to applications. Organized as a decentralized peer-to-peer overlay, Tarzan is fault-tolerant, highly scalable, and easy to manage.Tarzan achieves its anonymity with layered encryption and multi-hop routing, much like a Chaumian mix. A message initiator chooses a path of peers pseudo-randomly through a restricted topology in a way that adversaries cannot easily influence. Cover traffic prevents a global observer from using traffic analysis to identify an initiator. Protocols toward unbiased peer-selection offer new directions for distributing trust among untrusted entities.Tarzan provides anonymity to either clients or servers, without requiring that both participate. In both cases, Tarzan uses a network address translator (NAT) to bridge between Tarzan hosts and oblivious Internet hosts.Measurements show that Tarzan imposes minimal overhead over a corresponding non-anonymous overlay route.",
"Popular anonymous communication systems often require sending packets through a sequence of relays on dilated paths for strong anonymity protection. As a result, increased end-to-end latency renders such systems inadequate for the majority of Internet users who seek an intermediate level of anonymity protection while using latency-sensitive applications, such as Web applications. This paper serves to bridge the gap between communication systems that provide strong anonymity protection but with intolerable latency and non-anonymous communication systems by considering a new design space for the setting. More specifically, we explore how to achieve near-optimal latency while achieving an intermediate level of anonymity with a weaker yet practical adversary model (i.e., protecting an end-host's identity and location from servers) such that users can choose between the level of anonymity and usability. We propose Lightweight Anonymity and Privacy (LAP), an efficient network-based solution featuring lightweight path establishment and stateless communication, by concealing an end-host's topological location to enhance anonymity against remote tracking. To show practicality, we demonstrate that LAP can work on top of the current Internet and proposed future Internet architectures.",
"",
"We present Tor, a circuit-based low-latency anonymous communication service. This second-generation Onion Routing system addresses limitations in the original design by adding perfect forward secrecy, congestion control, directory servers, integrity checking, configurable exit policies, and a practical design for location-hidden services via rendezvous points. Tor works on the real-world Internet, requires no special privileges or kernel modifications, requires little synchronization or coordination between nodes, and provides a reasonable tradeoff between anonymity, usability, and efficiency. We briefly describe our experiences with an international network of more than 30 nodes. We close with a list of open problems in anonymous communication.",
"Existing IP anonymity systems tend to sacrifice one of low latency, high bandwidth, or resistance to traffic-analysis. High-latency mix-nets like Mixminion batch messages to resist traffic-analysis at the expense of low latency. Onion routing schemes like Tor deliver low latency and high bandwidth, but are not designed to withstand traffic analysis. Designs based on DC-nets or broadcast channels resist traffic analysis and provide low latency, but are limited to low bandwidth communication. In this paper, we present the design, implementation, and evaluation of Aqua, a high-bandwidth anonymity system that resists traffic analysis. We focus on providing strong anonymity for BitTorrent, and evaluate the performance of Aqua using traces from hundreds of thousands of actual BitTorrent users. We show that Aqua achieves latency low enough for efficient bulk TCP flows, bandwidth sufficient to carry BitTorrent traffic with reasonable efficiency, and resistance to traffic analysis within anonymity sets of hundreds of clients. We conclude that Aqua represents an interesting new point in the space of anonymity network designs.",
"Peer-to-peer approaches to anonymous communication promise to eliminate the scalability concerns and central vulnerability points of current networks such as Tor. However, the P2P setting introduces many new opportunities for attack, and previous designs do not provide an adequate level of anonymity. We propose ShadowWalker: a new low-latency P2P anonymous communication system, based on a random walk over a redundant structured topology. We base our design on shadows that redundantly check and certify neighbor information; these certifications enable nodes to perform random walks over the structured topology while avoiding route capture and other attacks. We analytically calculate the anonymity provided by ShadowWalker and show that it performs well for moderate levels of attackers, and is much better than the state of the art. We also design an extension that improves forwarding performance at a slight anonymity cost, while at the same time protecting against selective DoS attacks. We show that our system has manageable overhead and can handle moderate churn, making it an attractive new design for P2P anonymous communication."
]
} |
1503.06115 | 1536141561 | This paper presents Riposte, a new system for anonymous broadcast messaging. Riposte is the first such system, to our knowledge, that simultaneously protects against traffic-analysis attacks, prevents anonymous denial-of-service by malicious clients, and scales to million-user anonymity sets. To achieve these properties, Riposte makes novel use of techniques used in systems for private information retrieval and secure multi-party computation. For latency-tolerant workloads with many more readers than writers (e.g. Twitter, Wikileaks), we demonstrate that a three-server Riposte cluster can build an anonymity set of 2,895,216 users in 32 hours. | David Chaum's "cascade" mix networks were one of the first systems devised with the specific goal of defending against traffic-analysis attacks @cite_65 . Since then, a number of mix-net-style systems have been proposed, many of which explicitly weaken their protections against a near-omnipresent adversary @cite_64 to improve prospects for practical usability (i.e., for email traffic) @cite_35 . In contrast, Riposte attempts to provide very strong anonymity guarantees at the price of usability for interactive applications. | {
"cite_N": [
"@cite_35",
"@cite_64",
"@cite_65"
],
"mid": [
"2150248082",
"1486928190",
"2103647628"
],
"abstract": [
"We present Mixminion, a message-based anonymous remailer protocol with secure single-use reply blocks. Mix nodes cannot distinguish Mixminion forward messages from reply messages, so forward and reply messages share the same anonymity set. We add directory servers that allow users to learn public keys and performance statistics of participating remailers, and we describe nymservers that provide long-term pseudonyms using single-use reply blocks as a primitive. Our design integrates link encryption between remailers to provide forward anonymity. Mixminion works in a real-world Internet environment, requires little synchronization or coordination between nodes, and protects against known anonymity-breaking attacks as well as or better than other systems with similar design parameters.",
"The previous talk was about trying to get entropy, and I’m going to talk about ignoring entropy. It’s natural to talk about anonymity in a workshop about brief encounters and security protocols, I would say anonymity is the quintessence of brief encounter, you only have a brief encounter if in fact you are anonymous, so you need to be anonymous to guarantee that it is brief, otherwise it is at best pseudonymous, because you’re preserving state from one instance of communication to another.",
"A technique based on public key cryptography is presented that allows an electronic mail system to hide who a participant communicates with as well as the content of the communication - in spite of an unsecured underlying telecommunication system. The technique does not require a universally trusted authority. One correspondent can remain anonymous to a second, while allowing the second to respond via an untraceable return address. The technique can also be used to form rosters of untraceable digital pseudonyms from selected applications. Applicants retain the exclusive ability to form digital signatures corresponding to their pseudonyms. Elections in which any interested party can verify that the ballots have been properly counted are possible if anonymously mailed ballots are signed with pseudonyms from a roster of registered voters. Another use allows an individual to correspond with a record-keeping organization under a unique pseudonym, which appears in a roster of acceptable clients."
]
} |
1503.06115 | 1536141561 | This paper presents Riposte, a new system for anonymous broadcast messaging. Riposte is the first such system, to our knowledge, that simultaneously protects against traffic-analysis attacks, prevents anonymous denial-of-service by malicious clients, and scales to million-user anonymity sets. To achieve these properties, Riposte makes novel use of techniques used in systems for private information retrieval and secure multi-party computation. For latency-tolerant workloads with many more readers than writers (e.g. Twitter, Wikileaks), we demonstrate that a three-server Riposte cluster can build an anonymity set of 2,895,216 users in 32 hours. | E-voting systems (also called "verifiable shuffles") achieve the sort of privacy properties that Riposte offers, and some systems even provide stronger voting-specific guarantees (receipt-freeness, proportionality, etc.), though most e-voting systems cannot provide the forward security property that Riposte offers @cite_17 @cite_46 @cite_69 @cite_53 @cite_36 @cite_74 @cite_6 . | {
"cite_N": [
"@cite_69",
"@cite_36",
"@cite_53",
"@cite_6",
"@cite_74",
"@cite_46",
"@cite_17"
],
"mid": [
"2101770573",
"1608539542",
"1801339841",
"1600530176",
"2145801920",
"1522388518",
"40134741"
],
"abstract": [
"We present a mathematical construct which provides a cryptographic protocol to verifiably shuffle a sequence of k modular integers, and discuss its application to secure, universally verifiable, multi-authority election schemes. The output of the shuffle operation is another sequence of k modular integers, each of which is the same secret power of a corresponding input element, but the order of elements in the output is kept secret. Though it is a trivial matter for the \"shuffler\" (who chooses the permutation of the elements to be applied) to compute the output from the input, the construction is important because it provides a linear size proof of correctness for the output sequence (i.e. a proof that it is of the form claimed) that can be checked by an arbitrary verifiers. The complexity of the protocol improves on that of Furukawa-Sako[16] both measured by number of exponentiations and by overall size.The protocol is shown to be honest-verifier zeroknowledge in a special case, and is computational zeroknowledge in general. On the way to the final result, we also construct a generalization of the well known Chaum-Pedersen protocol for knowledge of discrete logarithm equality [10], [7]. In fact, the generalization specializes exactly to the Chaum-Pedersen protocol in the case k = 2. This result may be of interest on its own.An application to electronic voting is given that matches the features of the best current protocols with significant efficiency improvements. An alternative application to electronic voting is also given that introduces an entirely new paradigm for achieving Universally Verifiable elections.",
"In this paper, we propose a scheme to simultaneously prove the correctness of both shuffling and decryption. Our scheme is the most efficient of all previous schemes, as a total, in proving the correctness of both shuffling and decryption of ElGamal ciphertexts. We also propose a formal definition for the core requirement of unlinkability in verifiable shuffle-decryption, and then prove that our scheme satisfies this requirement. The proposed definition may be also useful for proving the security of verifiable shuffle-decryption, hybrid mix network, and other mix-nets.",
"A shuffle is a permutation and rerandomization of a set of ciphertexts. Among other things, it can be used to construct mix-nets that are used in anonymization protocols and voting schemes. While shuffling is easy, it is hard for an outsider to verify that a shuffle has been performed correctly. We suggest two efficient honest verifier zero-knowledge (HVZK) arguments for correctness of a shuffle. Our goal is to minimize round-complexity and at the same time have low communicational and computational complexity. The two schemes we suggest are both 3-move HVZK arguments for correctness of a shuffle. We first suggest a HVZK argument based on homomorphic integer commitments, and improve both on round complexity, communication complexity and computational complexity in comparison with state of the art. The second HVZK argument is based on homomorphic commitments over finite fields. Here we improve on the computational complexity and communication complexity when shuffling large ciphertexts.",
"We describe how to use Rabin’s “split-value” representations, originally developed for use in secure auctions, to efficiently implement end-to-end verifiable voting. We propose a simple and very elegant combination of split-value representations with “randomized partial checking” (due to [16]).",
"A shuffle consists of a permutation and re-encryption of a set of input ciphertexts. One application of shuffles is to build mix-nets. We suggest an honest verifier zero-knowledge argument for the correctness of a shuffle of homomorphic encryptions. Our scheme is more efficient than previous schemes both in terms of communication and computation. The honest verifier zero-knowledge argument has a size that is independent of the actual cryptosystem being used and will typically be smaller than the size of the shuffle itself. Moreover, our scheme is well suited for the use of multi-exponentiation and batch-verification techniques. Additionally, we suggest a more efficient honest verifier zero-knowledge argument for a commitment containing a permutation of a set of publicly known messages. We also suggest an honest verifier zero-knowledge argument for the correctness of a combined shuffle-and-decrypt operation that can be used in connection with decrypting mix-nets based on ElGamal encryption. All our honest verifier zero-knowledge arguments can be turned into honest verifier zero-knowledge proofs. We use homomorphic commitments as an essential part of our schemes. When the commitment scheme is statistically hiding we obtain statistical honest verifier zero-knowledge arguments; when the commitment scheme is statistically binding, we obtain computational honest verifier zero-knowledge proofs.",
"",
"Voting with cryptographic auditing, sometimes called open-audit voting, has remained, for the most part, a theoretical endeavor. In spite of dozens of fascinating protocols and recent ground-breaking advances in the field, there exist only a handful of specialized implementations that few people have experienced directly. As a result, the benefits of cryptographically audited elections have remained elusive. We present Helios, the first web-based, open-audit voting system. Helios is publicly accessible today: anyone can create and run an election, and any willing observer can audit the entire process. Helios is ideal for on-line software communities, local clubs, student government, and other environments where trustworthy, secret-ballot elections are required but coercion is not a serious concern. With Helios, we hope to expose many to the power of open-audit elections."
]
} |
1503.06115 | 1536141561 | This paper presents Riposte, a new system for anonymous broadcast messaging. Riposte is the first such system, to our knowledge, that simultaneously protects against traffic-analysis attacks, prevents anonymous denial-of-service by malicious clients, and scales to million-user anonymity sets. To achieve these properties, Riposte makes novel use of techniques used in systems for private information retrieval and secure multi-party computation. For latency-tolerant workloads with many more readers than writers (e.g. Twitter, Wikileaks), we demonstrate that a three-server Riposte cluster can build an anonymity set of 2,895,216 users in 32 hours. | For example, the verifiable shuffle protocol of Bayer and Groth @cite_38 is one of the most efficient in the literature. Their shuffle implementation, when used with an anonymity set of size @math , requires @math group exponentiations per server and data transfer @math . In addition, messages must be small enough to be encoded in single group elements (a few hundred bytes at most). In contrast, our protocol requires @math AES operations and data transfer @math , where @math is the size of the database table. When messages are short and when the writer-to-reader ratio is high, the Bayer-Groth mix may be faster than our system. In contrast, when messages are long and when the writer-to-reader ratio is low (i.e., @math ), our system is faster. | {
"cite_N": [
"@cite_38"
],
"mid": [
"111294696"
],
"abstract": [
"Mix-nets are used in e-voting schemes and other applications that require anonymity. Shuffles of homomorphic encryptions are often used in the construction of mix-nets. A shuffle permutes and re-encrypts a set of ciphertexts, but as the plaintexts are encrypted it is not possible to verify directly whether the shuffle operation was done correctly or not. Therefore, to prove the correctness of a shuffle it is often necessary to use zero-knowledge arguments. We propose an honest verifier zero-knowledge argument for the correctness of a shuffle of homomorphic encryptions. The suggested argument has sublinear communication complexity that is much smaller than the size of the shuffle itself. In addition the suggested argument matches the lowest computation cost for the verifier compared to previous work and also has an efficient prover. As a result our scheme is significantly more efficient than previous zero-knowledge schemes in literature. We give performance measures from an implementation where the correctness of a shuffle of 100,000 ElGamal ciphertexts is proved and verified in around 2 minutes."
]
} |
1503.06115 | 1536141561 | This paper presents Riposte, a new system for anonymous broadcast messaging. Riposte is the first such system, to our knowledge, that simultaneously protects against traffic-analysis attacks, prevents anonymous denial-of-service by malicious clients, and scales to million-user anonymity sets. To achieve these properties, Riposte makes novel use of techniques used in systems for private information retrieval and secure multi-party computation. For latency-tolerant workloads with many more readers than writers (e.g. Twitter, Wikileaks), we demonstrate that a three-server Riposte cluster can build an anonymity set of 2,895,216 users in 32 hours. | Chaum's Dining Cryptographers network (DC-net) is an information-theoretically secure anonymous broadcast channel @cite_9 . A DC-net provides the same strong anonymity properties as Riposte does, but it requires every user of a DC-net to participate in every run of the protocol. As the number of users grows, this quickly becomes impractical. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2087811006"
],
"abstract": [
"Keeping confidential who sends which messages, in a world where any physical transmission can be traced to its origin, seems impossible. The solution presented here is unconditionally or cryptographically secure, depending on whether it is based on one-time-use keys or on public keys, respectively. It can be adapted to address efficiently a wide variety of practical considerations."
]
} |
1503.06115 | 1536141561 | This paper presents Riposte, a new system for anonymous broadcast messaging. Riposte is the first such system, to our knowledge, that simultaneously protects against traffic-analysis attacks, prevents anonymous denial-of-service by malicious clients, and scales to million-user anonymity sets. To achieve these properties, Riposte makes novel use of techniques used in systems for private information retrieval and secure multi-party computation. For latency-tolerant workloads with many more readers than writers (e.g. Twitter, Wikileaks), we demonstrate that a three-server Riposte cluster can build an anonymity set of 2,895,216 users in 32 hours. | The Dissent @cite_86 system introduced the idea of using partially trusted servers to make DC-nets practical in distributed networks. Dissent requires weaker trust assumptions than our three-server protocol does, but it requires clients to send @math bits to each server per time epoch (compared with our @math ). Also, excluding a single disruptor in a 1,000-client deployment takes over an hour. In contrast, Riposte can exclude disruptors as fast as it processes write requests (tens to hundreds per second, depending on the database size). Recent work @cite_87 uses zero-knowledge techniques to speed up disruption resistance in Dissent (building on ideas of Golle and Juels @cite_19 ). Unfortunately, these techniques limit the system's end-to-end throughput to 30 KB/s, compared with Riposte's 450+ MB/s. | {
"cite_N": [
"@cite_19",
"@cite_86",
"@cite_87"
],
"mid": [
"1952958290",
"2099858845",
"2200869402"
],
"abstract": [
"Dining cryptographers networks (or DC-nets) are a privacy-preserving primitive devised by Chaum for anonymous message publication. A very attractive feature of the basic DC-net is its non-interactivity. Subsequent to key establishment, players may publish their messages in a single broadcast round, with no player-to-player communication. This feature is not possible in other privacy-preserving tools like mixnets. A drawback to DC-nets, however, is that malicious players can easily jam them, i.e., corrupt or block the transmission of messages from honest parties, and may do so without being traced.",
"Current anonymous communication systems make a trade-off between weak anonymity among many nodes, via onion routing, and strong anonymity among few nodes, via DC-nets. We develop novel techniques in Dissent, a practical group anonymity system, to increase by over two orders of magnitude the scalability of strong, traffic analysis resistant approaches. Dissent derives its scalability from a client server architecture, in which many unreliable clients depend on a smaller and more robust, but administratively decentralized, set of servers. Clients trust only that at least one server in the set is honest, but need not know or choose which server to trust. Unlike the quadratic costs of prior peer-to-peer DC-nets schemes, Dissent's client server design makes communication and processing costs linear in the number of clients, and hence in anonymity set size. Further, Dissent's servers can unilaterally ensure progress, even if clients respond slowly or disconnect at arbitrary times, ensuring robustness against client churn, tail latencies, and DoS attacks. On DeterLab, Dissent scales to 5,000 online participants with latencies as low as 600 milliseconds for 600-client groups. An anonymous Web browsing application also shows that Dissent's performance suffices for interactive communication within smaller local-area groups.",
"Among anonymity systems, DC-nets have long held attraction for their resistance to traffic analysis attacks, but practical implementations remain vulnerable to internal disruption or \"jamming\" attacks, which require time-consuming detection procedures to resolve. We present Verdict, the first practical anonymous group communication system built using proactively verifiable DC-nets: participants use public-key cryptography to construct DC-net ciphertexts, and use zero-knowledge proofs of knowledge to detect and exclude misbehavior before disruption. We compare three alternative constructions for verifiable DC-nets: one using bilinear maps and two based on simpler ElGamal encryption. While verifiable DC-nets incur higher computational overheads due to the public-key cryptography involved, our experiments suggest that Verdict is practical for anonymous group messaging or microblogging applications, supporting groups of 100 clients at 1 second per round or 1000 clients at 10 seconds per round. Furthermore, we show how existing symmetric-key DC-nets can \"fall back\" to a verifiable DC-net to quickly identify misbehavior, speeding up previous detections schemes by two orders of magnitude."
]
} |
1503.06115 | 1536141561 | This paper presents Riposte, a new system for anonymous broadcast messaging. Riposte is the first such system, to our knowledge, that simultaneously protects against traffic-analysis attacks, prevents anonymous denial-of-service by malicious clients, and scales to million-user anonymity sets. To achieve these properties, Riposte makes novel use of techniques used in systems for private information retrieval and secure multi-party computation. For latency-tolerant workloads with many more readers than writers (e.g. Twitter, Wikileaks), we demonstrate that a three-server Riposte cluster can build an anonymity set of 2,895,216 users in 32 hours. | Herbivore scales DC-nets by dividing users into many small anonymity sets @cite_81 . Riposte creates a single large anonymity set, and thus enables every client to be anonymous amongst the entire set of honest clients. | {
"cite_N": [
"@cite_81"
],
"mid": [
"1834982738"
],
"abstract": [
"Anonymity is increasingly important for networked applications amidst concerns over censorship and privacy. In this paper, we describe Herbivore, a peer-to-peer, scalable, tamper-resilient communication system that provides provable anonymity and privacy. Building on dining cryptographer networks, Herbivore scales by partitioning the network into anonymizing cliques. Adversaries able to monitor all network traffic cannot deduce the identity of a sender or receiver beyond an anonymizing clique. In addition to strong anonymity, Herbivore simultaneously provides high efficiency and scalability, distinguishing it from other anonymous communication protocols. Performance measurements from a prototype implementation show that the system can achieve high bandwidths and low latencies when deployed over the Internet."
]
} |
1503.06115 | 1536141561 | This paper presents Riposte, a new system for anonymous broadcast messaging. Riposte is the first such system, to our knowledge, that simultaneously protects against traffic-analysis attacks, prevents anonymous denial-of-service by malicious clients, and scales to million-user anonymity sets. To achieve these properties, Riposte makes novel use of techniques used in systems for private information retrieval and secure multi-party computation. For latency-tolerant workloads with many more readers than writers (e.g. Twitter, Wikileaks), we demonstrate that a three-server Riposte cluster can build an anonymity set of 2,895,216 users in 32 hours. | Our DPF constructions make extensive use of prior work on private information retrieval (PIR) @cite_20 @cite_48 @cite_30 @cite_60 . Recent work demonstrates that it is possible to make theoretical PIR fast enough for practical use @cite_80 @cite_51 @cite_56 . Function secret sharing @cite_78 generalizes DPFs to allow sharing of more sophisticated functions (rather than just point functions). This more powerful primitive may prove useful for PIR and anonymous messaging applications in the future. | {
"cite_N": [
"@cite_30",
"@cite_78",
"@cite_60",
"@cite_48",
"@cite_56",
"@cite_80",
"@cite_51",
"@cite_20"
],
"mid": [
"171567834",
"644599125",
"139740867",
"74041048",
"",
"2136631923",
"2105037262",
"2073346043"
],
"abstract": [
"Alice wants to query a database but she does not want the database to learn what she is querying. She can ask for the entire database. Can she get her query answered with less communication? One model of this problem is Private Information Retrieval , henceforth PIR. We survey results obtained about the PIR model including partial answers to the following questions. (1) What if there are k non-communicating copies of the database but they are computationally unbounded? (2) What if there is only one copy of the database and it is computationally bounded?",
"Motivated by the goal of securely searching and updating distributed data, we introduce and study the notion of function secret sharing (FSS). This new notion is a natural generalization of distributed point functions (DPF), a primitive that was recently introduced by Gilboa and Ishai (Eurocrypt 2014). Given a positive integer p ≥ 2 and a class F of functions f: {0,1}^n → G, where G is an Abelian group, a p-party FSS scheme for F allows one to split each f ∈ F into p succinctly described functions f_i: {0,1}^n → G, 1 ≤ i ≤ p, such that: (1) ∑_{i=1}^p f_i = f, and (2) any strict subset of the f_i hides f. Thus, an FSS for F can be thought of as a method for succinctly performing an “additive secret sharing” of functions from F. The original definition of DPF coincides with a two-party FSS for the class of point functions, namely the class of functions that have a nonzero output on at most one input.",
"For x, y ∈ {0,1}*, the point function P_{x,y} is defined by P_{x,y}(x) = y and P_{x,y}(x′) = 0^{|y|} for all x′ ≠ x. We introduce the notion of a distributed point function (DPF), which is a keyed function family F_k with the following property. Given x, y specifying a point function, one can efficiently generate a key pair (k_0, k_1) such that: (1) F_{k_0} ⊕ F_{k_1} = P_{x,y}, and (2) each of k_0 and k_1 hides x and y. Our main result is an efficient construction of a DPF under the (minimal) assumption that a one-way function exists.",
"",
"",
"The goal of Private Information Retrieval (PIR) is the ability to query a database successfully without the operator of the database server discovering which record(s) of the database the querier is interested in. There are two main classes of PIR protocols: those that provide privacy guarantees based on the computational limitations of servers (CPIR) and those that rely on multiple servers not colluding for privacy (IT-PIR). These two classes have different advantages and disadvantages that make them more or less attractive to designers of PIR-enabled privacy enhancing technologies.",
"Since 1995, much work has been done creating protocols for private information retrieval (PIR). Many variants of the basic PIR model have been proposed, including such modifications as computational vs. information-theoretic privacy protection, correctness in the face of servers that fail to respond or that respond incorrectly, and protection of sensitive data against the database servers themselves. In this paper, we improve on the robustness of PIR in a number of ways. First, we present a Byzantine-robust PIR protocol which provides information-theoretic privacy protection against coalitions of up to all but one of the responding servers, improving the previous result by a factor of 3. In addition, our protocol allows for more of the responding servers to return incorrect information while still enabling the user to compute the correct result. We then extend our protocol so that queries have information-theoretic protection if a limited number of servers collude, as before, but still retain computational protection if they all collude. We also extend the protocol to provide information-theoretic protection to the contents of the database against collusions of limited numbers of the database servers, at no additional communication cost or increase in the number of servers. All of our protocols retrieve a block of data with communication cost only O(ℓ) times the size of the block, where ℓ is the number of servers. Finally, we discuss our implementation of these protocols, and measure their performance in order to determine their practicality.",
"Publicly accessible databases are an indispensable resource for retrieving up-to-date information. But they also pose a significant risk to the privacy of the user, since a curious database operator can follow the user's queries and infer what the user is after. Indeed, in cases where the users' intentions are to be kept secret, users are often cautious about accessing the database. It can be shown that when accessing a single database, to completely guarantee the privacy of the user, the whole database should be downloaded; namely n bits should be communicated (where n is the number of bits in the database). In this work, we investigate whether by replicating the database, more efficient solutions to the private retrieval problem can be obtained. We describe schemes that enable a user to access k replicated copies of a database (k ≥ 2) and privately retrieve information stored in the database. This means that each individual server (holding a replicated copy of the database) gets no information on the identity of the item retrieved by the user. Our schemes use the replication to gain substantial saving. In particular, we present a two-server scheme with communication complexity O(n^{1/3})."
]
} |
1503.06115 | 1536141561 | This paper presents Riposte, a new system for anonymous broadcast messaging. Riposte is the first such system, to our knowledge, that simultaneously protects against traffic-analysis attacks, prevents anonymous denial-of-service by malicious clients, and scales to million-user anonymity sets. To achieve these properties, Riposte makes novel use of techniques used in systems for private information retrieval and secure multi-party computation. For latency-tolerant workloads with many more readers than writers (e.g. Twitter, Wikileaks), we demonstrate that a three-server Riposte cluster can build an anonymity set of 2,895,216 users in 32 hours. | @cite_55 consider symmetric PIR protocols, in which the servers prevent dishonest clients from learning about more than a single row of the database per query. The problem that they consider is, in a way, the dual of the problem we address in , though their techniques do not appear to apply directly in our setting. | {
"cite_N": [
"@cite_55"
],
"mid": [
"2065265824"
],
"abstract": [
"Private information retrieval (PIR) schemes allow a user to retrieve the ith bit of an n-bit data string x, replicated in k ≥ 2 databases (in the information-theoretic setting) or in k ≥ 1 databases (in the computational setting), while keeping the value of i private. The main cost measure for such a scheme is its communication complexity. In this paper we introduce a model of symmetrically-private information retrieval (SPIR), where the privacy of the data, as well as the privacy of the user, is guaranteed. That is, in every invocation of a SPIR protocol, the user learns only a single physical bit of x and no other information about the data. Previously known PIR schemes severely fail to meet this goal. We show how to transform PIR schemes into SPIR schemes (with information-theoretic privacy), paying a constant factor in communication complexity. To this end, we introduce and utilize a new cryptographic primitive, called conditional disclosure of secrets, which we believe may be a useful building block for the design of other cryptographic protocols. In particular, we get a k-database SPIR scheme of complexity O(n^{1/(2k−1)}) for every constant k ≥ 2 and an O(log n)-database SPIR scheme of complexity O(log² n · log log n). All our schemes require only a single round of interaction, and are resilient to any dishonest behavior of the user. These results also yield the first implementation of a distributed version of (n choose 1)-OT (1-out-of-n oblivious transfer) with information-theoretic security and sublinear communication complexity."
]
} |
1503.06115 | 1536141561 | This paper presents Riposte, a new system for anonymous broadcast messaging. Riposte is the first such system, to our knowledge, that simultaneously protects against traffic-analysis attacks, prevents anonymous denial-of-service by malicious clients, and scales to million-user anonymity sets. To achieve these properties, Riposte makes novel use of techniques used in systems for private information retrieval and secure multi-party computation. For latency-tolerant workloads with many more readers than writers (e.g. Twitter, Wikileaks), we demonstrate that a three-server Riposte cluster can build an anonymity set of 2,895,216 users in 32 hours. | Pynchon Gate @cite_16 builds a private point-to-point messaging system from mix-nets and PIR. Clients anonymously upload messages to email servers using a traditional mix-net and download messages from the email servers using a PIR protocol. Riposte could replace the mix-nets used in the Pynchon Gate system: clients could anonymously write their messages into the database using Riposte and could privately read incoming messages using PIR. | {
"cite_N": [
"@cite_16"
],
"mid": [
"2134698082"
],
"abstract": [
"We describe the Pynchon Gate, a practical pseudonymous message retrieval system. Our design uses a simple distributed-trust private information retrieval protocol to prevent adversaries from linking recipients to their pseudonyms, even when some of the infrastructure has been compromised. This approach resists global traffic analysis significantly better than existing deployed pseudonymous email solutions, at the cost of additional bandwidth. We examine security concerns raised by our model, and propose solutions."
]
} |
1503.06171 | 1923408327 | The short-term forecasting of real-time locational marginal price (LMP) and network congestion is considered from a system operator perspective. A new probabilistic forecasting technique is proposed based on a multiparametric programming formulation that partitions the uncertainty parameter space into critical regions from which the conditional probability distribution of the real-time LMP congestion is obtained. The proposed method incorporates load generation forecast, time varying operation constraints, and contingency models. By shifting the computation cost associated with multiparametric programs offline, the online computation cost is significantly reduced. An online simulation technique by generating critical regions dynamically is also proposed, which results in several orders of magnitude improvement in the computational cost over standard Monte Carlo methods. | There are several prior studies on LMP congestion forecasting from the system operator perspective. The proposed technique in @cite_16 employs an online Monte Carlo sampling technique that, for each Monte Carlo sample path, solves an optimal power flow (OPF) problem, which is computationally expensive. Monte Carlo technique was also used in @cite_3 where a reduction of the random variable dimension is made using a nonhomogeneous Markov chain model based on a partition of the system state space. | {
"cite_N": [
"@cite_16",
"@cite_3"
],
"mid": [
"2165594353",
"2010590907"
],
"abstract": [
"This paper introduces a probabilistic method for short-term transmission congestion forecasting, which is recently developed by EPRI. The proposed method applies the sequential Monte Carlo simulation (MCS) in a probabilistic load flow as the conceptual framework, adds all the significant uncertainties and their probability distributions to be modeled, develops the models, and most importantly specifies how to accurately model the key input assumptions in order to derive valid confidence levels of the forecasted congestion variables. The developed probabilistic method is successfully applied to the four-area WECC equivalent system. Focus is on the confidence levels of making such forecasts, so that a window of forecast-ability is defined, beyond which any forecast would be considered to contain little actionable information. Within the window of forecast-ability, the probabilistic forecasts of congestion would provide confidence limits and information for ranking the potential benefits of alleviating congestion at the various transmission bottlenecks.",
"The problem of forecasting the real-time locational marginal price (LMP) by a system operator is considered. A new probabilistic forecasting framework is developed based on a time in-homogeneous Markov chain representation of the realtime LMP calculation. By incorporating real-time measurements and forecasts, the proposed forecasting algorithm generates the posterior probability distribution of future locational marginal prices with forecast horizons of 6–8 hours. Such a short-term forecast provides actionable information for market participants and system operators. A Monte Carlo technique is used to estimate the posterior transition probabilities of the Markov chain, and the real-time LMP forecast is computed by the product of the estimated transition matrices. The proposed forecasting algorithm is tested on the PJM 5-bus system. Simulations show marked improvements over benchmark techniques."
]
} |
1503.06171 | 1923408327 | The short-term forecasting of real-time locational marginal price (LMP) and network congestion is considered from a system operator perspective. A new probabilistic forecasting technique is proposed based on a multiparametric programming formulation that partitions the uncertainty parameter space into critical regions from which the conditional probability distribution of the real-time LMP congestion is obtained. The proposed method incorporates load generation forecast, time varying operation constraints, and contingency models. By shifting the computation cost associated with multiparametric programs offline, the online computation cost is significantly reduced. An online simulation technique by generating critical regions dynamically is also proposed, which results in several orders of magnitude improvement in the computational cost over standard Monte Carlo methods. | There are several prior studies on LMP congestion forecasting from the system operator perspective. The proposed technique in @cite_16 employs an online Monte Carlo sampling technique that, for each Monte Carlo sample path, solves an optimal power flow (OPF) problem, which is computationally expensive. A Monte Carlo technique was also used in @cite_3, where a reduction of the random variable dimension is made using a nonhomogeneous Markov chain model based on a partition of the system state space. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2166805231"
],
"abstract": [
"Short-term congestion forecasting is highly important for market participants in wholesale power markets that use locational marginal prices (LMPs) to manage congestion. Accurate congestion forecasting facilitates market traders in bidding and trading activities and assists market operators in system planning. This study proposes a new short-term forecasting algorithm for congestion, LMPs, and other power system variables based on the concept of system patterns - combinations of status flags for generating units and transmission lines. The advantage of this algorithm relative to standard statistical forecasting methods is that structural aspects underlying power market operations are exploited to reduce forecast error. The advantage relative to previously proposed structural forecasting methods is that data requirements are substantially reduced. Forecasting results based on a NYISO case study demonstrate the feasibility and accuracy of the proposed algorithm."
]
} |
1503.06171 | 1923408327 | The short-term forecasting of real-time locational marginal price (LMP) and network congestion is considered from a system operator perspective. A new probabilistic forecasting technique is proposed based on a multiparametric programming formulation that partitions the uncertainty parameter space into critical regions from which the conditional probability distribution of the real-time LMP congestion is obtained. The proposed method incorporates load generation forecast, time varying operation constraints, and contingency models. By shifting the computation cost associated with multiparametric programs offline, the online computation cost is significantly reduced. An online simulation technique by generating critical regions dynamically is also proposed, which results in several orders of magnitude improvement in the computational cost over standard Monte Carlo methods. | The authors of @cite_4 introduce and exploit the decomposition of a multi-dimensional load space into critical regions (called system pattern regions) that are estimated using historical data The estimated critical regions are therefore random quantities. . The work in @cite_4 aims to address the following issue: Given a possible future point @math in a multi-dimensional load space, what is the probability distribution of the estimated critical regions that contain @math ? Since each critical region corresponds to a specific LMP congestion, the technique in @cite_4 gives a heuristic estimate of the probability distribution of LMP congestion by conditioning on load @math at some future point in time. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2166805231"
],
"abstract": [
"Short-term congestion forecasting is highly important for market participants in wholesale power markets that use locational marginal prices (LMPs) to manage congestion. Accurate congestion forecasting facilitates market traders in bidding and trading activities and assists market operators in system planning. This study proposes a new short-term forecasting algorithm for congestion, LMPs, and other power system variables based on the concept of system patterns - combinations of status flags for generating units and transmission lines. The advantage of this algorithm relative to standard statistical forecasting methods is that structural aspects underlying power market operations are exploited to reduce forecast error. The advantage relative to previously proposed structural forecasting methods is that data requirements are substantially reduced. Forecasting results based on a NYISO case study demonstrate the feasibility and accuracy of the proposed algorithm."
]
} |
1503.06171 | 1923408327 | The short-term forecasting of real-time locational marginal price (LMP) and network congestion is considered from a system operator perspective. A new probabilistic forecasting technique is proposed based on a multiparametric programming formulation that partitions the uncertainty parameter space into critical regions from which the conditional probability distribution of the real-time LMP congestion is obtained. The proposed method incorporates load generation forecast, time varying operation constraints, and contingency models. By shifting the computation cost associated with multiparametric programs offline, the online computation cost is significantly reduced. An online simulation technique by generating critical regions dynamically is also proposed, which results in several orders of magnitude improvement in the computational cost over standard Monte Carlo methods. | In contrast to @cite_4 , our objective is to forecast directly the probability distribution of future LMP congestion. Because a system operator has access to all private and public information about system conditions, the critical regions are computed exactly via a multiparametric program. This allows us to incorporate load and generation forecasts and obtain the (conditional) probability distribution of future LMP congestion directly. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2166805231"
],
"abstract": [
"Short-term congestion forecasting is highly important for market participants in wholesale power markets that use locational marginal prices (LMPs) to manage congestion. Accurate congestion forecasting facilitates market traders in bidding and trading activities and assists market operators in system planning. This study proposes a new short-term forecasting algorithm for congestion, LMPs, and other power system variables based on the concept of system patterns - combinations of status flags for generating units and transmission lines. The advantage of this algorithm relative to standard statistical forecasting methods is that structural aspects underlying power market operations are exploited to reduce forecast error. The advantage relative to previously proposed structural forecasting methods is that data requirements are substantially reduced. Forecasting results based on a NYISO case study demonstrate the feasibility and accuracy of the proposed algorithm."
]
} |
1503.06171 | 1923408327 | The short-term forecasting of real-time locational marginal price (LMP) and network congestion is considered from a system operator perspective. A new probabilistic forecasting technique is proposed based on a multiparametric programming formulation that partitions the uncertainty parameter space into critical regions from which the conditional probability distribution of the real-time LMP congestion is obtained. The proposed method incorporates load generation forecast, time varying operation constraints, and contingency models. By shifting the computation cost associated with multiparametric programs offline, the online computation cost is significantly reduced. An online simulation technique by generating critical regions dynamically is also proposed, which results in several orders of magnitude improvement in the computational cost over standard Monte Carlo methods. | Several techniques have been proposed to approximate the LMP distribution at a future time. In @cite_8 , a probabilistic LMP forecasting approach was proposed based on attaching a Gaussian distribution to a point estimate. The advantage is that the technique can incorporate various point forecasting methods. The disadvantage, on the other hand, is that network effects are not easy to incorporate. The authors of @cite_6 and @cite_11 approximate the probabilistic distribution of LMP using higher order moments and cumulants. These methods are based on representing the probability distribution as an infinite series involving moments or cumulants. In practice, computing or estimating higher order moments and cumulants are very difficult; lower order approximations are necessary. | {
"cite_N": [
"@cite_11",
"@cite_6",
"@cite_8"
],
"mid": [
"",
"2545256129",
"2124895057"
],
"abstract": [
"",
"The Locational Marginal Pricing (LMP) is a dominant approach in energy market operation and planning to identify the nodal price and to manage the transmission congestion. Considering the uncertainties associated with the input data of load flow, the LMP can be considered as a stochastic variable. Therefore calculation of LMP as a random variable can be very useful in power market forecasting studies. In this paper, LMP has been calculated with Cumulant & Gram-Charlier (CGC) method and compared with Monte Carlo and point estimation method. This method combines the concept of Cumulants and Gram-Charlier expansion theory to obtain Probabilistic Distribution Functions (PDF) and Cumulative Distribution Function (CDF) of LMP. It has significantly reduced the computational time while maintaining a high degree of accuracy. The method described in this paper applied to PJM test system. The sensitivity of LMP with variation of load has also been calculated and compared with deterministic calculation.",
"In power market studies, the forecast of locational marginal price (LMP) relies on the load forecasting results from the viewpoint of planning. It is well known that short-term load forecasting results always carry certain degree of errors mainly due to the random nature of the load. At the same time, LMP step changes occur at critical load levels (CLLs). Therefore, it is interesting to investigate the impact of load forecasting uncertainty on LMP. With the assumption of a certain probability distribution of the actual load, this paper proposes the concept of probabilistic LMP and formulates the probability mass function of this random variable. The expected value of probabilistic LMP is then derived, as well as the lower and upper bound of its sensitivity. In addition, two useful curves, alignment probability of deterministic LMP versus forecasted load and expected value of probabilistic LMP versus forecasted load, are presented. The first curve is designed to identify the probability that the forecasted price in a deterministic LMP matches the actual price at the forecasted load level. The second curve is demonstrated to be smooth and therefore eliminates the step changes in deterministic LMP forecasting. This helps planners avoid the possible sharp changes during decision-making process. The proposed concept and method are illustrated with a modified PJM five-bus system and the IEEE 118-bus system."
]
} |
1503.05171 | 2153538292 | Software release development process, that we refer to as "release trajectory", involves development activities that are usually sorted in different categories, such as incorporating new features, improving software, or fixing bugs, and associated to "issues". Release trajectory management is a difficult and crucial task. Managers must be aware of every aspect of the development process for managing the software-related issues. Issue Tracking Systems (ITS) play a central role in supporting the management of release trajectory. These systems, which support reporting and tracking issues of different kinds (such as "bug", "feature", "improvement", etc.), record rich data about the software development process. Yet, recorded historical data in ITS are still not well-modeled for supporting practical needs of release trajectory management. In this paper, we describe a sequence analysis approach for modeling and analyzing releases' trajectories, using the tracking process of reported issues. Release trajectory analysis is based on the categories of tracked issues and their temporal changing, and aims to address important questions regarding the co-habitation of unresolved issues, the transitions between different statuses in release trajectory, the recurrent patterns of release trajectories, and the properties of a release trajectory. | In a different direction, D'Ambros and Lanza @cite_11 propose a visualization to analyze and characterize the evolution of software entities, at different granularity levels. Besides, there is a family of research papers that focus on analyzing software change logs for identifying commits that contain tangled changes @cite_1 or peripheral modifications @cite_17 , providing further insights on the nature of commits @cite_12 @cite_8 , or for identifying the commit window of a release @cite_29 . | {
"cite_N": [
"@cite_11",
"@cite_8",
"@cite_29",
"@cite_1",
"@cite_12",
"@cite_17"
],
"mid": [
"2103526356",
"",
"1984103988",
"2137444776",
"2018638699",
"2039432697"
],
"abstract": [
"Versioning systems such as CVS exhibit a large potential to investigate and understand the evolution of large software systems. Bug reporting systems such as Bugzilla help to understand which parts of the system are affected by problems. In this article, we present a novel visual approach to uncover the relationship between evolving software and the way it is affected by software bugs. By visually putting the two aspects close to each other, we can characterize the evolution of software artifacts. We validate our approach on 3 very large open source software systems.",
"",
"The paper presents an empirical study on the release naming and structure in three open source projects: Google Chrome, GNU gcc, and Subversion. Their commonality and variability are discussed. An approach is developed that establishes the mapping from a particular release (major or minor) to the specific earliest and latest revisions, i.e., a commit window of a release, in the source control repository. For example, the major release 25.0 in Chrome is mapped to the earliest revision 157687 and latest revision 165096 in the trunk. This mapping between releases and commits would facilitate a systematic choice of history in units of the project evolution scale (i.e., commits that constitute a software release). A projected application is in forming a training set for a source-code change prediction model, e.g., using the association rule mining or machine learning techniques, commits from the source code history are needed.",
"Although there is a principle that states a commit should only include changes for a single task, it is not always respected by developers. This means that code repositories often include commits that contain tangled changes. The presence of such tangled changes hinders analyzing code repositories because most mining software repository (MSR) approaches are designed with the assumption that every commit includes only changes for a single task. In this paper, we propose a technique to inform developers that they are in the process of committing tangled changes. The proposed technique utilizes the changes included in the past commits to judge whether a given commit includes tangled changes. If it determines that the proposed commit may include tangled changes, it offers suggestions on how the tangled changes can be split into a set of untangled changes.",
"Information contained in versioning system commits has been frequently used to support software evolution research. Concomitantly, some researchers have tried to relate commits to certain activities, e.g., large commits are more likely to be originated from code management activities, while small ones are related to development activities. However, these characterizations are vague, because there is no consistent definition of what is a small or a large commit. In this paper, we study the nature of commits in two dimensions. First, we define the size of commits in terms of number of files, and then we classify commits based on the content of their comments. To perform this study, we use the history log of nine large open source projects.",
"In the last decade, a variety of studies on mining software repositories has been conducted. Mining repositories has a potential to obtain useful knowledge for the future development and maintenance. When software repositories are mined, large commits in them are often excluded from mining targets because large commits include merging and we believe that large commits include peripheral modifications, which may affect negative impacts on mining code repositories. However, if large commits include code modifications, excluding large commits loses such modifications unintentionally. Moreover, such data cleansing assumes that there are no peripheral modifications in small commits. In this paper, we investigate how much peripheral modifications are included in commits in code repositories. As a result, we found that excluding large commits is insufficient to remove hindrances in commits for mining code repositories."
]
} |
1503.05032 | 2950015285 | Sparse matrix-vector multiplication (SpMV) is a fundamental building block for numerous applications. In this paper, we propose CSR5 (Compressed Sparse Row 5), a new storage format, which offers high-throughput SpMV on various platforms including CPUs, GPUs and Xeon Phi. First, the CSR5 format is insensitive to the sparsity structure of the input matrix. Thus the single format can support an SpMV algorithm that is efficient both for regular matrices and for irregular matrices. Furthermore, we show that the overhead of the format conversion from the CSR to the CSR5 can be as low as the cost of a few SpMV operations. We compare the CSR5-based SpMV algorithm with 11 state-of-the-art formats and algorithms on four mainstream processors using 14 regular and 10 irregular matrices as a benchmark suite. For the 14 regular matrices in the suite, we achieve comparable or better performance over the previous work. For the 10 irregular matrices, the CSR5 obtains average performance improvement of 17.6%, 28.5%, 173.0% and 293.3% (up to 213.3%, 153.6%, 405.1% and 943.3%) over the best existing work on dual-socket Intel CPUs, an nVidia GPU, an AMD GPU and an Intel Xeon Phi, respectively. For real-world applications such as a solver with only tens of iterations, the CSR5 format can be more practical because of its low-overhead for format conversion. The source code of this work is downloadable at this https URL | The recent approaches showed good performance either for regular matrices @cite_42 or for irregular matrices @cite_17 , but not for both. In contrast, the CSR5 can deliver higher throughput both for regular matrices and for irregular matrices. | {
"cite_N": [
"@cite_42",
"@cite_17"
],
"mid": [
"2088866486",
"2087507944"
],
"abstract": [
"The performance of sparse matrix vector multiplication (SpMV) is important to computational scientists. Compressed sparse row (CSR) is the most frequently used format to store sparse matrices. However, CSR-based SpMV on graphics processing units (GPUs) has poor performance due to irregular memory access patterns, load imbalance, and reduced parallelism. This has led researchers to propose new storage formats. Unfortunately, dynamically transforming CSR into these formats has significant runtime and storage overheads. We propose a novel algorithm, CSR-Adaptive, which keeps the CSR format intact and maps well to GPUs. Our implementation addresses the aforementioned challenges by (i) efficiently accessing DRAM by streaming data into the local scratchpad memory and (ii) dynamically assigning different numbers of rows to each parallel GPU compute unit. CSR-Adaptive achieves an average speedup of 14.7 × over existing CSR-based algorithms and 2.3× over clSpMV cocktail, which uses an assortment of matrix formats.",
"Sparse matrix-vector multiplication (SpMV) is a widely used computational kernel. The most commonly used format for a sparse matrix is CSR (Compressed Sparse Row), but a number of other representations have recently been developed that achieve higher SpMV performance. However, the alternative representations typically impose a significant preprocessing overhead. While a high preprocessing overhead can be amortized for applications requiring many iterative invocations of SpMV that use the same matrix, it is not always feasible -- for instance when analyzing large dynamically evolving graphs. This paper presents ACSR, an adaptive SpMV algorithm that uses the standard CSR format but reduces thread divergence by combining rows into groups (bins) which have a similar number of non-zero elements. Further, for rows in bins that span a wide range of non zero counts, dynamic parallelism is leveraged. A significant benefit of ACSR over other proposed SpMV approaches is that it works directly with the standard CSR format, and thus avoids significant preprocessing overheads. A CUDA implementation of ACSR is shown to outperform SpMV implementations in the NVIDIA CUSP and cuSPARSE libraries on a set of sparse matrices representing power-law graphs. We also demonstrate the use of ACSR for the analysis of dynamic graphs, where the improvement over extant approaches is even higher."
]
} |
1503.05032 | 2950015285 | Sparse matrix-vector multiplication (SpMV) is a fundamental building block for numerous applications. In this paper, we propose CSR5 (Compressed Sparse Row 5), a new storage format, which offers high-throughput SpMV on various platforms including CPUs, GPUs and Xeon Phi. First, the CSR5 format is insensitive to the sparsity structure of the input matrix. Thus the single format can support an SpMV algorithm that is efficient both for regular matrices and for irregular matrices. Furthermore, we show that the overhead of the format conversion from the CSR to the CSR5 can be as low as the cost of a few SpMV operations. We compare the CSR5-based SpMV algorithm with 11 state-of-the-art formats and algorithms on four mainstream processors using 14 regular and 10 irregular matrices as a benchmark suite. For the 14 regular matrices in the suite, we achieve comparable or better performance than the previous work. For the 10 irregular matrices, the CSR5 obtains average performance improvement of 17.6%, 28.5%, 173.0% and 293.3% (up to 213.3%, 153.6%, 405.1% and 943.3%) over the best existing work on dual-socket Intel CPUs, an nVidia GPU, an AMD GPU and an Intel Xeon Phi, respectively. For real-world applications such as a solver with only tens of iterations, the CSR5 format can be more practical because of its low overhead for format conversion. The source code of this work is downloadable at this https URL | @cite_4 constructed machine learning classifiers to select the best storage format for a given sparse matrix on a target GPU. The CSR5 format described in this work can further simplify such a selection process because it is insensitive to the sparsity structure of the input sparse matrix. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2091883426"
],
"abstract": [
"Sparse matrix-vector multiplication (SpMV) is a core kernel in numerous applications, ranging from physics simulation and large-scale solvers to data analytics. Many GPU implementations of SpMV have been proposed, targeting several sparse representations and aiming at maximizing overall performance. No single sparse matrix representation is uniformly superior, and the best performing representation varies for sparse matrices with different sparsity patterns. In this paper, we study the inter-relation between GPU architecture, sparse matrix representation and the sparse dataset. We perform extensive characterization of pertinent sparsity features of around 700 sparse matrices, and their SpMV performance with a number of sparse representations implemented in the NVIDIA CUSP and cuSPARSE libraries. We then build a decision model using machine learning to automatically select the best representation to use for a given sparse matrix on a given target platform, based on the sparse matrix features. Experimental results on three GPUs demonstrate that the approach is very effective in selecting the best representation."
]
} |
1503.04881 | 2111369166 | The chain-structured long short-term memory (LSTM) has been shown to be effective in a wide range of problems such as speech recognition and machine translation. In this paper, we propose to extend it to tree structures, in which a memory cell can reflect the history memories of multiple child cells or multiple descendant cells in a recursive process. We call the model S-LSTM, which provides a principled way of considering long-distance interaction over hierarchies, e.g., language or image parse structures. We leverage the models for semantic composition to understand the meaning of text, a fundamental problem in natural language understanding, and show that it outperforms a state-of-the-art recursive model by replacing its composition layers with the S-LSTM memory blocks. We also show that utilizing the given structures is helpful in achieving better performance than that obtained without considering the structures. | Recursive neural networks. Recursion is a fundamental process in different modalities. In recent years, recursive neural networks (RvNN) have been introduced and demonstrated to achieve state-of-the-art performance on different problems such as semantic analysis in natural language processing and image segmentation @cite_26 @cite_9 . These networks are defined over recursive tree structures---a tree node is a vector computed from its children. In a recursive fashion, the information from the leaf nodes of a tree and its internal nodes are combined in a bottom-up manner through the tree. Derivatives of errors are computed with backpropagation over structures @cite_4 . | {
"cite_N": [
"@cite_9",
"@cite_26",
"@cite_4"
],
"mid": [
"1423339008",
"2251939518",
""
],
"abstract": [
"Recursive structure is commonly found in the inputs of different modalities such as natural scene images or natural language sentences. Discovering this recursive structure helps us to not only identify the units that an image or sentence contains but also how they interact to form a whole. We introduce a max-margin structure prediction architecture based on recursive neural networks that can successfully recover such structure both in complex scene images as well as sentences. The same algorithm can be used both to provide a competitive syntactic parser for natural language sentences from the Penn Treebank and to outperform alternative approaches for semantic scene segmentation, annotation and classification. For segmentation and annotation our algorithm obtains a new level of state-of-the-art performance on the Stanford background dataset (78.1%). The features from the image parse tree outperform Gist descriptors for scene classification by 4%.",
"Semantic word spaces have been very useful but cannot express the meaning of longer phrases in a principled way. Further progress towards understanding compositionality in tasks such as sentiment detection requires richer supervised training and evaluation resources and more powerful models of composition. To remedy this, we introduce a Sentiment Treebank. It includes fine grained sentiment labels for 215,154 phrases in the parse trees of 11,855 sentences and presents new challenges for sentiment compositionality. To address them, we introduce the Recursive Neural Tensor Network. When trained on the new treebank, this model outperforms all previous methods on several metrics. It pushes the state of the art in single sentence positive/negative classification from 80% up to 85.4%. The accuracy of predicting fine-grained sentiment labels for all phrases reaches 80.7%, an improvement of 9.7% over bag of features baselines. Lastly, it is the only model that can accurately capture the effects of negation and its scope at various tree levels for both positive and negative phrases.",
""
]
} |
1503.04881 | 2111369166 | The chain-structured long short-term memory (LSTM) has been shown to be effective in a wide range of problems such as speech recognition and machine translation. In this paper, we propose to extend it to tree structures, in which a memory cell can reflect the history memories of multiple child cells or multiple descendant cells in a recursive process. We call the model S-LSTM, which provides a principled way of considering long-distance interaction over hierarchies, e.g., language or image parse structures. We leverage the models for semantic composition to understand the meaning of text, a fundamental problem in natural language understanding, and show that it outperforms a state-of-the-art recursive model by replacing its composition layers with the S-LSTM memory blocks. We also show that utilizing the given structures is helpful in achieving better performance than that obtained without considering the structures. | In addition, the literature has also included many other efforts applying feedforward-based neural networks over structures, including @cite_4 @cite_7 @cite_8 @cite_10 , amongst others. For instance, Legrand and Collobert leverage neural networks over greedy syntactic parsing @cite_0 . In @cite_15 , a deep recursive neural network is proposed. Nevertheless, over the often deep structures, the networks are potentially subject to the vanishing gradient problem, resulting in difficulties in leveraging long-distance dependencies in the structures. In this paper, we propose the S-LSTM model that wires memory blocks in recursive structures. We compare our model with the RvNN models presented in @cite_26 , as we directly replaced the tensor-enhanced composition layer at each tree node with an S-LSTM memory block. We show the advantages of our proposed model in achieving significantly better results. | {
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_7",
"@cite_8",
"@cite_0",
"@cite_15",
"@cite_10"
],
"mid": [
"2251939518",
"",
"2621208586",
"2128731358",
"1546771929",
"2147489358",
"2079587596"
],
"abstract": [
"Semantic word spaces have been very useful but cannot express the meaning of longer phrases in a principled way. Further progress towards understanding compositionality in tasks such as sentiment detection requires richer supervised training and evaluation resources and more powerful models of composition. To remedy this, we introduce a Sentiment Treebank. It includes fine grained sentiment labels for 215,154 phrases in the parse trees of 11,855 sentences and presents new challenges for sentiment compositionality. To address them, we introduce the Recursive Neural Tensor Network. When trained on the new treebank, this model outperforms all previous methods on several metrics. It pushes the state of the art in single sentence positive/negative classification from 80% up to 85.4%. The accuracy of predicting fine-grained sentiment labels for all phrases reaches 80.7%, an improvement of 9.7% over bag of features baselines. Lastly, it is the only model that can accurately capture the effects of negation and its scope at various tree levels for both positive and negative phrases.",
"",
"",
"Temporal sequence learning is one of the most critical components for human intelligence. In this paper, a novel hierarchical structure for complex temporal sequence learning is proposed. Hierarchical organization, a prediction mechanism, and one-shot learning characterize the model. In the lowest level of the hierarchy, we use a modified Hebbian learning mechanism for pattern recognition. Our model employs both active 0 and active 1 sensory inputs. A winner-take-all (WTA) mechanism is used to select active neurons that become the input for sequence learning at higher hierarchical levels. Prediction is an essential element of our temporal sequence learning model. By correct prediction, the machine indicates it knows the current sequence and does not require additional learning. When the prediction is incorrect, one-shot learning is executed and the machine learns the new input sequence as soon as the sequence is completed. A four-level hierarchical structure that isolates letters, words, sentences, and strophes is used in this paper to illustrate the model",
"The goal of the scene labeling task is to assign a class label to each pixel in an image. To ensure a good visual coherence and a high class accuracy, it is essential for a model to capture long range (pixel) label dependencies in images. In a feed-forward architecture, this can be achieved simply by considering a sufficiently large input context patch, around each pixel to be labeled. We propose an approach that consists of a recurrent convolutional neural network which allows us to consider a large input context while limiting the capacity of the model. Contrary to most standard approaches, our method does not rely on any segmentation technique nor any task-specific features. The system is trained in an end-to-end manner over raw pixels, and models complex spatial dependencies with low inference cost. As the context size increases with the built-in recurrence, the system identifies and corrects its own errors. Our approach yields state-of-the-art performance on both the Stanford Background Dataset and the SIFT Flow Dataset, while remaining very fast at test time.",
"Recursive neural networks comprise a class of architecture that can operate on structured input. They have been previously successfully applied to model com-positionality in natural language using parse-tree-based structural representations. Even though these architectures are deep in structure, they lack the capacity for hierarchical representation that exists in conventional deep feed-forward networks as well as in recently investigated deep recurrent neural networks. In this work we introduce a new architecture — a deep recursive neural network (deep RNN) — constructed by stacking multiple recursive layers. We evaluate the proposed model on the task of fine-grained sentiment classification. Our results show that deep RNNs outperform associated shallow counterparts that employ the same number of parameters. Furthermore, our approach outperforms previous baselines on the sentiment analysis task, including a multiplicative RNN variant as well as the recently introduced paragraph vectors, achieving new state-of-the-art results. We provide exploratory analyses of the effect of multiple layers and show that they capture different aspects of compositionality in language.",
"Abstract Self-organization constitutes an important paradigm in machine learning with successful applications e.g. in data- and web-mining. Most approaches, however, have been proposed for processing data contained in a fixed and finite dimensional vector space. In this article, we will focus on extensions to more general data structures like sequences and tree structures. Various modifications of the standard self-organizing map (SOM) to sequences or tree structures have been proposed in the literature some of which are the temporal Kohonen map, the recursive SOM, and SOM for structured data. These methods enhance the standard SOM by utilizing recursive connections. We define a general recursive dynamic in this article which provides recursive processing of complex data structures by recursive computation of internal representations for the given context. The above mentioned mechanisms of SOMs for structures are special cases of the proposed general dynamic. Furthermore, the dynamic covers the supervised case of recurrent and recursive networks. The general framework offers an uniform notation for training mechanisms such as Hebbian learning. Moreover, the transfer of computational alternatives such as vector quantization or the neural gas algorithm to structure processing networks can be easily achieved. One can formulate general cost functions corresponding to vector quantization, neural gas, and a modification of SOM. The cost functions can be compared to Hebbian learning which can be interpreted as an approximation of a stochastic gradient descent. For comparison, we derive the exact gradients for general cost functions."
]
} |
1503.04768 | 2952235030 | In many scenarios, networks emerge endogenously as cognitive agents establish links in order to exchange information. Network formation has been widely studied in economics, but only on the basis of simplistic models that assume that the value of each additional piece of information is constant. In this paper we present a first model and associated analysis for network formation under the much more realistic assumption that the value of each additional piece of information depends on the type of that piece of information and on the information already possessed: information may be complementary or redundant. We model the formation of a network as a non-cooperative game in which the actions are the formation of links and the benefit of forming a link is the value of the information exchanged minus the cost of forming the link. We characterize the topologies of the networks emerging at a Nash equilibrium (NE) of this game and compare the efficiency of equilibrium networks with the efficiency of centrally designed networks. To quantify the impact of information redundancy and linking cost on social information loss, we provide estimates for the Price of Anarchy (PoA); to quantify the impact on individual information loss we introduce and provide estimates for a measure we call Maximum Information Loss (MIL). Finally, we consider the setting in which agents are not endowed with information, but must produce it. We show that the validity of the well-known "law of the few" depends on how information aggregates; in particular, the "law of the few" fails when information displays complementarities. | Strategic network formation was first studied in the economics literature. Some of this literature @cite_25 - @cite_5 asks which networks are stable (according to some criteria) and hence more likely to persist and be observed. A (smaller) literature asks which networks emerge as the result of some specific dynamic process @cite_21 @cite_0 . 
In all these works, simplistic benefit functions are used: the value of each additional "good" exchanged is constant @cite_25 - @cite_14 . However, in realistic settings, information possessed by different agents can be redundant or complementary. For instance, secondary users in a multi-band cognitive radio system may be interested in gathering information about spectrum occupancy for bands that they do not sense by communicating with other users who do sense these bands @cite_11 ; sensors deployed over a correlated random field @cite_23 - @cite_17 may be interested in gathering complementary measurements about some set of physical processes of interest; and mobile users who exchange offloaded traffic of SNSs and context-aware applications are only interested in gathering non-redundant traffic and data updates. | {
"cite_N": [
"@cite_14",
"@cite_21",
"@cite_17",
"@cite_0",
"@cite_23",
"@cite_5",
"@cite_25",
"@cite_11"
],
"mid": [
"2149210625",
"2015358738",
"2140702062",
"2081675399",
"2171376656",
"2103720435",
"2048723886",
"2158604761"
],
"abstract": [
"Peer-to-peer (P2P) networks can be easily deployed to distribute user-generated content at a low cost, but the free-rider problem hinders the efficient utilization of P2P networks. Using game theory, we investigate incentive schemes to overcome the free-rider problem in content production and sharing. We build a basic model and obtain two benchmark outcomes: 1) the non-cooperative outcome without any incentive scheme and 2) the cooperative outcome. We then propose and examine three incentive schemes based on pricing, reciprocation, and intervention. We also study a brute-force scheme that enforces full sharing of produced content. We find that 1) cooperative peers share all produced content while non-cooperative peers do not share at all without an incentive scheme; 2) by utilizing the P2P network efficiently, the cooperative outcome achieves higher social welfare than the non-cooperative outcome does; 3) a cooperative outcome can be achieved among non-cooperative peers by introducing an incentive scheme based on pricing, reciprocation, or intervention; and 4) enforced full sharing has ambiguous welfare effects on peers. In addition to describing the solutions of different formulations, we discuss enforcement and informational requirements to implement each solution, aiming to offer a guideline for protocol design for P2P networks.",
"We analyze networks that feature reputational learning: how links are initially formed by agents under incomplete information, how agents learn about their neighbors through these links, and how links may ultimately become broken. We show that the type of information agents have access to, and the speed at which agents learn about each other, can have tremendous repercussions for the network evolution and the overall network social welfare. Specifically, faster learning can often be harmful for networks as a whole if agents are myopic, because agents fail to fully internalize the benefits of experimentation and break off links too quickly. As a result, preventing two agents from linking with each other can be socially beneficial, even if the two agents are initially believed to be of high quality. This is due to the fact that having fewer connections slows the rate of learning about these agents, which can be socially beneficial. Another method of solving the informational problem is to impose costs for breaking links, in order to incentivize agents to experiment more carefully.",
"In this paper, we propose a routing algorithm called minimum fusion Steiner tree (MFST) for energy efficient data gathering with aggregation (fusion) in wireless sensor networks. Different from existing schemes, MFST not only optimizes over the data transmission cost, but also incorporates the cost for data fusion, which can be significant for emerging sensor networks with vectorial data and/or security requirements. By employing a randomized algorithm that allows fusion points to be chosen according to the nodes' data amounts, MFST achieves an approximation ratio of 5/4·log(k + 1), where k denotes the number of source nodes, to the optimal solution for extremely general system setups, provided that fusion cost and data aggregation are nondecreasing against the total input data. Consequently, in contrast to algorithms that only excel in full or nonaggregation scenarios without considering fusion cost, MFST can thrive in a wide range of applications.",
"How do networks form and what is their ultimate topology? Most of the literature that addresses these questions assumes complete information: agents know in advance the value of linking even with agents they have never met and with whom they have had no previous interaction (direct or indirect). This paper addresses the same questions under the much more natural assumption of incomplete information: agents do not know in advance—but must learn—the value of linking. We show that incomplete information has profound implications for the formation process and the ultimate topology. Under complete information, the network topologies that form and are stable typically consist of agents of relatively high value only. Under incomplete information, a much wider collection of network topologies can emerge and be stable. Moreover, even with the same topology, the locations of agents can be very different: An agent can achieve a central position purely as the result of chance rather than as the result of merit. All of this can occur even in settings where agents eventually learn everything so that information, although initially incomplete, eventually becomes complete. The ultimate network topology depends significantly on the formation history, which is natural and true in practice, and incomplete information makes this phenomenon more prevalent.",
"Consider the following network communication setup, originating in a sensor networking application we refer to as the \"sensor reachback\" problem. We have a directed graph G=(V,E), where V = {v_0, v_1, ..., v_n} and E ⊆ V × V. If (v_i, v_j) ∈ E, then node i can send messages to node j over a discrete memoryless channel (DMC) (X_ij, p_ij(y|x), Y_ij), of capacity C_ij. The channels are independent. Each node v_i gets to observe a source of information U_i (i=0...M), with joint distribution p(U_0, U_1, ..., U_M). Our goal is to solve an incast problem in G: nodes exchange messages with their neighbors, and after a finite number of communication rounds, one of the M+1 nodes (v_0 by convention) must have received enough information to reproduce the entire field of observations (U_0, U_1, ..., U_M), with arbitrarily small probability of error. In this paper, we prove that such perfect reconstruction is possible if and only if H(U_S | U_{S^c}) < Σ_{i∈S, j∈S^c} C_ij, for all S ⊆ {0,...,M}, S ≠ ∅, 0 ∈ S^c. Our main finding is that in this setup, a general source/channel separation theorem holds, and that Shannon information behaves as a classical network flow, identical in nature to the flow of water in pipes. At first glance, it might seem surprising that separation holds in a fairly general network situation like the one we study. A closer look, however, reveals that the reason for this is that our model allows only for independent point-to-point channels between pairs of nodes, and not multiple-access and/or broadcast channels, for which separation is well known not to hold. This \"information as flow\" view provides an algorithmic interpretation for our results, among which perhaps the most important one is the optimality of implementing codes using a layered protocol stack.",
"This paper presents the first study of the endogenous formation of networks by strategic, self-interested agents who benefit from producing and disseminating information. This work departs from previous works on network formation (especially in the economics literature) which assume that agents benefit only by acquiring information produced by other agents. The strategic production and dissemination of information have striking consequences. We show first that the network structure that emerges (in equilibrium) typically displays a core-periphery structure, with the few agents at the core playing the role of \"connectors\", creating and maintaining links to the agents at the periphery. We then determine conditions under which the networks that emerge are minimally connected and have short network diameters (properties that are important for efficiency). Finally, we show that the number of agents who produce information and the total amount of information produced in the network grow at the same rate as the agent population; this is in stark contrast to the \"law of the few\" that had been established in previous works which do not consider information dissemination.",
"Recently, localization has become an indispensable technique for wireless applications. In view of the limitation of global position system (GPS) in certain environments, alternative approaches are in demand. In this paper, we consider a cooperative localization approach named sum-product algorithm over a wireless network (SPAWN). Although SPAWN theoretically facilitates cooperative localization, it has several practical limitations. Specifically, SPAWN results in high computational complexity and increased network traffic. The main complexity of SPAWN lies in the selection of agents anchors involved in the cooperative localization. To this end, we formulate the agent anchor selection problem into a network formation game. Together with a practical limit on the number of agents anchors used for cooperative localization, our proposed approach can markedly reduce the computational complexity and the resultant network traffic. Simulations show that these advantages come with a slight degradation in the localization mean squared error (MSE) performance.",
"Collaborative spectrum sensing among secondary users (SUs) in cognitive networks is shown to yield a significant performance improvement. However, there exists an inherent trade off between the gains in terms of probability of detection of the primary user (PU) and the costs in terms of false alarm probability. In this paper, we study the impact of this trade off on the topology and the dynamics of a network of SUs seeking to reduce the interference on the PU through collaborative sensing. Moreover, while existing literature mainly focused on centralized solutions for collaborative sensing, we propose distributed collaboration strategies through game theory. We model the problem as a non-transferable coalitional game, and propose a distributed algorithm for coalition formation through simple merge and split rules. Through the proposed algorithm, SUs can autonomously collaborate and self-organize into disjoint independent coalitions, while maximizing their detection probability taking into account the cooperation costs (in terms of false alarm). We study the stability of the resulting network structure, and show that a maximum number of SUs per formed coalition exists for the proposed utility model. Simulation results show that the proposed algorithm allows a reduction of up to 86.6 of the average missing probability per SU (probability of missing the detection of the PU) relative to the non-cooperative case, while maintaining a certain false alarm level. In addition, through simulations, we compare the performance of the proposed distributed solution with respect to an optimal centralized solution that minimizes the average missing probability per SU. Finally, the results also show how the proposed algorithm autonomously adapts the network topology to environmental changes such as mobility."
]
} |
1503.04996 | 1741712551 | Random Forest (RF) is an ensemble supervised machine learning technique that was developed by Breiman over a decade ago. Compared with other ensemble techniques, it has proved its accuracy and superiority. Many researchers, however, believe that there is still room for enhancing and improving its performance accuracy. This explains why, over the past decade, there have been many extensions of RF where each extension employed a variety of techniques and strategies to improve certain aspect(s) of RF. Since it has been proven empirically that ensembles tend to yield better results when there is a significant diversity among the constituent models, the objective of this paper is twofold. First, it investigates how data clustering (a well known diversity technique) can be applied to identify groups of similar decision trees in an RF in order to eliminate redundant trees by selecting a representative from each group (cluster). Second, these likely diverse representatives are then used to produce an extension of RF termed CLUB-DRF that is much smaller in size than RF, and yet performs at least as well as RF, and mostly exhibits higher performance in terms of accuracy. The latter refers to a known technique called ensemble pruning. Experimental results on 15 real datasets from the UCI repository prove the superiority of our proposed extension over the traditional RF. Most of our experiments achieved a pruning level of at least 95% while retaining or outperforming the RF accuracy. | Because of the vital role diversity plays in the performance of ensembles, it has received a lot of attention from the research community. G. @cite_20 summarized the work done to date in this domain from two main perspectives. The first is a review of the various attempts that were made to provide a formal foundation of diversity. The second, which is more relevant to this paper, is a survey of the various techniques to produce diverse ensembles. 
For the latter, two types of diversity methods were identified: implicit and explicit. While implicit methods tend to use randomness to generate diverse trajectories in the hypothesis space, explicit methods choose different paths in the space deterministically. In light of these definitions, bagging and boosting in the previous section are classified as implicit and explicit, respectively. | {
"cite_N": [
"@cite_20"
],
"mid": [
"2167055186"
],
"abstract": [
"Ensemble approaches to classification and regression have attracted a great deal of interest in recent years. These methods can be shown both theoretically and empirically to outperform single predictors on a wide range of tasks. One of the elements required for accurate prediction when using an ensemble is recognised to be error \"diversity\". However, the exact meaning of this concept is not clear from the literature, particularly for classification tasks. In this paper we first review the varied attempts to provide a formal explanation of error diversity, including several heuristic and qualitative explanations in the literature. For completeness of discussion we include not only the classification literature but also some excerpts of the rather more mature regression literature, which we believe can still provide some insights. We proceed to survey the various techniques used for creating diverse ensembles, and categorise them, forming a preliminary taxonomy of diversity creation methods. As part of this taxonomy we introduce the idea of implicit and explicit diversity creation methods, and three dimensions along which these may be applied. Finally we propose some new directions that may prove fruitful in understanding classification error diversity."
]
} |
1503.04996 | 1741712551 | Random Forest (RF) is an ensemble supervised machine learning technique that was developed by Breiman over a decade ago. Compared with other ensemble techniques, it has proved its accuracy and superiority. Many researchers, however, believe that there is still room for enhancing and improving its performance accuracy. This explains why, over the past decade, there have been many extensions of RF where each extension employed a variety of techniques and strategies to improve certain aspect(s) of RF. Since it has been proven empirically that ensembles tend to yield better results when there is a significant diversity among the constituent models, the objective of this paper is twofold. First, it investigates how data clustering (a well-known diversity technique) can be applied to identify groups of similar decision trees in an RF in order to eliminate redundant trees by selecting a representative from each group (cluster). Second, these likely diverse representatives are then used to produce an extension of RF termed CLUB-DRF that is much smaller in size than RF, and yet performs at least as well as RF, and mostly exhibits higher performance in terms of accuracy. The latter refers to a known technique called ensemble pruning. Experimental results on 15 real datasets from the UCI repository prove the superiority of our proposed extension over the traditional RF. Most of our experiments achieved a pruning level of 95% or above while retaining or outperforming the RF accuracy. | G. @cite_20 also categorized ensemble diversity techniques into three categories: starting point in hypothesis space, set of accessible hypotheses, and manipulation of training data. Methods in the first category use different starting points in the hypothesis space, thereby influencing where the search converges within the space.
Because of their poor performance in achieving diversity, such methods are used by many authors as a default benchmark for their own methods @cite_35 . Methods in the second category vary the set of hypotheses that are available and accessible to the ensemble. For different ensembles, these methods vary either the training data used or the architecture employed. In the third category, the methods alter the way the space is traversed. Occupying any point in the search space gives a particular hypothesis. The type of the ensemble obtained will be determined by how the space of the possible hypotheses is traversed. | {
"cite_N": [
"@cite_35",
"@cite_20"
],
"mid": [
"2100805904",
"2167055186"
],
"abstract": [
"An ensemble consists of a set of individually trained classifiers (such as neural networks or decision trees) whose predictions are combined when classifying novel instances. Previous research has shown that an ensemble is often more accurate than any of the single classifiers in the ensemble. Bagging (Breiman, 1996c) and Boosting (Freund & Schapire, 1996; Schapire, 1990) are two relatively new but popular methods for producing ensembles. In this paper we evaluate these methods on 23 data sets using both neural networks and decision trees as our classification algorithm. Our results clearly indicate a number of conclusions. First, while Bagging is almost always more accurate than a single classifier, it is sometimes much less accurate than Boosting. On the other hand, Boosting can create ensembles that are less accurate than a single classifier - especially when using neural networks. Analysis indicates that the performance of the Boosting methods is dependent on the characteristics of the data set being examined. In fact, further results show that Boosting ensembles may overfit noisy data sets, thus decreasing its performance. Finally, consistent with previous studies, our work suggests that most of the gain in an ensemble's performance comes in the first few classifiers combined; however, relatively large gains can be seen up to 25 classifiers when Boosting decision trees.",
"Ensemble approaches to classification and regression have attracted a great deal of interest in recent years. These methods can be shown both theoretically and empirically to outperform single predictors on a wide range of tasks. One of the elements required for accurate prediction when using an ensemble is recognised to be error \"diversity\". However, the exact meaning of this concept is not clear from the literature, particularly for classification tasks. In this paper we first review the varied attempts to provide a formal explanation of error diversity, including several heuristic and qualitative explanations in the literature. For completeness of discussion we include not only the classification literature but also some excerpts of the rather more mature regression literature, which we believe can still provide some insights. We proceed to survey the various techniques used for creating diverse ensembles, and categorise them, forming a preliminary taxonomy of diversity creation methods. As part of this taxonomy we introduce the idea of implicit and explicit diversity creation methods, and three dimensions along which these may be applied. Finally we propose some new directions that may prove fruitful in understanding classification error diversity."
]
} |
1503.04996 | 1741712551 | Random Forest (RF) is an ensemble supervised machine learning technique that was developed by Breiman over a decade ago. Compared with other ensemble techniques, it has proved its accuracy and superiority. Many researchers, however, believe that there is still room for enhancing and improving its performance accuracy. This explains why, over the past decade, there have been many extensions of RF where each extension employed a variety of techniques and strategies to improve certain aspect(s) of RF. Since it has been proven empirically that ensembles tend to yield better results when there is a significant diversity among the constituent models, the objective of this paper is twofold. First, it investigates how data clustering (a well-known diversity technique) can be applied to identify groups of similar decision trees in an RF in order to eliminate redundant trees by selecting a representative from each group (cluster). Second, these likely diverse representatives are then used to produce an extension of RF termed CLUB-DRF that is much smaller in size than RF, and yet performs at least as well as RF, and mostly exhibits higher performance in terms of accuracy. The latter refers to a known technique called ensemble pruning. Experimental results on 15 real datasets from the UCI repository prove the superiority of our proposed extension over the traditional RF. Most of our experiments achieved a pruning level of 95% or above while retaining or outperforming the RF accuracy. | Regardless of the diversity creation technique used, diversity measures were developed to measure the diversity of a certain technique or perhaps to compare the diversity of two techniques. @cite_48 presented a theoretical analysis of six existing diversity measures: disagreement measure @cite_32 , double fault measure @cite_43 , KW variance @cite_5 , inter-rater agreement @cite_4 , generalized diversity @cite_10 , and measure of difficulty @cite_4 .
The goal was not only to show the underlying relationships between them, but also to relate them to the concept of margin, which is one of the contributing factors to the success of ensemble learning algorithms. | {
"cite_N": [
"@cite_4",
"@cite_48",
"@cite_32",
"@cite_43",
"@cite_5",
"@cite_10"
],
"mid": [
"2150290224",
"2122892819",
"1509256642",
"2058307353",
"1516193414",
"2049861803"
],
"abstract": [
"Preface. Preface to the Second Edition. Preface to the First Edition. 1. An Introduction to Applied Probability. 2. Statistical Inference for a Single Proportion. 3. Assessing Significance in a Fourfold Table. 4. Determining Sample Sizes Needed to Detect a Difference Between Two Proportions. 5. How to Randomize. 6. Comparative Studies: Cross-Sectional, Naturalistic, or Multinomial Sampling. 7. Comparative Studies: Prospective and Retrospective Sampling. 8. Randomized Controlled Trials. 9. The Comparison of Proportions from Several Independent Samples. 10. Combining Evidence from Fourfold Tables. 11. Logistic Regression. 12. Poisson Regression. 13. Analysis of Data from Matched Samples. 14. Regression Models for Matched Samples. 15. Analysis of Correlated Binary Data. 16. Missing Data. 17. Misclassification Errors: Effects, Control, and Adjustment. 18. The Measurement of Interrater Agreement. 19. The Standardization of Rates. Appendix A. Numerical Tables. Appendix B. The Basic Theory of Maximum Likelihood Estimation. Appendix C. Answers to Selected Problems. Author Index. Subject Index.",
"Diversity among the base classifiers is deemed to be important when constructing a classifier ensemble. Numerous algorithms have been proposed to construct a good classifier ensemble by seeking both the accuracy of the base classifiers and the diversity among them. However, there is no generally accepted definition of diversity, and measuring the diversity explicitly is very difficult. Although researchers have designed several experimental studies to compare different diversity measures, usually confusing results were observed. In this paper, we present a theoretical analysis on six existing diversity measures (namely disagreement measure, double fault measure, KW variance, inter-rater agreement, generalized diversity and measure of difficulty), show underlying relationships between them, and relate them to the concept of margin, which is more explicitly related to the success of ensemble learning algorithms. We illustrate why confusing experimental results were observed and show that the discussed diversity measures are naturally ineffective. Our analysis provides a deeper understanding of the concept of diversity, and hence can help design better ensemble learning algorithms.",
"",
"In the field of pattern recognition, the combination of an ensemble of neural networks has been proposed as an approach to the development of high performance image classification systems. However, previous work clearly showed that such image classification systems are effective only if the neural networks forming them make different errors. Therefore, the fundamental need for methods aimed to design ensembles of ‘error-independent’ networks is currently acknowledged. In this paper, an approach to the automatic design of effective neural network ensembles is proposed. Given an initial large set of neural networks, our approach is aimed to select the subset formed by the most error-independent nets. Reported results on the classification of multisensor remote-sensing images show that this approach allows one to design effective neural network ensembles.",
"",
"The topic of this paper is the exploitation of diversity to enhance computer system reliability. It is well established that a diverse system composed of multiple alternative versions is more reliable than any single version alone, and this knowledge has occasionally been exploited in safety-critical applications. However, it is not clear what this property is, nor how the available diversity in a collection of versions is best exploited. We develop, define, illustrate and assess diversity measures, voting strategies for diversity exploitation, and interactions between the two. We take the view that a proper understanding of such issues is required if multiversion software engineering is to be elevated from the current “try it and see” procedure to a systematic technology. In addition, we introduce inductive programming techniques, particularly neural computing, as a cost-effective route to the practical use of multiversion systems outside the demanding requirements of safety-critical systems, i.e. in general software engineering."
]
} |
1503.04055 | 2950198459 | Spreadsheets are widely used within companies and often form the basis for business decisions. Numerous cases are known where incorrect information in spreadsheets has led to incorrect decisions. Such cases underline the relevance of research on the professional use of spreadsheets. Recently a new dataset became available for research, containing over 15,000 business spreadsheets that were extracted from the Enron E-mail Archive. With this dataset, we 1) aim to obtain a thorough understanding of the characteristics of spreadsheets used within companies, and 2) compare the characteristics of the Enron spreadsheets with the EUSES corpus, which is the existing state-of-the-art set of spreadsheets that is frequently used in spreadsheet studies. Our analysis shows that 1) the majority of spreadsheets are not large in terms of worksheets and formulas, do not have a high degree of coupling, and their formulas are relatively simple; 2) the spreadsheets from the EUSES corpus are, with respect to the measured characteristics, quite similar to the Enron spreadsheets. | Most related to our efforts is of course the work of Hermans and Murphy-Hill @cite_10 , which initially presented the Enron spreadsheet corpus. The authors performed a preliminary analysis of some basic characteristics (like number of worksheets, cells, and formulas) of these spreadsheets. While Hermans and Murphy-Hill present a first overview of the spreadsheets, we, in this paper, dive deeper and add additional metrics to measure the degree of coupling and gain more insight into the actual use of functions. Also, for every metric we compared the Enron spreadsheets with the EUSES corpus. By using the Wilcoxon-Mann-Whitney test and calculating Cliff's delta for the different metrics we answer the question of representativeness of the EUSES corpus. | {
"cite_N": [
"@cite_10"
],
"mid": [
"1579695396"
],
"abstract": [
"Spreadsheets are used extensively in business processes around the world and as such, a topic of research interest. Over the past few years, many spreadsheet studies have been performed on the EUSES spreadsheet corpus. While this corpus has served the spreadsheet community well, the spreadsheets it contains are mainly gathered with search engines and as such do not represent spreadsheets used in companies. This paper presents a new dataset, extracted from the Enron Email Archive, containing over 15,000 spreadsheets used within the Enron Corporation. In addition to the spreadsheets, we also present an analysis of the associated emails, where we look into spreadsheet specific email behavior. Our analysis shows that 1) 24% of Enron spreadsheets with at least one formula contain an Excel error, 2) there is little diversity in the functions used in spreadsheets: 76% of spreadsheets in the presented corpus only use the same 15 functions, and 3) the spreadsheets are substantially more smelly than the EUSES corpus, especially in terms of long calculation chains. Regarding the emails, we observe that spreadsheets 1) are a frequent topic of email conversation with 10% of emails either sending or referring spreadsheets and 2) the emails are frequently discussing errors in and updates to spreadsheets."
]
} |
1503.04055 | 2950198459 | Spreadsheets are widely used within companies and often form the basis for business decisions. Numerous cases are known where incorrect information in spreadsheets has led to incorrect decisions. Such cases underline the relevance of research on the professional use of spreadsheets. Recently a new dataset became available for research, containing over 15,000 business spreadsheets that were extracted from the Enron E-mail Archive. With this dataset, we 1) aim to obtain a thorough understanding of the characteristics of spreadsheets used within companies, and 2) compare the characteristics of the Enron spreadsheets with the EUSES corpus, which is the existing state-of-the-art set of spreadsheets that is frequently used in spreadsheet studies. Our analysis shows that 1) the majority of spreadsheets are not large in terms of worksheets and formulas, do not have a high degree of coupling, and their formulas are relatively simple; 2) the spreadsheets from the EUSES corpus are, with respect to the measured characteristics, quite similar to the Enron spreadsheets. | Secondly, there is the EUSES corpus, which was introduced by Fisher and Rothermel in 2005 @cite_7 . Besides EUSES, there are a few other smaller corpora @cite_1 @cite_8 . Unfortunately, none of these corpora were publicly available to include in the analyses of this paper. | {
"cite_N": [
"@cite_1",
"@cite_7",
"@cite_8"
],
"mid": [
"2112338634",
"2135473121",
"1659044373"
],
"abstract": [
"Although spreadsheet programs are used for small \"scratchpad\" applications, they are also used to develop many large applications. In recent years, we have learned a good deal about the errors that people make when they develop spreadsheets. In general, errors seem to occur in a few percent of all cells, meaning that for large spreadsheets, the issue is how many errors there are, not whether an error exists. These error rates, although troubling, are in line with those in programming and other human cognitive domains. In programming, we have learned to follow strict development disciplines to eliminate most errors. Surveys of spreadsheet developers indicate that spreadsheet creation, in contrast, is informal, and few organizations have comprehensive policies for spreadsheet development. Although prescriptive articles have focused on such",
"In recent years several tools and methodologies have been developed to improve the dependability of spreadsheets. However, there has been little evaluation of these dependability devices on spreadsheets in actual use by end users. To assist in the process of evaluating these methodologies, we have assembled a corpus of spreadsheets from a variety of sources. We have ensured that these spreadsheets are suitable for evaluating dependability devices in Microsoft Excel (the most commonly used commercial spreadsheet environment) and have measured a variety of features of these spreadsheets to aid researchers in selecting subsets of the corpus appropriate to their needs.",
"Legacy spreadsheets are both an asset and an enduring problem concerning spreadsheets in business. To make spreadsheets stay alive and remain correct, comprehension of a given spreadsheet is highly important. Visualization techniques should ease the complex and mindblowing challenges of finding structures in a huge set of spreadsheet cells for building an adequate mental model of spreadsheet programs. Since spreadsheet programs are as diverse as the purpose they are serving and as inhomogeneous as their programmers, to find an appropriate representation or visualization technique for every spreadsheet program seems futile. We thus propose different visualization and representation methods that may ease spreadsheet comprehension but should not be applied with all kind of spreadsheet programs. Therefore, this paper proposes to use (complexity) measures as indicators for proper visualization."
]
} |
1503.04251 | 2261127636 | In this paper, we introduce a sophisticated path loss model incorporating both line-of-sight (LoS) and non-line-of-sight (NLoS) transmissions to study their impact on the performance of dense small cell networks (SCNs). Analytical results are obtained for the coverage probability and the area spectral efficiency (ASE), assuming both a general path loss model and a special case with a linear LoS probability function. The performance impact of LoS and NLoS transmissions in dense SCNs in terms of the coverage probability and the ASE is significant, both quantitatively and qualitatively, compared with the previous work that does not differentiate LoS and NLoS transmissions. Our analysis demonstrates that the network coverage probability first increases with the increase of the base station (BS) density, and then decreases as the SCN becomes denser. This decrease further makes the ASE suffer from a slow growth or even a decrease with network densification. The ASE will grow almost linearly as the BS density goes ultra dense. For the practical regime of BS density, the performance results derived from our analysis are distinctively different from previous results, and thus shed new insights on the design and deployment of future dense SCNs. | In stochastic geometry, BS positions are typically modeled as a Homogeneous Poisson Point Process (HPPP) on the plane, and closed-form expressions of the coverage probability can be found for some scenarios in single-tier cellular networks @cite_12 and multi-tier cellular networks @cite_15 @cite_9 . A general treatment of stochastic geometry can be found in @cite_5 . The major conclusion in [4-7] is that neither the number of cells nor the number of cell tiers changes the coverage probability in interference-limited fully-loaded wireless networks. However, these works consider a simplistic path loss model that does not differentiate LoS and NLoS transmissions.
In contrast, in this paper, we consider a more complete path loss model incorporating both LoS and NLoS transmissions to study their impact on the performance of dense SCNs. | {
"cite_N": [
"@cite_9",
"@cite_15",
"@cite_12",
"@cite_5"
],
"mid": [
"2005108639",
"2149170915",
"2150166076",
"631335369"
],
"abstract": [
"Pushing data traffic from cellular to WiFi is an example of inter radio access technology (RAT) offloading. While this clearly alleviates congestion on the over-loaded cellular network, the ultimate potential of such offloading and its effect on overall system performance is not well understood. To address this, we develop a general and tractable model that consists of M different RATs, each deploying up to K different tiers of access points (APs), where each tier differs in transmit power, path loss exponent, deployment density and bandwidth. Each class of APs is modeled as an independent Poisson point process (PPP), with mobile user locations modeled as another independent PPP, all channels further consisting of i.i.d. Rayleigh fading. The distribution of rate over the entire network is then derived for a weighted association strategy, where such weights can be tuned to optimize a particular objective. We show that the optimum fraction of traffic offloaded to maximize SINR coverage is not in general the same as the one that maximizes rate coverage, defined as the fraction of users achieving a given rate.",
"Cellular networks are in a major transition from a carefully planned set of large tower-mounted base-stations (BSs) to an irregular deployment of heterogeneous infrastructure elements that often additionally includes micro, pico, and femtocells, as well as distributed antennas. In this paper, we develop a tractable, flexible, and accurate model for a downlink heterogeneous cellular network (HCN) consisting of K tiers of randomly located BSs, where each tier may differ in terms of average transmit power, supported data rate and BS density. Assuming that a mobile user connects to the strongest candidate BS, that the resulting Signal-to-Interference-plus-Noise-Ratio (SINR) is greater than 1 when in coverage, and Rayleigh fading, we derive an expression for the probability of coverage (equivalently outage) over the entire network under both open and closed access, which assumes a strikingly simple closed-form in the high SINR regime and is accurate down to -4 dB even under weaker assumptions. For external validation, we compare against an actual LTE network (for tier 1) with the other K-1 tiers being modeled as independent Poisson Point Processes. In this case as well, our model is accurate to within 1-2 dB. We also derive the average rate achieved by a randomly located mobile and the average load on each tier of BSs. One interesting observation for interference-limited open access networks is that at a given , adding more tiers and or BSs neither increases nor decreases the probability of coverage or outage when all the tiers have the same target-SINR.",
"Cellular networks are usually modeled by placing the base stations on a grid, with mobile users either randomly scattered or placed deterministically. These models have been used extensively but suffer from being both highly idealized and not very tractable, so complex system-level simulations are used to evaluate coverage outage probability and rate. More tractable models have long been desirable. We develop new general models for the multi-cell signal-to-interference-plus-noise ratio (SINR) using stochastic geometry. Under very general assumptions, the resulting expressions for the downlink SINR CCDF (equivalent to the coverage probability) involve quickly computable integrals, and in some practical special cases can be simplified to common integrals (e.g., the Q-function) or even to simple closed-form expressions. We also derive the mean rate, and then the coverage gain (and mean rate loss) from static frequency reuse. We compare our coverage predictions to the grid model and an actual base station deployment, and observe that the proposed model is pessimistic (a lower bound on coverage) whereas the grid model is optimistic, and that both are about equally accurate. In addition to being more tractable, the proposed model may better capture the increasingly opportunistic and dense placement of base stations in future networks.",
"Covering point process theory, random geometric graphs and coverage processes, this rigorous introduction to stochastic geometry will enable you to obtain powerful, general estimates and bounds of wireless network performance and make good design choices for future wireless architectures and protocols that efficiently manage interference effects. Practical engineering applications are integrated with mathematical theory, with an understanding of probability the only prerequisite. At the same time, stochastic geometry is connected to percolation theory and the theory of random geometric graphs and accompanied by a brief introduction to the R statistical computing language. Combining theory and hands-on analytical techniques with practical examples and exercises, this is a comprehensive guide to the spatial stochastic models essential for modelling and analysis of wireless network performance."
]
} |
1503.04251 | 2261127636 | In this paper, we introduce a sophisticated path loss model incorporating both line-of-sight (LoS) and non-line-of-sight (NLoS) transmissions to study their impact on the performance of dense small cell networks (SCNs). Analytical results are obtained for the coverage probability and the area spectral efficiency (ASE), assuming both a general path loss model and a special case with a linear LoS probability function. The performance impact of LoS and NLoS transmissions in dense SCNs in terms of the coverage probability and the ASE is significant, both quantitatively and qualitatively, compared with the previous work that does not differentiate LoS and NLoS transmissions. Our analysis demonstrates that the network coverage probability first increases with the increase of the base station (BS) density, and then decreases as the SCN becomes denser. This decrease further makes the ASE suffer from a slow growth or even a decrease with network densification. The ASE will grow almost linearly as the BS density goes ultra dense. For the practical regime of BS density, the performance results derived from our analysis are distinctively different from previous results, and thus shed new insights on the design and deployment of future dense SCNs. | Notions that are similar to LoS and NLoS transmissions have been previously explored in the building blockage study in @cite_1 and the indoor communication network in @cite_10 . In @cite_1 , the authors proposed a microscopic performance analysis framework to model the random blockage effect of buildings, and analyze its impact on cellular network performance. Further refinement and verification of the proposed model in @cite_1 is needed, especially to consider reflections, which are an important contributor to coverage in urban areas. In @cite_10 , the authors presented an analytical study of indoor propagation through walls, and showed that the throughput does not scale linearly with the density of small cells.
Different from @cite_10 , in this paper, we investigate outdoor dense SCNs. | {
"cite_N": [
"@cite_10",
"@cite_1"
],
"mid": [
"2171856200",
"2073252511"
],
"abstract": [
"Cell splitting frequency reuse is a fundamental characteristic of cellular networks. In macrocellular networks, where pathloss is governed by the distance-to-a-power law, the SINR distribution is invariant to scale, i.e. the spectral efficiency per base (bps Hz base) remains constant, and hence the capacity increases linearly with the density of cells. We show for indoor networks, where signals suffer exponential loss in addition to the inverse square distance law, the median SINR decreases with the density of cells. This results in the capacity being proportional to the square-root of the improvement in the density of cells.",
"Large-scale blockages such as buildings affect the performance of urban cellular networks, especially at higher frequencies. Unfortunately, such blockage effects are either neglected or characterized by oversimplified models in the analysis of cellular networks. Leveraging concepts from random shape theory, this paper proposes a mathematical framework to model random blockages and analyze their impact on cellular network performance. Random buildings are modeled as a process of rectangles with random sizes and orientations whose centers form a Poisson point process on the plane. The distribution of the number of blockages in a link is proven to be a Poisson random variable with parameter dependent on the length of the link. Our analysis shows that the probability that a link is not intersected by any blockages decays exponentially with the link length. A path loss model that incorporates the blockage effects is also proposed, which matches experimental trends observed in prior work. The model is applied to analyze the performance of cellular networks in urban areas with the presence of buildings, in terms of connectivity, coverage probability, and average rate. Our results show that the base station density should scale superlinearly with the blockage density to maintain the network connectivity. Our analyses also show that while buildings may block the desired signal, they may still have a positive impact on the SIR coverage probability and achievable rate since they can block significantly more interference."
]
} |
1503.04251 | 2261127636 | In this paper, we introduce a sophisticated path loss model incorporating both line-of-sight (LoS) and non-line-of-sight (NLoS) transmissions to study their impact on the performance of dense small cell networks (SCNs). Analytical results are obtained for the coverage probability and the area spectral efficiency (ASE), assuming both a general path loss model and a special case with a linear LoS probability function. The performance impact of LoS and NLoS transmissions in dense SCNs in terms of the coverage probability and the ASE is significant, both quantitatively and qualitatively, compared with the previous work that does not differentiate LoS and NLoS transmissions. Our analysis demonstrates that the network coverage probability first increases with the increase of the base station (BS) density, and then decreases as the SCN becomes denser. This decrease further makes the ASE suffer from a slow growth or even a decrease with network densification. The ASE will grow almost linearly as the BS density goes ultra dense. For practical regime of the BS density, the performance results derived from our analysis are distinctively different from previous results, and thus shed new insights on the design and deployment of future dense SCNs. | @cite_14 , the authors assumed a multi-slope piece-wise path loss function. Specifically, assuming that the distance between a BS and a UE is denoted by @math in km, then the path loss associated with distance @math can be formulated as @math , where the path loss function @math is segmented into @math pieces, with each piece and each segment break point denoted by @math and @math , respectively. | {
"cite_N": [
"@cite_14"
],
"mid": [
"2134686390"
],
"abstract": [
"Existing cellular network analyses, and even simulations, typically use the standard path loss model where received power decays like @math over a distance @math . This standard path loss model is quite idealized, and in most scenarios the path loss exponent @math is itself a function of @math , typically an increasing one. Enforcing a single path loss exponent can lead to orders of magnitude differences in average received and interference powers versus the true values. In this paper, we study multi-slope path loss models, where different distance ranges are subject to different path loss exponents. We focus on the dual-slope path loss function, which is a piece-wise power law and continuous and accurately approximates many practical scenarios. We derive the distributions of SIR, SNR, and finally SINR before finding the potential throughput scaling, which provides insight on the observed cell-splitting rate gain. The exact mathematical results show that the SIR monotonically decreases with network density, while the converse is true for SNR, and thus the network coverage probability in terms of SINR is maximized at some finite density. With ultra-densification (network density goes to infinity), there exists a phase transition in the near-field path loss exponent @math : if @math , unbounded potential throughput can be achieved asymptotically; if @math , ultra-densification leads in the extreme case to zero throughput."
]
} |
1503.04251 | 2261127636 | In this paper, we introduce a sophisticated path loss model incorporating both line-of-sight (LoS) and non-line-of-sight (NLoS) transmissions to study their impact on the performance of dense small cell networks (SCNs). Analytical results are obtained for the coverage probability and the area spectral efficiency (ASE), assuming both a general path loss model and a special case with a linear LoS probability function. The performance impact of LoS and NLoS transmissions in dense SCNs in terms of the coverage probability and the ASE is significant, both quantitatively and qualitatively, compared with the previous work that does not differentiate LoS and NLoS transmissions. Our analysis demonstrates that the network coverage probability first increases with the increase of the base station (BS) density, and then decreases as the SCN becomes denser. This decrease further makes the ASE suffer from a slow growth or even a decrease with network densification. The ASE will grow almost linearly as the BS density goes ultra dense. For practical regime of the BS density, the performance results derived from our analysis are distinctively different from previous results, and thus shed new insights on the design and deployment of future dense SCNs. | @cite_17 , the authors treated the event of LoS or NLoS transmission as a probabilistic event for a millimeter wave communication scenario. Specifically, the path loss associated with distance @math is formulated as @math , where @math , @math and @math are the path loss function for the case of LoS transmission, the path loss function for the case of NLoS transmission and the LoS probability function, respectively. To simplify the analysis, the LoS probability function @math was approximated by a moment matched equivalent step function in @cite_17 . | {
"cite_N": [
"@cite_17"
],
"mid": [
"2031858701"
],
"abstract": [
"Millimeter wave (mmWave) holds promise as a carrier frequency for fifth generation cellular networks. Because mmWave signals are sensitive to blockage, prior models for cellular networks operated in the ultra high frequency (UHF) band do not apply to analyze mmWave cellular networks directly. Leveraging concepts from stochastic geometry, this paper proposes a general framework to evaluate the coverage and rate performance in mmWave cellular networks. Using a distance-dependent line-of-site (LOS) probability function, the locations of the LOS and non-LOS base stations are modeled as two independent non-homogeneous Poisson point processes, to which different path loss laws are applied. Based on the proposed framework, expressions for the signal-to-noise-and-interference ratio (SINR) and rate coverage probability are derived. The mmWave coverage and rate performance are examined as a function of the antenna geometry and base station density. The case of dense networks is further analyzed by applying a simplified system model, in which the LOS region of a user is approximated as a fixed LOS ball. The results show that dense mmWave networks can achieve comparable coverage and much higher data rates than conventional UHF cellular systems, despite the presence of blockages. The results suggest that the cell size to achieve the optimal SINR scales with the average size of the area that is LOS to a user."
]
} |
1503.04251 | 2261127636 | In this paper, we introduce a sophisticated path loss model incorporating both line-of-sight (LoS) and non-line-of-sight (NLoS) transmissions to study their impact on the performance of dense small cell networks (SCNs). Analytical results are obtained for the coverage probability and the area spectral efficiency (ASE), assuming both a general path loss model and a special case with a linear LoS probability function. The performance impact of LoS and NLoS transmissions in dense SCNs in terms of the coverage probability and the ASE is significant, both quantitatively and qualitatively, compared with the previous work that does not differentiate LoS and NLoS transmissions. Our analysis demonstrates that the network coverage probability first increases with the increase of the base station (BS) density, and then decreases as the SCN becomes denser. This decrease further makes the ASE suffer from a slow growth or even a decrease with network densification. The ASE will grow almost linearly as the BS density goes ultra dense. For practical regime of the BS density, the performance results derived from our analysis are distinctively different from previous results, and thus shed new insights on the design and deployment of future dense SCNs. | @cite_8 , the authors used the same path loss model as in ) and considered the approximation of @math as an exponentially decreasing function. The results in @cite_8 are less tractable than those in @cite_14 and @cite_17 . This is because the exponentially decreasing LoS probability function, albeit more practical than the step function in @cite_17 , is still difficult to deal with in the analysis. | {
"cite_N": [
"@cite_14",
"@cite_17",
"@cite_8"
],
"mid": [
"2134686390",
"2031858701",
"1533983785"
],
"abstract": [
"Existing cellular network analyses, and even simulations, typically use the standard path loss model where received power decays like @math over a distance @math . This standard path loss model is quite idealized, and in most scenarios the path loss exponent @math is itself a function of @math , typically an increasing one. Enforcing a single path loss exponent can lead to orders of magnitude differences in average received and interference powers versus the true values. In this paper, we study multi-slope path loss models, where different distance ranges are subject to different path loss exponents. We focus on the dual-slope path loss function, which is a piece-wise power law and continuous and accurately approximates many practical scenarios. We derive the distributions of SIR, SNR, and finally SINR before finding the potential throughput scaling, which provides insight on the observed cell-splitting rate gain. The exact mathematical results show that the SIR monotonically decreases with network density, while the converse is true for SNR, and thus the network coverage probability in terms of SINR is maximized at some finite density. With ultra-densification (network density goes to infinity), there exists a phase transition in the near-field path loss exponent @math : if @math , unbounded potential throughput can be achieved asymptotically; if @math , ultra-densification leads in the extreme case to zero throughput.",
"Millimeter wave (mmWave) holds promise as a carrier frequency for fifth generation cellular networks. Because mmWave signals are sensitive to blockage, prior models for cellular networks operated in the ultra high frequency (UHF) band do not apply to analyze mmWave cellular networks directly. Leveraging concepts from stochastic geometry, this paper proposes a general framework to evaluate the coverage and rate performance in mmWave cellular networks. Using a distance-dependent line-of-site (LOS) probability function, the locations of the LOS and non-LOS base stations are modeled as two independent non-homogeneous Poisson point processes, to which different path loss laws are applied. Based on the proposed framework, expressions for the signal-to-noise-and-interference ratio (SINR) and rate coverage probability are derived. The mmWave coverage and rate performance are examined as a function of the antenna geometry and base station density. The case of dense networks is further analyzed by applying a simplified system model, in which the LOS region of a user is approximated as a fixed LOS ball. The results show that dense mmWave networks can achieve comparable coverage and much higher data rates than conventional UHF cellular systems, despite the presence of blockages. The results suggest that the cell size to achieve the optimal SINR scales with the average size of the area that is LOS to a user.",
"The need to carry out analytical studies of wireless systems often motivates the usage of simplified models which, despite their tractability, can easily lead to an overestimation of the achievable performance. In the case of dense small cells networks, the standard single slope path-loss model has been shown to provide interesting, but supposedly too optimistic, properties such as the invariance of the outage coverage probability and of the spectral efficiency to the base station density. This paper seeks to explore the performance of dense small cells networks when a more accurate path-loss model is taken into account. We first propose a stochastic geometry based framework for small cell networks where the signal propagation accounts for both the Line-of-Sight (LOS) and Non-Line-Of-Sight (NLOS) components, such as the model provided by the 3GPP for evaluation of pico-cells in Heterogeneous Networks. We then study the performance of these networks and we show the dependency of some metrics such as the outage coverage probability, the spectral efficiency and Area Spectral Efficiency (ASE) on the base station density and on the LOS likelihood of the propagation environment. Specifically, we show that, with LOS NLOS propagation, dense networks still achieve large ASE gain but, at the same time, suffer from high outage probability."
]
} |
1503.04115 | 1924828008 | Sparse code formation in the primary visual cortex (V1) has been an inspiration for many state-of-the-art visual recognition systems. To stimulate this behavior, networks are trained under mathematical constraint of sparsity or selectivity. In this paper, the authors exploit another approach which uses lateral interconnections in feature learning networks. However, instead of adding direct lateral interconnections among neurons, we introduce an inhibitory layer placed right after normal encoding layer. This idea overcomes the challenge of computational cost and complexity on lateral networks while preserving crucial objective of sparse code formation. To demonstrate this idea, we use sparse autoencoder as normal encoding layer and apply inhibitory layer. Early experiments in visual recognition show relative improvements over traditional approach on CIFAR-10 dataset. Moreover, simple installment and training process using Hebbian rule allow inhibitory layer to be integrated into existing networks, which enables further analysis in the future. | Lateral connections have been widely used in models in neuroscience. Our work is loosely based on E-I net @cite_4 . In E-I net, simple cells are divided into two types of neurons, excitatory neurons and inhibitory neurons. One inhibitory cell sends an amount of inhibitory current directly to all excitatory simple cells and other inhibitory cells. Inhibitory cells predict the redundant part of the network activity, thus decorrelate the activity of the excitatory cells by suppressing redundant spiking activity. In our networks, relationship between encoding cells is stored in connections between two layers. Inhibitory cells only do the computation based on encoding signals and the corresponding weights. In one recent work that shares the idea of local computation @cite_0 , spiking signal is chosen using the winner-take-all rule while the rest is grounded to 0.
Moreover, the competition range is predefined. In other works aiming at learning structural features such as structural sparse coding @cite_10 and topographical ICA @cite_1 , structural order emerges through regularization. Meanwhile, with an inhibitory layer, neighborhood rather than order is obtained through ad hoc optimization. | {
"cite_N": [
"@cite_0",
"@cite_10",
"@cite_1",
"@cite_4"
],
"mid": [
"2099049980",
"2161977692",
"",
"2025539936"
],
"abstract": [
"Local competition among neighboring neurons is common in biological neural networks (NNs). In this paper, we apply the concept to gradient-based, backprop-trained artificial multilayer NNs. NNs with competing linear units tend to outperform those with non-competing nonlinear units, and avoid catastrophic forgetting when training sets change over time.",
"This work describes a conceptually simple method for structured sparse coding and dictionary design. Supposing a dictionary with K atoms, we introduce a structure as a set of penalties or interactions between every pair of atoms. We describe modifications of standard sparse coding algorithms for inference in this setting, and describe experiments showing that these algorithms are efficient. We show that interesting dictionaries can be learned for interactions that encode tree structures or locally connected structures. Finally, we show that our framework allows us to learn the values of the interactions from the data, rather than having them pre-specified.",
"",
"Sparse coding models of natural scenes can account for several physiological properties of primary visual cortex (V1), including the shapes of simple cell receptive fields (RFs) and the highly kurtotic firing rates of V1 neurons. Current spiking network models of pattern learning and sparse coding require direct inhibitory connections between the excitatory simple cells, in conflict with the physiological distinction between excitatory (glutamatergic) and inhibitory (GABAergic) neurons (Dale's Law). At the same time, the computational role of inhibitory neurons in cortical microcircuit function has yet to be fully explained. Here we show that adding a separate population of inhibitory neurons to a spiking model of V1 provides conformance to Dale's Law, proposes a computational role for at least one class of interneurons, and accounts for certain observed physiological properties in V1. When trained on natural images, this excitatory–inhibitory spiking circuit learns a sparse code with Gabor-like RFs as found in V1 using only local synaptic plasticity rules. The inhibitory neurons enable sparse code formation by suppressing predictable spikes, which actively decorrelates the excitatory population. The model predicts that only a small number of inhibitory cells is required relative to excitatory cells and that excitatory and inhibitory input should be correlated, in agreement with experimental findings in visual cortex. We also introduce a novel local learning rule that measures stimulus-dependent correlations between neurons to support “explaining away” mechanisms in neural coding."
]
} |
1503.04030 | 2963977506 | We consider the problem of maximizing the energy efficiency (EE) for a MIMO interference channel (IC), with the power constraint on each link. To obtain totally distributed solutions, this problem is formulated as a noncooperative game. We show that this game always admits a Nash equilibrium (NE). Importantly, the sufficient condition that one can check to guarantee the uniqueness of the NE is derived. To reach the NE of this game, we provide a totally distributed EE algorithm, in which each player employs fractional programming to update his own solution. These updates can be performed in a completely distributed and asynchronous fashion. Sufficient conditions that guarantee the convergence of the algorithm have been given as well. Simulation results show that the proposed algorithm converges fast and significantly outperforms the existing algorithms in terms of the sum-EE or the sum-rate. | In contrast to most of the above-cited papers which focus on the (weighted) sum SE problem, in this work we consider the EE maximization problem. For SE optimization problems, it is known that all the transmitters use full power during transmission in order to maximize their own SE. Based on this fact, the best response strategy at the NE can be written as a closed-form water-filling solution, which can be interpreted as a projection onto a convex and closed set. This interpretation enables the authors to derive the uniqueness of the game's NE @cite_49 . However, the EE maximization problem cannot be handled by these methodologies, since the transmitters in fact use only a portion of their power, instead of full power, to achieve energy-efficient transmission. | {
"cite_N": [
"@cite_49"
],
"mid": [
"2118809092"
],
"abstract": [
"This paper considers the noncooperative maximization of mutual information in the vector Gaussian interference channel in a fully distributed fashion via game theory. This problem has been widely studied in a number of works during the past decade for frequency-selective channels, and recently for the more general multiple-input multiple-output (MIMO) case, for which the state-of-the-art results are valid only for nonsingular square channel matrices. Surprisingly, these results do not hold true when the channel matrices are rectangular and/or rank-deficient matrices. The goal of this paper is to provide a complete characterization of the MIMO game for arbitrary channel matrices, in terms of conditions guaranteeing both the uniqueness of the Nash equilibrium and the convergence of asynchronous distributed iterative waterfilling algorithms. Our analysis hinges on new technical intermediate results, such as a new expression for the MIMO waterfilling projection valid (also) for singular matrices, a mean-value theorem for complex matrix-valued functions, and a general contraction theorem for the multiuser MIMO waterfilling mapping valid for arbitrary channel matrices. The quite surprising result is that uniqueness/convergence conditions in the case of tall (possibly singular) channel matrices are more restrictive than those required in the case of (full rank) fat channel matrices. We also propose a modified game and algorithm with milder conditions for the uniqueness of the equilibrium and convergence, and virtually the same performance (in terms of Nash equilibria) of the original game."
]
} |
1503.03832 | 2096733369 | Despite significant recent advances in the field of face recognition [10, 14, 15, 17], implementing face verification and recognition efficiently at scale presents serious challenges to current approaches. In this paper we present a system, called FaceNet, that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity. Once this space has been produced, tasks such as face recognition, verification and clustering can be easily implemented using standard techniques with FaceNet embeddings as feature vectors. | Similarly to other recent works which employ deep networks @cite_10 @cite_4 , our approach is a purely data driven method which learns its representation directly from the pixels of the face. Rather than using engineered features, we use a large dataset of labelled faces to attain the appropriate invariances to pose, illumination, and other variational conditions. | {
"cite_N": [
"@cite_10",
"@cite_4"
],
"mid": [
"2952304308",
"2145287260"
],
"abstract": [
"This paper designs a high-performance deep convolutional network (DeepID2+) for face recognition. It is learned with the identification-verification supervisory signal. By increasing the dimension of hidden representations and adding supervision to early convolutional layers, DeepID2+ achieves new state-of-the-art on LFW and YouTube Faces benchmarks. Through empirical studies, we have discovered three properties of its deep neural activations critical for the high performance: sparsity, selectiveness and robustness. (1) It is observed that neural activations are moderately sparse. Moderate sparsity maximizes the discriminative power of the deep net as well as the distance between images. It is surprising that DeepID2+ still can achieve high recognition accuracy even after the neural responses are binarized. (2) Its neurons in higher layers are highly selective to identities and identity-related attributes. We can identify different subsets of neurons which are either constantly excited or inhibited when different identities or attributes are present. Although DeepID2+ is not taught to distinguish attributes during training, it has implicitly learned such high-level concepts. (3) It is much more robust to occlusions, although occlusion patterns are not included in the training set.",
"In modern face recognition, the conventional pipeline consists of four stages: detect => align => represent => classify. We revisit both the alignment step and the representation step by employing explicit 3D face modeling in order to apply a piecewise affine transformation, and derive a face representation from a nine-layer deep neural network. This deep network involves more than 120 million parameters using several locally connected layers without weight sharing, rather than the standard convolutional layers. Thus we trained it on the largest facial dataset to-date, an identity labeled dataset of four million facial images belonging to more than 4, 000 identities. The learned representations coupling the accurate model-based alignment with the large facial database generalize remarkably well to faces in unconstrained environments, even with a simple classifier. Our method reaches an accuracy of 97.35 on the Labeled Faces in the Wild (LFW) dataset, reducing the error of the current state of the art by more than 27 , closely approaching human-level performance."
]
} |
1503.03832 | 2096733369 | Despite significant recent advances in the field of face recognition [10, 14, 15, 17], implementing face verification and recognition efficiently at scale presents serious challenges to current approaches. In this paper we present a system, called FaceNet, that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity. Once this space has been produced, tasks such as face recognition, verification and clustering can be easily implemented using standard techniques with FaceNet embeddings as feature vectors. | In this paper we explore two different deep network architectures that have been recently used to great success in the computer vision community. Both are deep convolutional networks @cite_20 @cite_0 . The first architecture is based on the Zeiler & Fergus @cite_22 model which consists of multiple interleaved layers of convolutions, non-linear activations, local response normalizations, and max pooling layers. We additionally add several @math convolution layers inspired by the work of @cite_19 . The second architecture is based on the model of Szegedy et al., which was recently used as the winning approach for ImageNet 2014 @cite_16 . These networks use mixed layers that run several different convolutional and pooling layers in parallel and concatenate their responses. We have found that these models can reduce the number of parameters by up to 20 times and have the potential to reduce the number of FLOPS required for comparable performance. | {
"cite_N": [
"@cite_22",
"@cite_0",
"@cite_19",
"@cite_16",
"@cite_20"
],
"mid": [
"2952186574",
"1498436455",
"",
"2950179405",
"2147800946"
],
"abstract": [
"Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we address both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. We also perform an ablation study to discover the performance contribution from different model layers. This enables us to find model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets.",
"We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal ‘hidden’ units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure1.",
"",
"We propose a deep convolutional neural network architecture codenamed \"Inception\", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.",
"The ability of learning networks to generalize can be greatly enhanced by providing constraints from the task domain. This paper demonstrates how such constraints can be integrated into a backpropagation network through the architecture of the network. This approach has been successfully applied to the recognition of handwritten zip code digits provided by the U.S. Postal Service. A single network learns the entire recognition operation, going from the normalized image of the character to the final classification."
]
} |
1503.03832 | 2096733369 | Despite significant recent advances in the field of face recognition [10, 14, 15, 17], implementing face verification and recognition efficiently at scale presents serious challenges to current approaches. In this paper we present a system, called FaceNet, that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity. Once this space has been produced, tasks such as face recognition, verification and clustering can be easily implemented using standard techniques with FaceNet embeddings as feature vectors. | The works of @cite_10 @cite_4 @cite_12 all employ a complex system of multiple stages, that combines the output of a deep convolutional network with PCA for dimensionality reduction and an SVM for classification. | {
"cite_N": [
"@cite_10",
"@cite_12",
"@cite_4"
],
"mid": [
"2952304308",
"1780066064",
"2145287260"
],
"abstract": [
"This paper designs a high-performance deep convolutional network (DeepID2+) for face recognition. It is learned with the identification-verification supervisory signal. By increasing the dimension of hidden representations and adding supervision to early convolutional layers, DeepID2+ achieves new state-of-the-art on LFW and YouTube Faces benchmarks. Through empirical studies, we have discovered three properties of its deep neural activations critical for the high performance: sparsity, selectiveness and robustness. (1) It is observed that neural activations are moderately sparse. Moderate sparsity maximizes the discriminative power of the deep net as well as the distance between images. It is surprising that DeepID2+ still can achieve high recognition accuracy even after the neural responses are binarized. (2) Its neurons in higher layers are highly selective to identities and identity-related attributes. We can identify different subsets of neurons which are either constantly excited or inhibited when different identities or attributes are present. Although DeepID2+ is not taught to distinguish attributes during training, it has implicitly learned such high-level concepts. (3) It is much more robust to occlusions, although occlusion patterns are not included in the training set.",
"Face images in the wild undergo large intra-personal variations, such as poses, illuminations, occlusions, and low resolutions, which cause great challenges to face-related applications. This paper addresses this challenge by proposing a new deep learning framework that can recover the canonical view of face images. It dramatically reduces the intra-person variances, while maintaining the inter-person discriminativeness. Unlike the existing face reconstruction methods that were either evaluated in controlled 2D environment or employed 3D information, our approach directly learns the transformation from the face images with a complex set of variations to their canonical views. At the training stage, to avoid the costly process of labeling canonical-view images from the training set by hand, we have devised a new measurement to automatically select or synthesize a canonical-view image for each identity. As an application, this face recovery approach is used for face verification. Facial features are learned from the recovered canonical-view face images by using a facial component-based convolutional neural network. Our approach achieves the state-of-the-art performance on the LFW dataset.",
"In modern face recognition, the conventional pipeline consists of four stages: detect => align => represent => classify. We revisit both the alignment step and the representation step by employing explicit 3D face modeling in order to apply a piecewise affine transformation, and derive a face representation from a nine-layer deep neural network. This deep network involves more than 120 million parameters using several locally connected layers without weight sharing, rather than the standard convolutional layers. Thus we trained it on the largest facial dataset to-date, an identity labeled dataset of four million facial images belonging to more than 4,000 identities. The learned representations coupling the accurate model-based alignment with the large facial database generalize remarkably well to faces in unconstrained environments, even with a simple classifier. Our method reaches an accuracy of 97.35% on the Labeled Faces in the Wild (LFW) dataset, reducing the error of the current state of the art by more than 27%, closely approaching human-level performance."
]
} |
1503.03832 | 2096733369 | Despite significant recent advances in the field of face recognition [10, 14, 15, 17], implementing face verification and recognition efficiently at scale presents serious challenges to current approaches. In this paper we present a system, called FaceNet, that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity. Once this space has been produced, tasks such as face recognition, verification and clustering can be easily implemented using standard techniques with FaceNet embeddings as feature vectors. | Zhenyao et al. @cite_12 employ a deep network to "warp" faces into a canonical frontal view and then learn a CNN that classifies each face as belonging to a known identity. For face verification, PCA on the network output in conjunction with an ensemble of SVMs is used. | {
"cite_N": [
"@cite_12"
],
"mid": [
"1780066064"
],
"abstract": [
"Face images in the wild undergo large intra-personal variations, such as poses, illuminations, occlusions, and low resolutions, which cause great challenges to face-related applications. This paper addresses this challenge by proposing a new deep learning framework that can recover the canonical view of face images. It dramatically reduces the intra-person variances, while maintaining the inter-person discriminativeness. Unlike the existing face reconstruction methods that were either evaluated in controlled 2D environment or employed 3D information, our approach directly learns the transformation from the face images with a complex set of variations to their canonical views. At the training stage, to avoid the costly process of labeling canonical-view images from the training set by hand, we have devised a new measurement to automatically select or synthesize a canonical-view image for each identity. As an application, this face recovery approach is used for face verification. Facial features are learned from the recovered canonical-view face images by using a facial component-based convolutional neural network. Our approach achieves the state-of-the-art performance on the LFW dataset."
]
} |
1503.03832 | 2096733369 | Despite significant recent advances in the field of face recognition [10, 14, 15, 17], implementing face verification and recognition efficiently at scale presents serious challenges to current approaches. In this paper we present a system, called FaceNet, that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity. Once this space has been produced, tasks such as face recognition, verification and clustering can be easily implemented using standard techniques with FaceNet embeddings as feature vectors. | Taigman et al. @cite_4 propose a multi-stage approach that aligns faces to a general 3D shape model. A multi-class network is trained to perform the face recognition task on over four thousand identities. The authors also experimented with a so-called Siamese network where they directly optimize the @math -distance between two face features. Their best performance on LFW reaches 99.47%. Both PCA and a Joint Bayesian model @cite_8 that effectively correspond to a linear transform in the embedding space are employed. Their method does not require explicit 2D/3D alignment. The networks are trained by using a combination of classification and verification loss. The verification loss is similar to the triplet loss we employ @cite_14 @cite_3 , in that it minimizes the @math -distance between faces of the same identity and enforces a margin between the distance of faces of different identities. The main difference is that only pairs of images are compared, whereas the triplet loss encourages a relative distance constraint. | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_3",
"@cite_8"
],
"mid": [
"2118393783",
"2145287260",
"2106053110",
"170472577"
],
"abstract": [
"This paper presents a method for learning a distance metric from relative comparison such as \"A is closer to B than A is to C\". Taking a Support Vector Machine (SVM) approach, we develop an algorithm that provides a flexible way of describing qualitative training data as a set of constraints. We show that such constraints lead to a convex quadratic programming problem that can be solved by adapting standard methods for SVM training. We empirically evaluate the performance and the modelling flexibility of the algorithm on a collection of text documents.",
"In modern face recognition, the conventional pipeline consists of four stages: detect => align => represent => classify. We revisit both the alignment step and the representation step by employing explicit 3D face modeling in order to apply a piecewise affine transformation, and derive a face representation from a nine-layer deep neural network. This deep network involves more than 120 million parameters using several locally connected layers without weight sharing, rather than the standard convolutional layers. Thus we trained it on the largest facial dataset to-date, an identity labeled dataset of four million facial images belonging to more than 4,000 identities. The learned representations coupling the accurate model-based alignment with the large facial database generalize remarkably well to faces in unconstrained environments, even with a simple classifier. Our method reaches an accuracy of 97.35% on the Labeled Faces in the Wild (LFW) dataset, reducing the error of the current state of the art by more than 27%, closely approaching human-level performance.",
"The accuracy of k-nearest neighbor (kNN) classification depends significantly on the metric used to compute distances between different examples. In this paper, we show how to learn a Mahalanobis distance metric for kNN classification from labeled examples. The Mahalanobis metric can equivalently be viewed as a global linear transformation of the input space that precedes kNN classification using Euclidean distances. In our approach, the metric is trained with the goal that the k-nearest neighbors always belong to the same class while examples from different classes are separated by a large margin. As in support vector machines (SVMs), the margin criterion leads to a convex optimization based on the hinge loss. Unlike learning in SVMs, however, our approach requires no modification or extension for problems in multiway (as opposed to binary) classification. In our framework, the Mahalanobis distance metric is obtained as the solution to a semidefinite program. On several data sets of varying size and difficulty, we find that metrics trained in this way lead to significant improvements in kNN classification. Sometimes these results can be further improved by clustering the training examples and learning an individual metric within each cluster. We show how to learn and combine these local metrics in a globally integrated manner.",
"In this paper, we revisit the classical Bayesian face recognition method by Baback and propose a new joint formulation. The classical Bayesian method models the appearance difference between two faces. We observe that this \"difference\" formulation may reduce the separability between classes. Instead, we model two faces jointly with an appropriate prior on the face representation. Our joint formulation leads to an EM-like model learning at the training time and an efficient, closed-formed computation at the test time. On extensive experimental evaluations, our method is superior to the classical Bayesian face and many other supervised approaches. Our method achieved 92.4% test accuracy on the challenging Labeled Face in Wild (LFW) dataset. Comparing with current best commercial system, we reduced the error rate by 10%."
]
} |
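The contrast drawn in the row above between a pair-verification loss and a triplet loss can be made concrete with a small sketch (NumPy; the function names, the squared-L2 choice, and the margin values are illustrative assumptions, not the papers' exact formulations):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Relative constraint: the same-identity distance must beat the
    # different-identity distance by at least `margin` in each triplet.
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)  # squared L2, same identity
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)  # squared L2, different identity
    return float(np.mean(np.maximum(d_pos - d_neg + margin, 0.0)))

def pair_verification_loss(x1, x2, same, margin=1.0):
    # Absolute constraint per pair: pull matching pairs together and
    # push non-matching pairs beyond a fixed margin (contrastive style).
    d = np.sqrt(np.sum((x1 - x2) ** 2, axis=-1))
    per_pair = np.where(same, d ** 2, np.maximum(margin - d, 0.0) ** 2)
    return float(np.mean(per_pair))
```

The pair loss penalizes absolute distances of individual pairs, while the triplet form only constrains the ordering of distances within each triplet, which is the "relative distance constraint" distinction made in the text.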
1503.03832 | 2096733369 | Despite significant recent advances in the field of face recognition [10, 14, 15, 17], implementing face verification and recognition efficiently at scale presents serious challenges to current approaches. In this paper we present a system, called FaceNet, that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity. Once this space has been produced, tasks such as face recognition, verification and clustering can be easily implemented using standard techniques with FaceNet embeddings as feature vectors. | A similar loss to the one used here was explored in Wang et al. @cite_21 for ranking images by semantic and visual similarity. | {
"cite_N": [
"@cite_21"
],
"mid": [
"1975517671"
],
"abstract": [
"Learning fine-grained image similarity is a challenging task. It needs to capture between-class and within-class image differences. This paper proposes a deep ranking model that employs deep learning techniques to learn similarity metric directly from images. It has higher learning capability than models based on hand-crafted features. A novel multiscale network structure has been developed to describe the images effectively. An efficient triplet sampling algorithm is also proposed to learn the model with distributed asynchronized stochastic gradient. Extensive experiments show that the proposed algorithm outperforms models based on hand-crafted visual features and deep classification models."
]
} |
1503.03621 | 1736104063 | We study the complementary behaviors of external and internal examples in image restoration, and are motivated to formulate a composite dictionary design framework. The composite dictionary consists of the global part learned from external examples, and the sample-specific part learned from internal examples. The dictionary atoms in both parts are further adaptively weighted to emphasize their model statistics. Experiments demonstrate that the joint utilization of external and internal examples leads to substantial improvements, with successful applications in image denoising and super resolution. | The problem under investigation resembles a general problem in SC-based classification: how to adaptively build the relationship between dictionary atoms and class labels? Based on predefined relationships, current supervised dictionary learning (DL) methods are categorized as learning either a dictionary shared by all classes, which may be compact but not sufficiently discriminative @cite_18 , or a class-specific dictionary with the opposite properties @cite_3 . In @cite_8 , the authors jointly learned a composite dictionary combining class-specific and shared dictionary atoms, with a latent matrix indicating the relationship between dictionary atoms and labels. | {
"cite_N": [
"@cite_18",
"@cite_3",
"@cite_8"
],
"mid": [
"",
"2082855665",
"2088843485"
],
"abstract": [
"",
"Sparse signal models have been the focus of much recent research, leading to (or improving upon) state-of-the-art results in signal, image, and video restoration. This article extends this line of research into a novel framework for local image discrimination tasks, proposing an energy formulation with both sparse reconstruction and class discrimination components, jointly optimized during dictionary learning. This approach improves over the state of the art in texture segmentation experiments using the Brodatz database, and it paves the way for a novel scene analysis and recognition framework based on simultaneously learning discriminative and reconstructive dictionaries. Preliminary results in this direction using examples from the Pascal VOC06 and Graz02 datasets are presented as well.",
"Dictionary learning (DL) for sparse coding has shown promising results in classification tasks, while how to adaptively build the relationship between dictionary atoms and class labels is still an important open question. The existing dictionary learning approaches simply fix a dictionary atom to be either class-specific or shared by all classes beforehand, ignoring that the relationship needs to be updated during DL. To address this issue, in this paper we propose a novel latent dictionary learning (LDL) method to learn a discriminative dictionary and build its relationship to class labels adaptively. Each dictionary atom is jointly learned with a latent vector, which associates this atom to the representation of different classes. More specifically, we introduce a latent representation model, in which discrimination of the learned dictionary is exploited via minimizing the within-class scatter of coding coefficients and the latent-value weighted dictionary coherence. The optimal solution is efficiently obtained by the proposed solving algorithm. Correspondingly, a latent sparse representation based classifier is also presented. Experimental results demonstrate that our algorithm outperforms many recently proposed sparse representation and dictionary learning approaches for action, gender and face recognition."
]
} |
1503.03621 | 1736104063 | We study the complementary behaviors of external and internal examples in image restoration, and are motivated to formulate a composite dictionary design framework. The composite dictionary consists of the global part learned from external examples, and the sample-specific part learned from internal examples. The dictionary atoms in both parts are further adaptively weighted to emphasize their model statistics. Experiments demonstrate that the joint utilization of external and internal examples leads to substantial improvements, with successful applications in image denoising and super resolution. | In analogy to the classification case, reconstruction-purpose dictionaries have been built from either external or internal examples. External example-based methods are known for their capability to produce plausible image appearances. However, there is no guarantee that an arbitrary input patch can be well matched or represented by a pre-fixed external set. When there is rarely any match for the input, external examples are prone to introduce either noise or oversmoothness @cite_7 . Meanwhile, the self-similarity property supplies internal examples that are highly relevant to the input, but only of a limited number. Due to the insufficiency of internal examples, their mismatches often result in severe visual artifacts @cite_9 . | {
"cite_N": [
"@cite_9",
"@cite_7"
],
"mid": [
"2056370875",
"1601729531"
],
"abstract": [
"We propose a novel image denoising strategy based on an enhanced sparse representation in transform domain. The enhancement of the sparsity is achieved by grouping similar 2D image fragments (e.g., blocks) into 3D data arrays which we call \"groups.\" Collaborative filtering is a special procedure developed to deal with these 3D groups. We realize it using the three successive steps: 3D transformation of a group, shrinkage of the transform spectrum, and inverse 3D transformation. The result is a 3D estimate that consists of the jointly filtered grouped image blocks. By attenuating the noise, the collaborative filtering reveals even the finest details shared by grouped blocks and, at the same time, it preserves the essential unique features of each individual block. The filtered blocks are then returned to their original positions. Because these blocks are overlapping, for each pixel, we obtain many different estimates which need to be combined. Aggregation is a particular averaging procedure which is exploited to take advantage of this redundancy. A significant improvement is obtained by a specially developed collaborative Wiener filtering. An algorithm based on this novel denoising strategy and its efficient implementation are presented in full detail; an extension to color-image denoising is also developed. The experimental results demonstrate that this computationally scalable algorithm achieves state-of-the-art denoising performance in terms of both peak signal-to-noise ratio and subjective visual quality.",
"We propose a super-resolution method that exploits selfsimilarities and group structural information of image patches using only one single input frame. The super-resolution problem is posed as learning the mapping between pairs of low-resolution and high-resolution image patches. Instead of relying on an extrinsic set of training images as often required in example-based super-resolution algorithms, we employ a method that generates image pairs directly from the image pyramid of one single frame. The generated patch pairs are clustered for training a dictionary by enforcing group sparsity constraints underlying the image patches. Super-resolution images are then constructed using the learned dictionary. Experimental results show the proposed method is able to achieve the state-of-the-art performance."
]
} |
1503.03621 | 1736104063 | We study the complementary behaviors of external and internal examples in image restoration, and are motivated to formulate a composite dictionary design framework. The composite dictionary consists of the global part learned from external examples, and the sample-specific part learned from internal examples. The dictionary atoms in both parts are further adaptively weighted to emphasize their model statistics. Experiments demonstrate that the joint utilization of external and internal examples leads to substantial improvements, with successful applications in image denoising and super resolution. | The joint utilization of both external and self examples was first studied for image denoising @cite_13 . Mosseri et al. @cite_14 proposed that image patches have different preferences towards either external or self examples for denoising. Such a preference is in essence a tradeoff between noise-fitting and signal-fitting. In @cite_15 @cite_12 , a joint super-resolution (SR) model was proposed to adaptively combine the advantages of both external and self example-based loss functions. @cite_10 further investigated the utilization of self-similarity in deep learning-based SR. However, none of the prior work makes much progress towards a unified dictionary design framework. | {
"cite_N": [
"@cite_13",
"@cite_14",
"@cite_15",
"@cite_10",
"@cite_12"
],
"mid": [
"2062811295",
"1968660339",
"2019515386",
"2952194607",
""
],
"abstract": [
"Statistics of ‘natural images’ provides useful priors for solving under-constrained problems in Computer Vision. Such statistics is usually obtained from large collections of natural images. We claim that the substantial internal data redundancy within a single natural image (e.g., recurrence of small image patches), gives rise to powerful internal statistics, obtained directly from the image itself. While internal patch recurrence has been used in various applications, we provide a parametric quantification of this property. We show that the likelihood of an image patch to recur at another image location can be expressed parametrically as a function of the spatial distance from the patch, and its gradient content. This “internal parametric prior” is used to improve existing algorithms that rely on patch recurrence. Moreover, we show that internal image-specific statistics is often more powerful than general external statistics, giving rise to more powerful image-specific priors. In particular: (i) Patches tend to recur much more frequently (densely) inside the same image, than in any random external collection of natural images. (ii) To find an equally good external representative patch for all the patches of an image, requires an external database of hundreds of natural images. (iii) Internal statistics often has stronger predictive power than external statistics, indicating that it may potentially give rise to more powerful image-specific priors.",
"Image denoising methods can broadly be classified into two types: “Internal Denoising” (denoising an image patch using other noisy patches within the noisy image), and “External Denoising” (denoising a patch using external clean natural image patches). Any such method, whether Internal or External, is typically applied to all image patches. In this paper we show that different image patches inherently have different preferences for Internal or External de-noising. Moreover, and surprisingly, the higher the noise in the image, the stronger the preference for Internal De-noising. We identify and explain the source of this behavior, and show that Internal/External preference of a patch is directly related to its individual Signal-to-Noise-Ratio (“PatchSNR”). Patches with high PatchSNR (e.g., patches on strong edges) benefit much from External Denoising, whereas patches with low PatchSNR (e.g., patches in noisy uniform regions) benefit much more from Internal Denoising. Combining the power of Internal or External denoising selectively for each patch based on its estimated PatchSNR leads to improvement in denoising performance.",
"Existing example-based super resolution (SR) methods are built upon either external-examples or self-examples. Although effective in certain cases, both methods suffer from their inherent limitation. This paper goes beyond these two classes of most common example-based SR approaches, and proposes a novel joint SR perspective. The joint SR exploits and maximizes the complementary advantages of external- and self-example based methods. We elaborate on exploitable priors for image components of different nature, and formulate their corresponding loss functions mathematically. Equipped with that, we construct a unified SR formulation, and propose an iterative joint super resolution (IJSR) algorithm to solve the optimization. Such a joint perspective approach leads to an impressive improvement of SR results both quantitatively and qualitatively.",
"Deep learning has been successfully applied to image super resolution (SR). In this paper, we propose a deep joint super resolution (DJSR) model to exploit both external and self similarities for SR. A Stacked Denoising Convolutional Auto Encoder (SDCAE) is first pre-trained on external examples with proper data augmentations. It is then fine-tuned with multi-scale self examples from each input, where the reliability of self examples is explicitly taken into account. We also enhance the model performance by sub-model training and selection. The DJSR model is extensively evaluated and compared with state-of-the-arts, and show noticeable performance improvements both quantitatively and perceptually on a wide range of images.",
""
]
} |
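As context for the kernel-approximation row that follows, which builds on Rahimi and Recht's random feature maps, here is a minimal sketch of Random Fourier Features for the Gaussian kernel (NumPy; `rff_features`, `gamma`, and `n_features` are illustrative names, and the Gaussian spectral sampling shown is one standard choice, not the paper's learned variant):

```python
import numpy as np

def rff_features(X, n_features, gamma=1.0, seed=0):
    # Build z(x) such that z(x) . z(y) approximates exp(-gamma * ||x - y||^2).
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Frequencies drawn from the Gaussian kernel's spectral density.
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)  # random phases
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)
```

The Monte Carlo error of this map decays roughly as the inverse square root of the number of features, which is why compact, data-dependent maps of the kind surveyed in the next row are attractive.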
1503.03893 | 1814624729 | Kernel approximation via nonlinear random feature maps is widely used in speeding up kernel machines. There are two main challenges for the conventional kernel approximation methods. First, before performing kernel approximation, a good kernel has to be chosen. Picking a good kernel is a very challenging problem in itself. Second, high-dimensional maps are often required in order to achieve good performance. This leads to high computational cost in both generating the nonlinear maps, and in the subsequent learning and prediction process. In this work, we propose to optimize the nonlinear maps directly with respect to the classification objective in a data-dependent fashion. The proposed approach achieves kernel approximation and kernel learning in a joint framework. This leads to much more compact maps without hurting the performance. As a by-product, the same framework can also be used to achieve more compact kernel maps to approximate a known kernel. We also introduce Circulant Nonlinear Maps, which uses a circulant-structured projection matrix to speed up the nonlinear maps for high-dimensional data. | Following the seminal work on explicit nonlinear feature maps for approximating positive definite shift-invariant kernels @cite_10 , nonlinear mapping techniques have been proposed to approximate other forms of kernels such as the polynomial kernel @cite_19 @cite_28 , generalized RBF kernels @cite_9 , intersection kernels @cite_20 , additive kernels @cite_16 , skewed multiplicative histogram kernels @cite_29 , and semigroup kernel @cite_34 . Techniques have also been proposed to improve the speed and compactness of kernel approximations by using structured projections @cite_47 , better quasi Monte Carlo sampling @cite_21 , binary code @cite_2 @cite_31 , and dimensionality reduction @cite_25 . 
Our method in this paper is built upon Random Fourier Features @cite_10 for approximating shift-invariant kernels, a widely used kernel family in machine learning. Besides explicit nonlinear maps, kernel approximation can also be achieved using sampling-based low-rank approximations of the kernel matrix, such as the Nystrom method @cite_1 @cite_49 @cite_12 . For these approximations to work well, the eigenspectrum of the kernel matrix should have a large gap @cite_5 . | {
"cite_N": [
"@cite_31",
"@cite_28",
"@cite_9",
"@cite_29",
"@cite_21",
"@cite_1",
"@cite_34",
"@cite_19",
"@cite_5",
"@cite_49",
"@cite_2",
"@cite_12",
"@cite_47",
"@cite_16",
"@cite_10",
"@cite_25",
"@cite_20"
],
"mid": [
"",
"",
"2115124532",
"1751437809",
"2147118250",
"2112545207",
"1988813039",
"2963755879",
"2107791152",
"",
"2017274648",
"",
"2118563516",
"2109235804",
"2144902422",
"2964265625",
"2541822344"
],
"abstract": [
"",
"",
"Kernel methods yield state-of-the-art performance in certain applications such as image classification and object detection. However, large scale problems require machine learning techniques of at most linear complexity and these are usually limited to linear kernels. This unfortunately rules out gold-standard kernels such as the generalized RBF kernels (e.g. exponential-χ2). Recently, Maji and Berg [13] and Vedaldi and Zisserman [20] proposed explicit feature maps to approximate the additive kernels (intersection, χ2, etc.) by linear ones, thus enabling the use of fast machine learning technique in a non-linear context. An analogous technique was proposed by Rahimi and Recht [14] for the translation invariant RBF kernels. In this paper, we complete the construction and combine the two techniques to obtain explicit feature maps for the generalized RBF kernels. Furthermore, we investigate a learning method using l1 regularization to encourage sparsity in the final vector representation, and thus reduce its dimension. We evaluate this technique on the VOC 2007 detection challenge, showing when it can improve on fast additive kernels, and the trade-offs in complexity and accuracy.",
"Approximations based on random Fourier features have recently emerged as an efficient and elegant methodology for designing large-scale kernel machines [4]. By expressing the kernel as a Fourier expansion, features are generated based on a finite set of random basis projections with inner products that are Monte Carlo approximations to the original kernel. However, the original Fourier features are only applicable to translation-invariant kernels and are not suitable for histograms that are always non-negative. This paper extends the concept of translation-invariance and the random Fourier feature methodology to arbitrary, locally compact Abelian groups. Based on empirical observations drawn from the exponentiated χ2 kernel, the state-of-the-art for histogram descriptors, we propose a new group called the skewed-multiplicative group and design translation-invariant kernels on it. Experiments show that the proposed kernels outperform other kernels that can be similarly approximated. In a semantic segmentation experiment on the PASCAL VOC 2009 dataset, the approximation allows us to train large-scale learning machines more than two orders of magnitude faster than previous nonlinear SVMs.",
"We consider the problem of improving the efficiency of randomized Fourier feature maps to accelerate training and testing speed of kernel methods on large datasets. These approximate feature maps arise as Monte Carlo approximations to integral representations of shift-invariant kernel functions (e.g., Gaussian kernel). In this paper, we propose to use Quasi-Monte Carlo (QMC) approximations instead, where the relevant integrands are evaluated on a low-discrepancy sequence of points as opposed to random point sets as in the Monte Carlo approach. We derive a new discrepancy measure called box discrepancy based on theoretical characterizations of the integration error with respect to a given sequence. We then propose to learn QMC sequences adapted to our setting based on explicit box discrepancy minimization. Our theoretical analyses are complemented with empirical results that demonstrate the effectiveness of classical and adaptive QMC techniques for this problem.",
"A major problem for kernel-based predictors (such as Support Vector Machines and Gaussian processes) is that the amount of computation required to find the solution scales as O(n3), where n is the number of training examples. We show that an approximation to the eigendecomposition of the Gram matrix can be computed by the Nystrom method (which is used for the numerical solution of eigenproblems). This is achieved by carrying out an eigendecomposition on a smaller system of size m < n, and then expanding the results back up to n dimensions. The computational complexity of a predictor using this approximation is O(m2n). We report experiments on the USPS and abalone data sets and show that we can set m ≪ n without any significant decrease in the accuracy of the solution.",
"With the goal of accelerating the training and testing complexity of nonlinear kernel methods, several recent papers have proposed explicit embeddings of the input data into low-dimensional feature spaces, where fast linear methods can instead be used to generate approximate solutions. Analogous to random Fourier feature maps to approximate shift-invariant kernels, such as the Gaussian kernel, on Rd, we develop a new randomized technique called random Laplace features, to approximate a family of kernel functions adapted to the semigroup structure of R+d. This is the natural algebraic structure on the set of histograms and other non-negative data representations. We provide theoretical results on the uniform convergence of random Laplace features. Empirical analyses on image classification and surveillance event detection tasks demonstrate the attractiveness of using random Laplace features relative to several other feature maps proposed in the literature.",
"Approximating non-linear kernels using feature maps has gained a lot of interest in recent years due to applications in reducing training and testing times of SVM classifiers and other kernel based learning algorithms. We extend this line of work and present low distortion embeddings for dot product kernels into linear Euclidean spaces. We base our results on a classical result in harmonic analysis characterizing all dot product kernels and use it to define randomized feature maps into explicit low dimensional Euclidean spaces in which the native dot product provides an approximation to the dot product kernel with high confidence.",
"Both random Fourier features and the Nystrom method have been successfully applied to efficient kernel learning. In this work, we investigate the fundamental difference between these two approaches, and how the difference could affect their generalization performances. Unlike approaches based on random Fourier features where the basis functions (i.e., cosine and sine functions) are sampled from a distribution independent from the training data, basis functions used by the Nystrom method are randomly sampled from the training examples and are therefore data dependent. By exploring this difference, we show that when there is a large gap in the eigen-spectrum of the kernel matrix, approaches based on the Nystrom method can yield impressively better generalization error bound than random Fourier features based approach. We empirically verify our theoretical findings on a wide range of large data sets.",
"",
"This paper analyzes circulant Johnson-Lindenstrauss (JL) embeddings which, as an important class of structured random JL embeddings, are formed by randomizing the column signs of a circulant matrix generated by a random vector. With the help of recent decoupling techniques and matrix-valued Bernstein inequalities, we obtain a new bound @math for Gaussian circulant JL embeddings. Moreover, by using the Laplace transform technique (also called Bernstein's trick), we extend the result to subgaussian case. The bounds in this paper offer a small improvement over the current best bounds for Gaussian circulant JL embeddings for certain parameter regimes and are derived using more direct methods.",
"",
"Despite their successes, what makes kernel methods difficult to use in many large scale problems is the fact that computing the decision function is typically expensive, especially at prediction time. In this paper, we overcome this difficulty by proposing Fastfood, an approximation that accelerates such computation significantly. Key to Fastfood is the observation that Hadamard matrices when combined with diagonal Gaussian matrices exhibit properties similar to dense Gaussian random matrices. Yet unlike the latter, Hadamard and diagonal matrices are inexpensive to multiply and store. These two matrices can be used in lieu of Gaussian matrices in Random Kitchen Sinks (Rahimi & Recht, 2007) and thereby speeding up the computation for a large range of kernel functions. Specifically, Fastfood requires O(n log d) time and O(n) storage to compute n non-linear basis functions in d dimensions, a significant improvement from O(nd) computation and storage, without sacrificing accuracy. We prove that the approximation is unbiased and has low variance. Extensive experiments show that we achieve similar accuracy to full kernel expansions and Random Kitchen Sinks while being 100x faster and using 1000x less memory. These improvements, especially in terms of memory usage, make kernel methods more practical for applications that have large training sets and or require real-time prediction.",
"Large scale nonlinear support vector machines (SVMs) can be approximated by linear ones using a suitable feature map. The linear SVMs are in general much faster to learn and evaluate (test) than the original nonlinear SVMs. This work introduces explicit feature maps for the additive class of kernels, such as the intersection, Hellinger's, and χ2 kernels, commonly used in computer vision, and enables their use in large scale problems. In particular, we: 1) provide explicit feature maps for all additive homogeneous kernels along with closed form expression for all common kernels; 2) derive corresponding approximate finite-dimensional feature maps based on a spectral analysis; and 3) quantify the error of the approximation, showing that the error is independent of the data dimension and decays exponentially fast with the approximation order for selected kernels such as χ2. We demonstrate that the approximations have indistinguishable performance from the full kernels yet greatly reduce the train test times of SVMs. We also compare with two other approximation methods: Nystrom's approximation of [1], which is data dependent, and the explicit map of Maji and Berg [2] for the intersection kernel, which, as in the case of our approximations, is data independent. The approximations are evaluated on a number of standard data sets, including Caltech-101 [3], Daimler-Chrysler pedestrians [4], and INRIA pedestrians [5].",
"To accelerate the training of kernel machines, we propose to map the input data to a randomized low-dimensional feature space and then apply existing fast linear methods. The features are designed so that the inner products of the transformed data are approximately equal to those in the feature space of a user specified shift-invariant kernel. We explore two sets of random features, provide convergence bounds on their ability to approximate various radial basis kernels, and show that in large-scale classification and regression tasks linear machine learning algorithms applied to these features outperform state-of-the-art large-scale kernel machines.",
"Kernel approximation using random feature maps has recently gained a lot of interest. This is mainly due to their applications in reducing training and testing times of kernel based learning algorithms. In this work, we identify that previous approaches for polynomial kernel approximation create maps that can be rank deficient, and therefore may not utilize the capacity of the projected feature space effectively. To address this challenge, we propose compact random feature maps (CRAFTMaps) to approximate polynomial kernels more concisely and accurately. We prove the error bounds of CRAFTMaps demonstrating their superior kernel reconstruction performance compared to the previous approximation schemes. We show how structured random matrices can be used to efficiently generate CRAFTMaps, and present a single-pass algorithm using CRAFTMaps to learn non-linear multi-class classifiers. We present experiments on multiple standard data-sets with performance competitive with state-of-the-art results.",
"We present methods for training high quality object detectors very quickly. The core contribution is a pair of fast training algorithms for piece-wise linear classifiers, which can approximate arbitrary additive models. The classifiers are trained in a max-margin framework and significantly outperform linear classifiers on a variety of vision datasets. We report experimental results quantifying training time and accuracy on image classification tasks and pedestrian detection, including detection results better than the best previous on the INRIA dataset with faster training."
]
} |
1503.03893 | 1814624729 | Kernel approximation via nonlinear random feature maps is widely used in speeding up kernel machines. There are two main challenges for the conventional kernel approximation methods. First, before performing kernel approximation, a good kernel has to be chosen. Picking a good kernel is a very challenging problem in itself. Second, high-dimensional maps are often required in order to achieve good performance. This leads to high computational cost in both generating the nonlinear maps, and in the subsequent learning and prediction process. In this work, we propose to optimize the nonlinear maps directly with respect to the classification objective in a data-dependent fashion. The proposed approach achieves kernel approximation and kernel learning in a joint framework. This leads to much more compact maps without hurting the performance. As a by-product, the same framework can also be used to achieve more compact kernel maps to approximate a known kernel. We also introduce Circulant Nonlinear Maps, which uses a circulant-structured projection matrix to speed up the nonlinear maps for high-dimensional data. | Besides kernel approximation, there have been other types of works aiming at speeding up kernel machines @cite_36 . Such techniques include decomposition methods @cite_39 @cite_42 , sparsifying kernels @cite_33 , limiting the number of support vectors @cite_46 @cite_44 , and low-rank approximations @cite_27 @cite_26 . None of the above methods can be scaled to truly large-scale data. Another alternative is to consider the local structure of the data to train and apply the kernel machines locally @cite_40 @cite_22 @cite_13 @cite_17 . However, partitioning becomes unreliable in high-dimensional data. Our work is also related to shallow neural networks as we will discuss in a later part of this paper. | {
"cite_N": [
"@cite_26",
"@cite_33",
"@cite_22",
"@cite_36",
"@cite_42",
"@cite_39",
"@cite_44",
"@cite_27",
"@cite_40",
"@cite_46",
"@cite_13",
"@cite_17"
],
"mid": [
"",
"1970950689",
"",
"1526741802",
"",
"2137346077",
"",
"2137557016",
"2189508540",
"2137285073",
"",
""
],
"abstract": [
"",
"Given a matrix A, it is often desirable to find a good approximation to A that has low rank. We introduce a simple technique for accelerating the computation of such approximations when A has strong spectral features, that is, when the singular values of interest are significantly greater than those of a random matrix with size and entries similar to A. Our technique amounts to independently sampling and or quantizing the entries of A, thus speeding up computation by reducing the number of nonzero entries and or the length of their representation. Our analysis is based on observing that the acts of sampling and quantization can be viewed as adding a random matrix N to A, whose entries are independent random variables with zero-mean and bounded variance. Since, with high probability, N has very weak spectral features, we can prove that the effect of sampling and quantization nearly vanishes when a low-rank approximation to A p N is computed. We give high probability bounds on the quality of our approximation both in the Frobenius and the 2-norm.",
"",
"Pervasive and networked computers have dramatically reduced the cost of collecting and distributing large datasets. In this context, machine learning algorithms that scale poorly could simply become irrelevant. We need learning algorithms that scale linearly with the volume of the data while maintaining enough statistical efficiency to outperform algorithms that simply process a random subset of the data. This volume offers researchers and engineers practical solutions for learning from large scale datasets, with detailed descriptions of algorithms and experiments carried out on realistically large datasets. At the same time it offers researchers information that can address the relative lack of theoretical grounding for many useful algorithms. After a detailed description of state-of-the-art support vector machine technology, an introduction of the essential concepts discussed in the volume, and a comparison of primal and dual optimization techniques, the book progresses from well-understood techniques to more novel and controversial approaches. Many contributors have made their code and data available online for further experimentation. Topics covered include fast implementations of known algorithms, approximations that are amenable to theoretical guarantees, and algorithms that perform well in practice but are difficult to analyze theoretically.ContributorsLeon Bottou, Yoshua Bengio, Stephane Canu, Eric Cosatto, Olivier Chapelle, Ronan Collobert, Dennis DeCoste, Ramani Duraiswami, Igor Durdanovic, Hans-Peter Graf, Arthur Gretton, Patrick Haffner, Stefanie Jegelka, Stephan Kanthak, S. Sathiya Keerthi, Yann LeCun, Chih-Jen Lin, Gaelle Loosli, Joaquin Quinonero-Candela, Carl Edward Rasmussen, Gunnar Ratsch, Vikas Chandrakant Raykar, Konrad Rieck, Vikas Sindhwani, Fabian Sinz, Soren Sonnenburg, Jason Weston, Christopher K. I. Williams, Elad Yom-TovLeon Bottou is a Research Scientist at NEC Labs America. Olivier Chapelle is with Yahoo! Research. 
He is editor of Semi-Supervised Learning (MIT Press, 2006). Dennis DeCoste is with Microsoft Research. Jason Weston is a Research Scientist at NEC Labs America.",
"",
"The Support Vector Machine (SVM) is a new and very promising classification technique developed by Vapnik and his group at AT &T Bell Labs. This new learning algorithm can be seen as an alternative training technique for Polynomial, Radial Basis Function and Multi-Layer Perceptron classifiers. An interesting property of this approach is that it is an approximate implementation of the Structural Risk Minimization (SRM) induction principle. The derivation of Support Vector Machines, its relationship with SRM, and its geometrical insight, are discussed in this paper. Training a SVM is equivalent to solve a quadratic programming problem with linear and box constraints in a number of variables equal to the number of data points. When the number of data points exceeds few thousands the problem is very challenging, because the quadratic form is completely dense, so the memory needed to store the problem grows with the square of the number of data points. Therefore, training problems arising in some real applications with large data sets are impossible to load into memory, and cannot be solved using standard non-linear constrained optimization algorithms. We present a decomposition algorithm that can be used to train SVM''s over large data sets. The main idea behind the decomposition is the iterative solution of sub-problems and the evaluation of, and also establish the stopping criteria for the algorithm. We present previous approaches, as well as results and important details of our implementation of the algorithm using a second-order variant of the Reduced Gradient Method as the solver of the sub-problems. As an application of SVM''s, we present preliminary results we obtained applying SVM to the problem of detecting frontal human faces in real images.",
"",
"SVM training is a convex optimization problem which scales with the training set size rather than the feature space dimension. While this is usually considered to be a desired quality, in large scale problems it may cause training to be impractical. The common techniques to handle this difficulty basically build a solution by solving a sequence of small scale subproblems. Our current effort is concentrated on the rank of the kernel matrix as a source for further enhancement of the training procedure. We first show that for a low rank kernel matrix it is possible to design a better interior point method (IPM) in terms of storage requirements as well as computational complexity. We then suggest an efficient use of a known factorization technique to approximate a given kernel matrix by a low rank matrix, which in turn will be used to feed the optimizer. Finally, we derive an upper bound on the change in the objective function value based on the approximation error and the number of active constraints (support vectors). This bound is general in the sense that it holds regardless of the approximation method.",
"Linear support vector machines (SVMs) have become popular for solving classification tasks due to their fast and simple online application to large scale data sets. However, many problems are not linearly separable. For these problems kernel-based SVMs are often used, but unlike their linear variant they suffer from various drawbacks in terms of computational and memory efficiency. Their response can be represented only as a function of the set of support vectors, which has been experimentally shown to grow linearly with the size of the training set. In this paper we propose a novel locally linear svm classifier with smooth decision boundary and bounded curvature. We show how the functions defining the classifier can be approximated using local codings and show how this model can be optimized in an online fashion by performing stochastic gradient descent with the same convergence guarantees as standard gradient descent method for linear svm. Our method achieves comparable performance to the state-of-the-art whilst being significantly faster than competing kernel SVMs. We generalise this model to locally finite dimensional kernel SVM.",
"Support vector machines (SVMs), though accurate, are not preferred in applications requiring great classification speed, due to the number of support vectors being large. To overcome this problem we devise a primal method with the following properties: (1) it decouples the idea of basis functions from the concept of support vectors; (2) it greedily finds a set of kernel basis functions of a specified maximum size (dmax) to approximate the SVM primal cost function well; (3) it is efficient and roughly scales as O(ndmax2) where n is the number of training examples; and, (4) the number of basis functions it requires to achieve an accuracy close to the SVM accuracy is usually far less than the number of SVM support vectors.",
"",
""
]
} |
1503.03465 | 2121404865 | Intel and AMD support the carry-less multiplication (CLMUL) instruction set in their x64 processors. We use CLMUL to implement an almost universal 64-bit hash family (CLHASH). We compare this new family with what might be the fastest almost universal family on x64 processors (VHASH). We find that CLHASH is at least 60% faster. We also compare CLHASH with a popular hash function designed for speed (Google’s CityHash). We find that CLHASH is 40% faster than CityHash on inputs larger than 64 bytes and just as fast otherwise. | The work that led to the design of the pclmulqdq instruction by Gueron and Kounavis @cite_24 introduced efficient algorithms using this instruction, e.g., an algorithm for 128-bit modular reduction in Galois Counter Mode. Since then, the pclmulqdq instruction has been used to speed up cryptographic applications. Su and Fan find that the Karatsuba formula becomes especially efficient for software implementations of multiplication in binary finite fields due to the pclmulqdq instruction @cite_22 . @cite_4 used the CLMUL instruction set for 256-bit hash functions on the Westmere microarchitecture. Elliptic curve cryptography benefits from the pclmulqdq instruction @cite_23 @cite_11 @cite_15 . Bluhm and Gueron pointed out that the benefits are increased on the Haswell microarchitecture due to the higher throughput and lower latency of the instruction @cite_2 . | {
"cite_N": [
"@cite_4",
"@cite_22",
"@cite_24",
"@cite_23",
"@cite_2",
"@cite_15",
"@cite_11"
],
"mid": [
"2146395537",
"2057926105",
"2049051506",
"2197430430",
"2021469490",
"1999613060",
"1986441862"
],
"abstract": [
"In this work, we provide a software benchmark for a large range of 256-bit blockcipher-based hash functions. We instantiate the underlying blockcipher with AES, which allows us to exploit the recent AES instruction set (AESNI). Since AES itself only outputs 128 bits, we consider double-blocklength constructions, as well as (single-block-length) constructions based on RIJNDAEL- 256. Although we primarily target architectures supporting AES-NI, our framework has much broader applications by estimating the performance of these hash functions on any (micro-)architecture given AES-benchmark results. As far as we are aware, this is the first comprehensive performance comparison of multiblocklength hash functions in software.",
"PCLMULQDQ, a new instruction that supports GF(2)[x] multiplication, was introduced by Intel in 2010. This instruction brings dramatic change to software implementation of multiplication in GF(2^m) fields. In this paper, we present improved Karatsuba formulae for multiplying two small binary polynomials, compare different strategies for PCLMULQDQ-based multiplication in the five GF(2^m) fields recommended by NIST and conclude the best design approaches to software implementation of GF(2)[x] multiplication.",
"This paper describes a new method for efficient implementation of the Galois Counter Mode on general purpose processors. Our approach is based on three concepts: a) having a 64-bit carry-less multiplication instruction in the processor; b) a method for using this instruction to efficiently multiply binary polynomials of degree 127; c) a method for efficient reduction of a binary polynomial of degree 254, modulo the polynomial x^1^2^8+x^7+x^2+x+1 (which defines the finite field of the Galois Counter Mode). The two latter concepts can be used for writing an efficient and lookup-table free software implementation of the Galois Counter Mode, for processors that have a carry-less multiplication instruction. Our approach uses only a generic carry-less multiplication instruction, without any field-specific reduction logic, making the instruction applicable to multiple use cases, and therefore an appealing addition to the instruction set of a general purpose processor. This research played a significant role in the process that eventually led to adding a carry-less multiplication instruction (called PCLMULQDQ) to the Intel Architecture. PCLMULQDQ and six AES instructions are introduced in the new 2010 Intel Core processor family, based on the 32 nm Intel microarchitecture codename ''Westmere''. On the new Westmere processors, the software that implements the methods described here, computes AES-GCM more than six times faster than the current, lookup table-based, state-of-the-art implementation. This new capability adds motivation to using AES-GCM for high performance secure networking.",
"In this paper we introduce new methods for computing constant-time variable-base point multiplications over the Galbraith-Lin-Scott (GLS) and the Koblitz families of elliptic curves. Using a left-to-right double-and-add and a right-to-left halve-and-add Montgomery ladder over a GLS curve, we present some of the fastest timings yet reported in the literature for point multiplication. In addition, we combine these two procedures to compute a multi-core protected scalar multiplication. Furthermore, we designed a novel regular ( )-adic scalar expansion for Koblitz curves. As a result, using the regular recoding approach, we set the speed record for a single-core constant-time point multiplication on standardized binary elliptic curves at the (128 )-bit security level.",
"This paper presents an efficient and side-channel-protected software implementation of scalar multiplication for the standard National Institute of Standards and Technology (NIST) and Standards for Efficient Cryptography Group binary elliptic curves. The enhanced performance is achieved by leveraging Intel’s AVX architecture and utilizing the pclmulqdq processor instruction. The fast carry-less multiplication is further used to speed up the reduction on the Haswell platform. For the five NIST curves over (GF(2^m) ) with (m ) ( ) ( 163,233,283,409,571 ), the resulting scalar multiplication implementation is about 5–12 times faster than that of OpenSSL-1.0.1e, enhancing the ECDHE and ECDSA algorithms significantly.",
"The availability of a new carry-less multiplication instruction in the latest Intel desktop processors significantly accelerates multiplication in binary fields and hence presents the opportunity for reevaluating algorithms for binary field arithmetic and scalar multiplication over elliptic curves. We describe how to best employ this instruction in field multiplication and the effect on performance of doubling and halving operations. Alternate strategies for implementing inversion and half-trace are examined to restore most of their competitiveness relative to the new multiplier. These improvements in field arithmetic are complemented by a study on serial and parallel approaches for Koblitz and random curves, where parallelization strategies are implemented and compared. The contributions are illustrated with experimental results improving the state-of-the-art performance of halving and doubling-based scalar multiplication on NIST curves at the 112- and 192-bit security levels and a new speed record for side-channel-resistant scalar multiplication in a random curve at the 128-bit security level. The algorithms presented in this work were implemented on Westmere and Sandy Bridge processors, the latest generation Intel microarchitectures.",
"In this work, we present new arithmetic formulas for a projective version of the affine point representation ((x,x+y x), ) for (x 0, ) which leads to an efficient computation of the scalar multiplication operation over binary elliptic curves. A software implementation of our formulas applied to a binary Galbraith–Lin–Scott elliptic curve defined over the field ( F _ 2^ 254 ) allows us to achieve speed records for protected unprotected single multi-core random-point elliptic curve scalar multiplication at the 127-bit security level. When executed on a Sandy Bridge 3.4 GHz Intel Xeon processor, our software is able to compute a single multi-core unprotected scalar multiplication in 69,500 and 47,900 clock cycles, respectively, and a protected single-core scalar multiplication in 114,800 cycles. These numbers are improved by around 2 and 46 on the newer Ivy Bridge and Haswell platforms, respectively, achieving in the latter a protected random-point scalar multiplication in 60,000 clock cycles."
]
} |
1503.03465 | 2121404865 | Intel and AMD support the carry-less multiplication (CLMUL) instruction set in their x64 processors. We use CLMUL to implement an almost universal 64-bit hash family (CLHASH). We compare this new family with what might be the fastest almost universal family on x64 processors (VHASH). We find that CLHASH is at least 60% faster. We also compare CLHASH with a popular hash function designed for speed (Google’s CityHash). We find that CLHASH is 40% faster than CityHash on inputs larger than 64 bytes and just as fast otherwise. | In previous work, we used the pclmulqdq instruction for fast 32-bit random hashing on the Sandy Bridge and Bulldozer architectures @cite_38 . However, our results were disappointing, due in part to the low throughput of the instruction on these older microarchitectures. | {
"cite_N": [
"@cite_38"
],
"mid": [
"2150574820"
],
"abstract": [
"We present fast strongly universal string hashing families: they can process data at a rate of 0.2 CPU cycle per byte. Maybe surprisingly, we find that these families—though they require a large buffer of random numbers—are often faster than popular hash functions with weaker theoretical guarantees. Moreover, conventional wisdom is that hash functions with fewer multiplications are faster. Yet we find that they may fail to be faster due to operation pipelining. We present experimental results on several processors including low-power processors. Our tests include hash functions designed for processors with the carry-less multiplication instruction set. We also prove, using accessible proofs, the strong universality of our families."
]
} |
1503.03167 | 2953255770 | This paper presents the Deep Convolution Inverse Graphics Network (DC-IGN), a model that learns an interpretable representation of images. This representation is disentangled with respect to transformations such as out-of-plane rotations and lighting variations. The DC-IGN model is composed of multiple layers of convolution and de-convolution operators and is trained using the Stochastic Gradient Variational Bayes (SGVB) algorithm. We propose a training procedure to encourage neurons in the graphics code layer to represent a specific transformation (e.g. pose or light). Given a single input image, our model can generate new images of the same object with variations in pose and lighting. We present qualitative and quantitative results of the model's efficacy at learning a 3D rendering engine. | As mentioned before, a number of generative models have been proposed in the literature to obtain abstract visual representations. Unlike most RBM-based models @cite_21 @cite_4 @cite_23 , our approach is trained using back-propagation with an objective function consisting of data reconstruction and the variational bound. | {
"cite_N": [
"@cite_21",
"@cite_4",
"@cite_23"
],
"mid": [
"2136922672",
"189596042",
"2130325614"
],
"abstract": [
"We show how to use \"complementary priors\" to eliminate the explaining-away effects that make inference difficult in densely connected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory. The fast, greedy algorithm is used to initialize a slower learning procedure that fine-tunes the weights using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of handwritten digit images and their labels. This generative model gives better digit classification than the best discriminative learning algorithms. The low-dimensional manifolds on which the digits lie are modeled by long ravines in the free-energy landscape of the top-level associative memory, and it is easy to explore these ravines by using the directed connections to display what the associative memory has in mind.",
"We present a new learning algorithm for Boltzmann machines that contain many layers of hidden variables. Data-dependent expectations are estimated using a variational approximation that tends to focus on a single mode, and dataindependent expectations are approximated using persistent Markov chains. The use of two quite different techniques for estimating the two types of expectation that enter into the gradient of the log-likelihood makes it practical to learn Boltzmann machines with multiple hidden layers and millions of parameters. The learning can be made more efficient by using a layer-by-layer “pre-training” phase that allows variational inference to be initialized with a single bottomup pass. We present results on the MNIST and NORB datasets showing that deep Boltzmann machines learn good generative models and perform well on handwritten digit and visual object recognition tasks.",
"There has been much interest in unsupervised learning of hierarchical generative models such as deep belief networks. Scaling such models to full-sized, high-dimensional images remains a difficult problem. To address this problem, we present the convolutional deep belief network, a hierarchical generative model which scales to realistic image sizes. This model is translation-invariant and supports efficient bottom-up and top-down probabilistic inference. Key to our approach is probabilistic max-pooling, a novel technique which shrinks the representations of higher layers in a probabilistically sound way. Our experiments show that the algorithm learns useful high-level visual features, such as object parts, from unlabeled images of objects and natural scenes. We demonstrate excellent performance on several visual recognition tasks and show that our model can perform hierarchical (bottom-up and top-down) inference over full-sized images."
]
} |
1503.03167 | 2953255770 | This paper presents the Deep Convolution Inverse Graphics Network (DC-IGN), a model that learns an interpretable representation of images. This representation is disentangled with respect to transformations such as out-of-plane rotations and lighting variations. The DC-IGN model is composed of multiple layers of convolution and de-convolution operators and is trained using the Stochastic Gradient Variational Bayes (SGVB) algorithm. We propose a training procedure to encourage neurons in the graphics code layer to represent a specific transformation (e.g. pose or light). Given a single input image, our model can generate new images of the same object with variations in pose and lighting. We present qualitative and quantitative results of the model's efficacy at learning a 3D rendering engine. | Recently, @cite_8 proposed using CNNs to generate images given object-specific parameters in a supervised setting. As their approach requires ground-truth labels for the graphics code layer, it cannot be directly applied to image interpretation tasks. Our work is similar to Ranzato @cite_11 , whose work was amongst the first to use a generic encoder-decoder architecture for feature learning. However, in comparison to our proposal their model was trained layer-wise, the intermediate representations were not disentangled like a graphics code, and their approach does not use the variational auto-encoder loss to approximate the posterior distribution. Our work is also similar in spirit to @cite_5 , but in comparison our model does not assume a Lambertian reflectance model and implicitly constructs the 3D representations. Another piece of related work is Desjardins @cite_12 , who used a spike and slab prior to factorize representations in a generative deep network. | {
"cite_N": [
"@cite_5",
"@cite_11",
"@cite_12",
"@cite_8"
],
"mid": [
"2952204419",
"2139427956",
"1786904711",
"1893585201"
],
"abstract": [
"Visual perception is a challenging problem in part due to illumination variations. A possible solution is to first estimate an illumination invariant representation before using it for recognition. The object albedo and surface normals are examples of such representations. In this paper, we introduce a multilayer generative model where the latent variables include the albedo, surface normals, and the light source. Combining Deep Belief Nets with the Lambertian reflectance assumption, our model can learn good priors over the albedo from 2D images. Illumination variations can be explained by changing only the lighting latent variable in our model. By transferring learned knowledge from similar objects, albedo and surface normals estimation from a single image is possible in our model. Experiments demonstrate that our model is able to generalize as well as improve over standard baselines in one-shot face recognition.",
"We present an unsupervised method for learning a hierarchy of sparse feature detectors that are invariant to small shifts and distortions. The resulting feature extractor consists of multiple convolution filters, followed by a feature-pooling layer that computes the max of each filter output within adjacent windows, and a point-wise sigmoid non-linearity. A second level of larger and more invariant features is obtained by training the same algorithm on patches of features from the first level. Training a supervised classifier on these features yields 0.64 error on MNIST, and 54 average recognition rate on Caltech 101 with 30 training samples per category. While the resulting architecture is similar to convolutional networks, the layer-wise unsupervised training procedure alleviates the over-parameterization problems that plague purely supervised learning procedures, and yields good performance with very few labeled training samples.",
"Here we propose a novel model family with the objective of learning to disentangle the factors of variation in data. Our approach is based on the spike-and-slab restricted Boltzmann machine which we generalize to include higher-order interactions among multiple latent variables. Seen from a generative perspective, the multiplicative interactions emulates the entangling of factors of variation. Inference in the model can be seen as disentangling these generative factors. Unlike previous attempts at disentangling latent factors, the proposed model is trained using no supervised information regarding the latent factors. We apply our model to the task of facial expression classification.",
"We train a generative convolutional neural network which is able to generate images of objects given object type, viewpoint, and color. We train the network in a supervised manner on a dataset of rendered 3D chair models. Our experiments show that the network does not merely learn all images by heart, but rather finds a meaningful representation of a 3D chair model allowing it to assess the similarity of different chairs, interpolate between given viewpoints to generate the missing ones, or invent new chair styles by interpolating between chairs from the training set. We show that the network can be used to find correspondences between different chairs from the dataset, outperforming existing approaches on this task."
]
} |
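The SGVB training mentioned in the DC-IGN abstract rests on two ingredients: the reparameterization trick, and a closed-form KL term for a diagonal-Gaussian encoder. A minimal sketch with illustrative shapes, making no claim to match the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def sgvb_sample(mu, log_var):
    """Reparameterization trick at the heart of SGVB:
    z = mu + sigma * eps with eps ~ N(0, I), so gradients can flow
    through mu and log_var while the noise stays external."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ), the
    regularization term of the variational lower bound."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

z = sgvb_sample(np.zeros(3), np.zeros(3))             # one latent sample
kl = kl_to_standard_normal(np.zeros(3), np.zeros(3))  # exactly 0 at the prior
```

The DC-IGN "graphics code" training procedure would additionally clamp or swap individual latent dimensions across mini-batches so that each dimension tracks one transformation; that supervision signal is not shown here.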
1503.03167 | 2953255770 | This paper presents the Deep Convolution Inverse Graphics Network (DC-IGN), a model that learns an interpretable representation of images. This representation is disentangled with respect to transformations such as out-of-plane rotations and lighting variations. The DC-IGN model is composed of multiple layers of convolution and de-convolution operators and is trained using the Stochastic Gradient Variational Bayes (SGVB) algorithm. We propose a training procedure to encourage neurons in the graphics code layer to represent a specific transformation (e.g. pose or light). Given a single input image, our model can generate new images of the same object with variations in pose and lighting. We present qualitative and quantitative results of the model's efficacy at learning a 3D rendering engine. | In comparison to existing approaches, it is important to note that our encoder network produces the interpretable and disentangled representations necessary to learn a meaningful 3D graphics engine. A number of inverse-graphics inspired methods have recently been proposed in the literature @cite_17 @cite_6 @cite_0 . However, most such methods rely on hand-crafted rendering engines. The exception to this is work by Hinton @cite_3 and Tieleman @cite_16 , which uses a domain-specific decoder to reconstruct input images. Our work is similar in spirit to these works but has some key differences: (a) it uses a very generic convolutional architecture in the encoder and decoder networks to enable efficient learning on large datasets and image sizes; (b) it can handle single static frames as opposed to the pair of images required in @cite_3 ; and (c) it is generative. | {
"cite_N": [
"@cite_3",
"@cite_6",
"@cite_0",
"@cite_16",
"@cite_17"
],
"mid": [
"",
"2949727452",
"183071939",
"2185466002",
"7824477"
],
"abstract": [
"",
"The idea of computer vision as the Bayesian inverse problem to computer graphics has a long history and an appealing elegance, but it has proved difficult to directly implement. Instead, most vision tasks are approached via complex bottom-up processing pipelines. Here we show that it is possible to write short, simple probabilistic graphics programs that define flexible generative models and to automatically invert them to interpret real-world images. Generative probabilistic graphics programs consist of a stochastic scene generator, a renderer based on graphics software, a stochastic likelihood model linking the renderer's output and the data, and latent variables that adjust the fidelity of the renderer and the tolerance of the likelihood model. Representations and algorithms from computer graphics, originally designed to produce high-quality images, are instead used as the deterministic backbone for highly approximate and stochastic generative models. This formulation combines probabilistic programming, computer graphics, and approximate Bayesian computation, and depends only on general-purpose, automatic inference techniques. We describe two applications: reading sequences of degraded and adversarially obscured alphanumeric characters, and inferring 3D road models from vehicle-mounted camera images. Each of the probabilistic graphics programs we present relies on under 20 lines of probabilistic code, and supports accurate, approximately Bayesian inferences about ambiguous real-world images.",
"Inverse graphics attempts to take sensor data and infer 3D geometry, illumination, materials, and motions such that a graphics renderer could realistically reproduce the observed scene. Renderers, however, are designed to solve the forward process of image synthesis. To go in the other direction, we propose an approximate differentiable renderer (DR) that explicitly models the relationship between changes in model parameters and image observations. We describe a publicly available OpenDR framework that makes it easy to express a forward graphics model and then automatically obtain derivatives with respect to the model parameters and to optimize over them. Built on a new auto-differentiation package and OpenGL, OpenDR provides a local optimization method that can be incorporated into probabilistic programming frameworks. We demonstrate the power and simplicity of programming with OpenDR by using it to solve the problem of estimating human body shape from Kinect depth and RGB data.",
"Optimizing Neural Networks that Generate Images Tijmen Tieleman Doctor of Philosophy Graduate Department of Computer Science University of Toronto 2014 Image recognition, also known as computer vision, is one of the most prominent applications of neural networks. The image recognition methods presented in this thesis are based on the reverse process: generating images. Generating images is easier than recognizing them, for the computer systems that we have today. This work leverages the ability to generate images, for the purpose of recognizing other",
"Computer vision is hard because of a large variability in lighting, shape, and texture; in addition the image signal is non-additive due to occlusion. Generative models promised to account for this variability by accurately modelling the image formation process as a function of latent variables with prior beliefs. Bayesian posterior inference could then, in principle, explain the observation. While intuitively appealing, generative models for computer vision have largely failed to deliver on that promise due to the difficulty of posterior inference. As a result the community has favored efficient discriminative approaches. We still believe in the usefulness of generative models in computer vision, but argue that we need to leverage existing discriminative or even heuristic computer vision methods. We implement this idea in a principled way in our informed sampler and in careful experiments demonstrate it on challenging models which contain renderer programs as their components. The informed sampler, using simple discriminative proposals based on existing computer vision technology achieves dramatic improvements in inference. Our approach enables a new richness in generative models that was out of reach with existing inference technology."
]
} |
1503.02519 | 1518172056 | Wireless underground sensor networks (WUSNs) can enable many important applications such as intelligent agriculture, pipeline fault diagnosis, mine disaster rescue, concealed border patrol, crude oil exploration, among others. The key challenge in realizing WUSNs is wireless communication in underground environments. Most existing wireless communication systems utilize dipole antennas to transmit and receive propagating electromagnetic (EM) waves, which do not work well in underground environments due to the very high material absorption loss. The Magnetic Induction (MI) technique provides a promising alternative solution that could address this problem underground. Although MI-based underground communication has been intensively investigated theoretically, to date little effort has been made to develop a testbed for MI-based underground communication that can validate the theoretical results. In this paper, a testbed for an MI-based communication system is designed and implemented in an in-lab underground environment. The testbed realizes and tests not only the original MI mechanism that utilizes a single coil but also recently developed techniques that use the MI waveguide and the 3-directional (3D) MI coils. The experiments are conducted in an in-lab underground environment with reconfigurable environmental parameters such as soil composition and water content. This paper provides the principles and guidelines for developing the MI underground communications testbed, which is very complicated and time-consuming due to the new communication mechanism and the new wireless transmission medium. | The concept of WUSNs was first introduced in @cite_2 , after which many novel applications based on WUSNs have been proposed. In @cite_7 , a WUSN is implemented to monitor underground tunnels to ensure safe working conditions in coal mines. 
In @cite_3 , WUSNs are deployed during the hydraulic fracturing process in crude oil extraction, providing real-time physical and chemical measurements deep inside oil reservoirs. In [6,7], WUSNs are utilized for pipeline leakage detection, where MI communications connect the sensors along pipelines. | {
"cite_N": [
"@cite_3",
"@cite_7",
"@cite_2"
],
"mid": [
"2077428143",
"2115513651",
"2170195314"
],
"abstract": [
"The real-time and in-situ monitoring capability in oil reservoirs is highly desired to increase the current recovery factor of crude oil and natural gas. To this end, the wireless sensor networks (WSNs) are envisioned to be deployed deep inside oil reservoirs to collect and report the physical and chemical information in real time. However, none of the existing wireless communication and networking technologies can support WSNs in oil reservoirs due to the very challenging environment and the extremely small device size. To address the problem, this paper proposes a new self-contained micro wireless sensor network framework based on the Magnetic Induction (MI) technique, which can enable the real-time and in-situ monitoring in oil reservoirs. Rigorous analytical models are developed to characterize the oil reservoir channel for both MI communication and energy transfer, which confirm the feasibility of the proposed self-contained sensor network framework. To enhance the system efficiency and reliability, high-permeability proppants are injected in the hydraulic fracture to increase mutual induction; while the tri-directional MI coil antenna is designed to achieve omnidirectional coverage. The theoretical models and numerical results are validated by the widely used finite element simulation software COMSOL Multiphysics.",
"Environment monitoring in coal mines is an important application of wireless sensor networks (WSNs) that has commercial potential. We discuss the design of a structure-aware self-adaptive WSN system, SASA. By regulating the mesh sensor network deployment and formulating a collaborative mechanism based on a regular beacon strategy, SASA is able to rapidly detect structure variations caused by underground collapses. A prototype is deployed with 27 Mica2 motes. We present our implementation experiences as well as the experimental results. To better evaluate the scalability and reliability of SASA, we also conduct a large-scale trace-driven simulation based on real data collected from the experiments.",
"This work introduces the concept of a Wireless Underground Sensor Network (WUSN). WUSNs can be used to monitor a variety of conditions, such as soil properties for agricultural applications and toxic substances for environmental monitoring. Unlike existing methods of monitoring underground conditions, which rely on buried sensors connected via wire to the surface, WUSN devices are deployed completely belowground and do not require any wired connections. Each device contains all necessary sensors, memory, a processor, a radio, an antenna, and a power source. This makes their deployment much simpler than existing underground sensing solutions. Wireless communication within a dense substance such as soil or rock is, however, significantly more challenging than through air. This factor, combined with the necessity to conserve energy due to the difficulty of unearthing and recharging WUSN devices, requires that communication protocols be redesigned to be as efficient as possible. This work provides an extensive overview of applications and design challenges for WUSNs, challenges for the underground communication channel including methods for predicting path losses in an underground link, and challenges at each layer of the communication protocol stack. � 2006 Elsevier B.V. All rights reserved."
]
} |
1503.02519 | 1518172056 | Wireless underground sensor networks (WUSNs) can enable many important applications such as intelligent agriculture, pipeline fault diagnosis, mine disaster rescue, concealed border patrol, crude oil exploration, among others. The key challenge in realizing WUSNs is wireless communication in underground environments. Most existing wireless communication systems utilize dipole antennas to transmit and receive propagating electromagnetic (EM) waves, which do not work well in underground environments due to the very high material absorption loss. The Magnetic Induction (MI) technique provides a promising alternative solution that could address this problem underground. Although MI-based underground communication has been intensively investigated theoretically, to date little effort has been made to develop a testbed for MI-based underground communication that can validate the theoretical results. In this paper, a testbed for an MI-based communication system is designed and implemented in an in-lab underground environment. The testbed realizes and tests not only the original MI mechanism that utilizes a single coil but also recently developed techniques that use the MI waveguide and the 3-directional (3D) MI coils. The experiments are conducted in an in-lab underground environment with reconfigurable environmental parameters such as soil composition and water content. This paper provides the principles and guidelines for developing the MI underground communications testbed, which is very complicated and time-consuming due to the new communication mechanism and the new wireless transmission medium. | To establish the theoretical foundations of wireless underground communications, channel models for the propagation of EM waves in underground environments are discussed in [9,13,14]. The EM wave-based underground communication testbed is developed in @cite_4 . 
The results from the testbed show that, without any aboveground device, EM wave-based communication over the pure underground channel suffers prohibitively high path loss. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2020513309"
],
"abstract": [
"Wireless Underground Sensor Networks (WUSNs) constitute one of the promising application areas of the recently developed wireless sensor networking techniques. WUSN is a specialized kind of Wireless Sensor Network (WSN) that mainly focuses on the use of sensors that communicate through soil. Recent models for the wireless underground communication channel are proposed but few field experiments were realized to verify the accuracy of the models. The realization of field WUSN experiments proved to be extremely complex and time-consuming in comparison with the traditional wireless environment. To the best of our knowledge, this is the first work that proposes guidelines for the development of an outdoor WUSN testbed with the goals of improving the accuracy and reducing of time for WUSN experiments. Although the work mainly aims WUSNs, many of the presented practices can also be applied to generic WSN testbeds."
]
} |
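The prohibitive path loss of the pure underground EM channel can be illustrated with the standard plane-wave attenuation constant for a lossy dielectric. The soil permittivity and conductivity values below are illustrative assumptions, not measurements from the cited testbed:

```python
import numpy as np

MU0 = 4e-7 * np.pi   # vacuum permeability, H/m
EPS0 = 8.854e-12     # vacuum permittivity, F/m

def attenuation_constant(freq_hz, eps_r, sigma):
    """Plane-wave attenuation constant alpha (Np/m) in a lossy
    dielectric, from the standard lossy-medium formula:
    alpha = w * sqrt( (mu*eps/2) * (sqrt(1 + (sigma/(w*eps))^2) - 1) )."""
    w = 2.0 * np.pi * freq_hz
    eps = eps_r * EPS0
    loss_tan = sigma / (w * eps)
    return w * np.sqrt(MU0 * eps / 2.0 * (np.sqrt(1.0 + loss_tan**2) - 1.0))

# Illustrative soil parameters at 900 MHz (assumed, not measured):
alpha_dry = attenuation_constant(900e6, eps_r=4.0, sigma=1e-4)   # ~0.01 Np/m
alpha_wet = attenuation_constant(900e6, eps_r=20.0, sigma=5e-2)  # ~2 Np/m (~18 dB/m)
```

Even with these rough numbers, wet soil attenuates the wave by orders of magnitude more per meter than dry soil, which is why soil water content is a key reconfigurable parameter in the testbed.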
1503.02519 | 1518172056 | Wireless underground sensor networks (WUSNs) can enable many important applications such as intelligent agriculture, pipeline fault diagnosis, mine disaster rescue, concealed border patrol, crude oil exploration, among others. The key challenge in realizing WUSNs is wireless communication in underground environments. Most existing wireless communication systems utilize dipole antennas to transmit and receive propagating electromagnetic (EM) waves, which do not work well in underground environments due to the very high material absorption loss. The Magnetic Induction (MI) technique provides a promising alternative solution that could address this problem underground. Although MI-based underground communication has been intensively investigated theoretically, to date little effort has been made to develop a testbed for MI-based underground communication that can validate the theoretical results. In this paper, a testbed for an MI-based communication system is designed and implemented in an in-lab underground environment. The testbed realizes and tests not only the original MI mechanism that utilizes a single coil but also recently developed techniques that use the MI waveguide and the 3-directional (3D) MI coils. The experiments are conducted in an in-lab underground environment with reconfigurable environmental parameters such as soil composition and water content. This paper provides the principles and guidelines for developing the MI underground communications testbed, which is very complicated and time-consuming due to the new communication mechanism and the new wireless transmission medium. | The MI technique is introduced to wireless underground communication in [8,10,16] and is shown to provide a more reliable underground communication channel. However, MI communication in its original form has a limited communication range due to the high attenuation rate in the near region. 
The MI waveguide concept was originally developed in [17-22] for applications such as artificial delay lines and filters, dielectric mirrors, distributed Bragg reflectors, and slow-wave structures in microwave tubes. Channel models for both the original MI communication and the MI waveguide have matured in recent years [8,10,11,16]. A model of 3D MI communication is also introduced in @cite_3 . | {
"cite_N": [
"@cite_3"
],
"mid": [
"2077428143"
],
"abstract": [
"The real-time and in-situ monitoring capability in oil reservoirs is highly desired to increase the current recovery factor of crude oil and natural gas. To this end, the wireless sensor networks (WSNs) are envisioned to be deployed deep inside oil reservoirs to collect and report the physical and chemical information in real time. However, none of the existing wireless communication and networking technologies can support WSNs in oil reservoirs due to the very challenging environment and the extremely small device size. To address the problem, this paper proposes a new self-contained micro wireless sensor network framework based on the Magnetic Induction (MI) technique, which can enable the real-time and in-situ monitoring in oil reservoirs. Rigorous analytical models are developed to characterize the oil reservoir channel for both MI communication and energy transfer, which confirm the feasibility of the proposed self-contained sensor network framework. To enhance the system efficiency and reliability, high-permeability proppants are injected in the hydraulic fracture to increase mutual induction; while the tri-directional MI coil antenna is designed to achieve omnidirectional coverage. The theoretical models and numerical results are validated by the widely used finite element simulation software COMSOL Multiphysics."
]
} |
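The limited range of the original single-coil MI mechanism follows directly from near-field geometry: the mutual inductance of two small coaxial loops falls off as 1/d^3, so received power falls roughly as 1/d^6. A sketch under the assumption that the separation d is much larger than the coil radii (coil parameters are illustrative):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, H/m

def mutual_inductance_coaxial(n1, n2, a1, a2, d):
    """On-axis mutual inductance of two small coaxial loops with
    n1, n2 turns and radii a1, a2 (valid for d >> a1, a2):
        M = mu0 * pi * n1 * n2 * a1^2 * a2^2 / (2 * d^3)."""
    return MU0 * np.pi * n1 * n2 * a1**2 * a2**2 / (2.0 * d**3)

# Doubling the distance divides M by 8; received power scales with M^2,
# hence the ~1/d^6 decay that motivates the MI waveguide relay coils.
d = np.array([1.0, 2.0, 4.0])
m = mutual_inductance_coaxial(20, 20, 0.15, 0.15, d)
```

This steep decay is also why the 3D coil configuration matters: a mis-oriented single coil loses even the 1/d^3 coupling when the axes are not aligned.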
1503.02519 | 1518172056 | Wireless underground sensor networks (WUSNs) can enable many important applications such as intelligent agriculture, pipeline fault diagnosis, mine disaster rescue, concealed border patrol, crude oil exploration, among others. The key challenge in realizing WUSNs is wireless communication in underground environments. Most existing wireless communication systems utilize dipole antennas to transmit and receive propagating electromagnetic (EM) waves, which do not work well in underground environments due to the very high material absorption loss. The Magnetic Induction (MI) technique provides a promising alternative solution that could address this problem underground. Although MI-based underground communication has been intensively investigated theoretically, to date little effort has been made to develop a testbed for MI-based underground communication that can validate the theoretical results. In this paper, a testbed for an MI-based communication system is designed and implemented in an in-lab underground environment. The testbed realizes and tests not only the original MI mechanism that utilizes a single coil but also recently developed techniques that use the MI waveguide and the 3-directional (3D) MI coils. The experiments are conducted in an in-lab underground environment with reconfigurable environmental parameters such as soil composition and water content. This paper provides the principles and guidelines for developing the MI underground communications testbed, which is very complicated and time-consuming due to the new communication mechanism and the new wireless transmission medium. | Despite the active theoretical research, implementations and experimental results on MI underground communications are scarce. A few implementations of MI techniques have been developed in areas other than communications. 
In @cite_5 , MI techniques are used to create a charger along a railway that continuously charges passing trains. In [17,24-28], two coupled MI coils transfer electric power between portable wireless devices. In [17,26], the MI waveguide with strongly coupled neighboring coils is tested. None of the above implementations and experiments evaluate MI-based communications. Moreover, those experiments on MI techniques were conducted in air; no experiments have examined MI communication performance in the complicated underground medium. | {
"cite_N": [
"@cite_5"
],
"mid": [
"1968906006"
],
"abstract": [
"In this paper, we will report some recent progress on wireless power transfer (WPT) based on resonant coupling. Two major technologies will be discussed: the use of metamaterials and array of coupled resonators. With a slab of metamaterial, the near-field coupling between two resonant coils can be enhanced; the power transfer efficiency between coils is boosted by the metamaterial. The principle of enhanced coupling with metamaterial will be discussed; the design of metamaterial slabs for near-field wireless power transfer will be shown; recent experimental results on wireless power transfer efficiency improvement with metamaterial will also be presented. By using an array of resonators, the range of efficient power transfer can be greatly extended. More importantly, this new technology can provide wireless power to both static and mobile devices dynamically. The principle of this technology will be explained; analytical and numerical models will be used to evaluate the performance of a WPT system with an array of resonators; recent experimental developments will also be presented."
]
} |
1503.02781 | 1724071807 | A graph is used to represent data in which the relationships between the objects in the data are at least as important as the objects themselves. Over the last two decades nearly a hundred file formats have been proposed or used to provide portable access to such data. This paper seeks to review these formats, and provide some insight to both reduce the ongoing creation of unnecessary formats, and guide the development of new formats where needed. | There is a wide-ranging survey of graph databases @cite_21 , which is more concerned with the underlying database aspects, the relationship between a graph database and other more traditional databases such as a relational database, and the properties of various exemplar graph databases. | {
"cite_N": [
"@cite_21"
],
"mid": [
"2114507260"
],
"abstract": [
"Graph database models can be defined as those in which data structures for the schema and instances are modeled as graphs or generalizations of them, and data manipulation is expressed by graph-oriented operations and type constructors. These models took off in the eighties and early nineties alongside object-oriented models. Their influence gradually died out with the emergence of other database models, in particular geographical, spatial, semistructured, and XML. Recently, the need to manage information with graph-like nature has reestablished the relevance of this area. The main objective of this survey is to present the work that has been conducted in the area of graph database modeling, concentrating on data structures, query languages, and integrity constraints."
]
} |
1503.02781 | 1724071807 | A graph is used to represent data in which the relationships between the objects in the data are at least as important as the objects themselves. Over the last two decades nearly a hundred file formats have been proposed or used to provide portable access to such data. This paper seeks to review these formats, and provide some insight to both reduce the ongoing creation of unnecessary formats, and guide the development of new formats where needed. | One additional paper to consider is @cite_10 , which was written specifically with the view of designing a new, more universal graph format. We deliberately avoid this approach in order to avoid bias in our discussion. | {
"cite_N": [
"@cite_10"
],
"mid": [
"1572439473"
],
"abstract": [
"Prompted by the increasing demand for a standard exchange format for graph data, an informal workshop was held in conjunction with Graph Drawing 2000. The participants identified requirements for such a standard and formed a group to work out a proposal. The current status of this effort is publicly available at http: www.graphdrawing.org data format ."
]
} |
1503.02940 | 2269811247 | Low reliability and availability of public SPARQL endpoints prevent real-world applications from exploiting all the potential of these querying infrastructures. Fragmenting data on servers can improve data availability but degrades performance. Replicating fragments can offer a new tradeoff between performance and availability. We propose FEDRA, a framework for querying Linked Data that takes advantage of client-side data replication, and performs a source selection algorithm that aims to reduce the number of selected public SPARQL endpoints, execution time, and intermediate results. FEDRA has been implemented on the state-of-the-art query engines ANAPSID and FedX, and empirically evaluated on a variety of real-world datasets. | Col-graph @cite_11 enables data consumers to materialize triple pattern fragments and to expose them through SPARQL endpoints to improve data quality. A data consumer can update her local fragments and share updates with data providers and consumers. Col-graph proposes a coordination-free protocol to maintain the consistency of replicated fragments. Compared to LDF, Col-graph clearly creates SPARQL endpoints available for other data consumers, and allows federated query engines to use local fragments. As for LDF, our approach can take advantage of these data consumer resources. | {
"cite_N": [
"@cite_11"
],
"mid": [
"2114820226"
],
"abstract": [
"Linked Open Data faces severe issues of scalability, availability and data quality. These issues are observed by data consumers performing federated queries; SPARQL endpoints do not respond and results can be wrong or out-of-date. If a data consumer finds an error, how can she fix it? This raises the issue of the writability of Linked Data. In this paper, we devise an extension of the federation of Linked Data to data consumers. A data consumer can make partial copies of different datasets and make them available through a SPARQL endpoint. A data consumer can update her local copy and share updates with data providers and consumers. Update sharing improves general data quality, and replicated data creates opportunities for federated query engines to improve availability. However, when updates occur in an uncontrolled way, consistency issues arise. In this paper, we define fragments as SPARQL CONSTRUCT federated queries and propose a correction criterion to maintain these fragments incrementally without reevaluating the query. We define a coordination free protocol based on the counting of triples derivations and provenance. We analyze the theoretical complexity of the protocol in time, space and traffic. Experimental results suggest the scalability of our approach to Linked Data."
]
} |
1503.02940 | 2269811247 | Low reliability and availability of public SPARQL endpoints prevent real-world applications from exploiting all the potential of these querying infrastructures. Fragmenting data on servers can improve data availability but degrades performance. Replicating fragments can offer a new tradeoff between performance and availability. We propose FEDRA, a framework for querying Linked Data that takes advantage of client-side data replication, and performs a source selection algorithm that aims to reduce the number of selected public SPARQL endpoints, execution time, and intermediate results. FEDRA has been implemented on the state-of-the-art query engines ANAPSID and FedX, and empirically evaluated on a variety of real-world datasets. | Recently, HiBISCuS @cite_1 , a source selection approach, has been proposed to reduce the number of selected sources. The reduction is achieved by annotating sources with the URI authorities they contain, and pruning sources that cannot have triples that match any of the query triple patterns. HiBISCuS differs from our aim of both selecting sources that are required to produce the answer, and avoiding the selection of sources that only provide redundant replicated fragments. While not directly related to replication, the HiBISCuS index could be used in conjunction with our approach to perform join-aware source selection in the presence of replicated fragments. | {
"cite_N": [
"@cite_1"
],
"mid": [
"153622350"
],
"abstract": [
"Efficient federated query processing is of significant importance to tame the large amount of data available on the Web of Data. Previous works have focused on generating optimized query execution plans for fast result retrieval. However, devising source selection approaches beyond triple pattern-wise source selection has not received much attention. This work presents HiBISCuS, a novel hypergraph-based source selection approach to federated SPARQL querying. Our approach can be directly combined with existing SPARQL query federation engines to achieve the same recall while querying fewer data sources. We extend three well-known SPARQL query federation engines with HiBISCus and compare our extensions with the original approaches on FedBench. Our evaluation shows that HiBISCuS can efficiently reduce the total number of sources selected without losing recall. Moreover, our approach significantly reduces the execution time of the selected engines on most of the benchmark queries."
]
} |
1503.02940 | 2269811247 | Low reliability and availability of public SPARQL endpoints prevent real-world applications from exploiting all the potential of these querying infrastructures. Fragmenting data on servers can improve data availability but degrades performance. Replicating fragments can offer a new tradeoff between performance and availability. We propose FEDRA, a framework for querying Linked Data that takes advantage of client-side data replication, and performs a source selection algorithm that aims to reduce the number of selected public SPARQL endpoints, execution time, and intermediate results. FEDRA has been implemented on the state-of-the-art query engines ANAPSID and FedX, and empirically evaluated on a variety of real-world datasets. | Existing federated query engines @cite_12 @cite_14 @cite_0 @cite_9 @cite_3 are not able to take advantage of replicated fragments, and data overlapping can seriously degrade their performance, as reported in Figure and shown in @cite_19 @cite_5 . We integrated FEDRA within FedX and ANAPSID to make existing engines aware of replicated fragments. With FEDRA, replications as in Figure will be detected, and performance will remain stable. | {
"cite_N": [
"@cite_14",
"@cite_9",
"@cite_3",
"@cite_0",
"@cite_19",
"@cite_5",
"@cite_12"
],
"mid": [
"1808100928",
"1555486317",
"1484056211",
"2265585838",
"2408716304",
"",
"2248646379"
],
"abstract": [
"Traditionally Semantic Web applications either included a web crawler or relied on external services to gain access to the Web of Data. Recent efforts, have enabled applications to query the entire Semantic Web for up-to-date results. Such approaches are based on either centralized indexing of semantically annotated metadata or link traversal and URI dereferencing as in the case of Linked Open Data. They pose a number of limiting assumptions, thus breaking the openness principle of the Web. In this demo we present a novel technique called Avalanche, designed to allow a data surfer to query the Semantic Web transparently. The technique makes no prior assumptions about data distribution. Specifically, Avalanche can perform \"live\" queries over the Web of Data. First, it gets on-line statistical information about the data distribution, as well as bandwidth availability. Then, it plans and executes the query in a distributed manner trying to quickly provide first answers.",
"Integrated access to multiple distributed and autonomous RDF data sources is a key challenge for many semantic web applications. As a reaction to this challenge, SPARQL, the W3C Recommendation for an RDF query language, supports querying of multiple RDF graphs. However, the current standard does not provide transparent query federation, which makes query formulation hard and lengthy. Furthermore, current implementations of SPARQL load all RDF graphs mentioned in a query to the local machine. This usually incurs a large overhead in network traffic, and sometimes is simply impossible for technical or legal reasons. To overcome these problems we present DARQ, an engine for federated SPARQL queries. DARQ provides transparent query access to multiple SPARQL services, i.e., it gives the user the impression to query one single RDF graph despite the real data being distributed on the web. A service description language enables the query engine to decompose a query into sub-queries, each of which can be answered by an individual service. DARQ also uses query rewriting and cost-based query optimization to speed-up query execution. Experiments show that these optimizations significantly improve query performance even when only a very limited amount of statistical information is available. DARQ is available under GPL License at http: darq.sf.net .",
"Motivated by the ongoing success of Linked Data and the growing amount of semantic data sources available on the Web, new challenges to query processing are emerging. Especially in distributed settings that require joining data provided by multiple sources, sophisticated optimization techniques are necessary for efficient query processing. We propose novel join processing and grouping techniques to minimize the number of remote requests, and develop an effective solution for source selection in the absence of preprocessed metadata. We present FedX, a practical framework that enables efficient SPARQL query processing on heterogeneous, virtually integrated Linked Data sources. In experiments, we demonstrate the practicability and efficiency of our framework on a set of real-world queries and data sources from the Linked Open Data cloud. With FedX we achieve a significant improvement in query performance over state-of-the-art federated query engines.",
"In order to leverage the full potential of the Semantic Web it is necessary to transparently query distributed RDF data sources in the same way as it has been possible with federated databases for ages. However, there are significant differences between the Web of (linked) Data and the traditional database approaches. Hence, it is not straightforward to adapt successful database techniques for RDF federation. Reasons are the missing cooperation between SPARQL endpoints and the need for detailed data statistics for estimating the costs of query execution plans. We have implemented SPLENDID, a query optimization strategy for federating SPARQL endpoints based on statistical data obtained from voiD descriptions.",
"Data replication and deployment of local SPARQL endpoints improve scalability and availability of public SPARQL endpoints, making the consumption of Linked Data a reality. This solution requires synchronization and specific query processing strategies to take advantage of replication. However, existing replication aware techniques in federations of SPARQL endpoints do not consider data dynamicity. We propose Fedra, an approach for querying federations of endpoints that benefits from replication. Participants in Fedra federations can copy fragments of data from several datasets, and describe them using provenance and views. These descriptions enable Fedra to reduce the number of selected endpoints while satisfying user divergence requirements. Experiments on real-world datasets suggest savings of up to three orders of magnitude.",
"",
"Following the design rules of Linked Data, the number of available SPARQL endpoints that support remote query processing is quickly growing; however, because of the lack of adaptivity, query executions may frequently be unsuccessful. First, fixed plans identified following the traditional optimize-then-execute paradigm, may timeout as a consequence of endpoint availability. Second, because blocking operators are usually implemented, endpoint query engines are not able to incrementally produce results, and may become blocked if data sources stop sending data. We present ANAPSID, an adaptive query engine for SPARQL endpoints that adapts query execution schedulers to data availability and run-time conditions. ANAPSID provides physical SPARQL operators that detect when a source becomes blocked or data traffic is bursty, and opportunistically, the operators produce results as quickly as data arrives from the sources. Additionally, ANAPSID operators implement main memory replacement policies to move previously computed matches to secondary memory avoiding duplicates. We compared ANAPSID performance with respect to RDF stores and endpoints, and observed that ANAPSID speeds up execution time, in some cases, in more than one order of magnitude."
]
} |
1503.02828 | 1591995413 | Nuclear-norm regularization plays a vital role in many learning tasks, such as low-rank matrix recovery (MR), and low-rank representation (LRR). Solving this problem directly can be computationally expensive due to the unknown rank of variables or large-rank singular value decompositions (SVDs). To address this, we propose a proximal Riemannian gradient (PRG) scheme which can efficiently solve trace-norm regularized problems defined on real-algebraic variety @math of real matrices of rank at most @math . Based on PRG, we further present a simple and novel subspace pursuit (SP) paradigm for general trace-norm regularized problems without the explicit rank constraint @math . The proposed paradigm is very scalable by avoiding large-rank SVDs. Empirical studies on several tasks, such as matrix completion and LRR based subspace clustering, demonstrate the superiority of the proposed paradigms over existing methods. | The authors in @cite_42 exploited Riemannian structures and presented a trust-region algorithm to address trace-norm minimizations. The proposed method, denoted by MMBS, alternates between fixed-rank optimization and rank-one updates. However, this method is slower than APG on large-scale problems @cite_42 . The authors in @cite_5 proposed a Grassmannian manifold method to address trace-norm minimizations on a fixed-rank manifold. In general, this method has similar complexity to ScGrassMC, which also operates on the Grassmann manifold @cite_0 . | {
"cite_N": [
"@cite_0",
"@cite_5",
"@cite_42"
],
"mid": [
"2098670069",
"2395625855",
"1998367754"
],
"abstract": [
"This paper describes gradient methods based on a scaled metric on the Grassmann manifold for low-rank matrix completion. The proposed methods significantly improve canonical gradient methods, especially on ill-conditioned matrices, while maintaining established global convergence and exact recovery guarantees. A connection between a form of subspace iteration for matrix completion and the scaled gradient descent procedure is also established. The proposed conjugate gradient method based on the scaled gradient outperforms several existing algorithms for matrix completion and is competitive with recently proposed methods.",
"This paper aims to address a class of nuclear norm regularized least square (NNLS) problems. By exploiting the underlying low-rank matrix manifold structure, the problem with nuclear norm regularization is cast to a Riemannian optimization problem over matrix manifolds. Compared with existing NNLS algorithms involving singular value decomposition (SVD) of large-scale matrices, our method achieves significant reduction in computational complexity. Moreover, the uniqueness of matrix factorization can be guaranteed by our Grassmannian manifold method. In our solution, we first introduce the bilateral factorization into the original NNLS problem and convert it into a Grassmannian optimization problem by using a linearized technique. Then the conjugate gradient procedure on the Grassmannian manifold is developed for our method with a guarantee of local convergence. Finally, our method can be extended to address the graph regularized problem. Experimental results verified both the efficiency and effectiveness of our method.",
"The paper addresses the problem of low-rank trace norm minimization. We propose an algorithm that alternates between fixed-rank optimization and rank-one updates. The fixed-rank optimization is characterized by an efficient factorization that makes the trace norm differentiable in the search space and the computation of duality gap numerically tractable. The search space is nonlinear but is equipped with a Riemannian structure that leads to efficient computations. We present a second-order trust-region algorithm with a guaranteed quadratic rate of convergence. Overall, the proposed optimization scheme converges superlinearly to the global solution while maintaining complexity that is linear in the number of rows and columns of the matrix. To compute a set of solutions efficiently for a grid of regularization parameters we propose a predictor-corrector approach that outperforms the naive warm-restart approach on the fixed-rank quotient manifold. The performance of the proposed algorithm is illustrated on p..."
]
} |
1503.02828 | 1591995413 | Nuclear-norm regularization plays a vital role in many learning tasks, such as low-rank matrix recovery (MR), and low-rank representation (LRR). Solving this problem directly can be computationally expensive due to the unknown rank of variables or large-rank singular value decompositions (SVDs). To address this, we propose a proximal Riemannian gradient (PRG) scheme which can efficiently solve trace-norm regularized problems defined on real-algebraic variety @math of real matrices of rank at most @math . Based on PRG, we further present a simple and novel subspace pursuit (SP) paradigm for general trace-norm regularized problems without the explicit rank constraint @math . The proposed paradigm is very scalable by avoiding large-rank SVDs. Empirical studies on several tasks, such as matrix completion and LRR based subspace clustering, demonstrate the superiority of the proposed paradigms over existing methods. | Active subspace methods or greedy methods, which increase the rank by one per iteration, have gained great attention in recent years @cite_13 @cite_28 @cite_46 @cite_1 . However, these methods usually involve expensive subproblems, and might be very expensive when the true rank is high. For example, Laue's method @cite_1 needs to solve nonlinear master problems using the BFGS method, which can be very expensive for large-scale problems. More recently, a novel active subspace selection method was proposed for solving trace-norm regularized problems. However, this method may suffer from slow convergence due to the approximated SVDs and inefficient solvers for the subproblem optimization. A Riemannian pursuit (RP) algorithm that increases the rank by more than one per iteration has also been proposed. However, this algorithm cannot deal with trace-norm regularized problems. | {
"cite_N": [
"@cite_28",
"@cite_46",
"@cite_1",
"@cite_13"
],
"mid": [
"1574851760",
"2103325283",
"2962693888",
"1775587472"
],
"abstract": [
"Optimization problems with a nuclear norm regularization, such as e.g. low norm matrix factorizations, have seen many applications recently. We propose a new approximation algorithm building upon the recent sparse approximate SDP solver of (Hazan, 2008). The experimental efficiency of our method is demonstrated on large matrix completion problems such as the Netflix dataset. The algorithm comes with strong convergence guarantees, and can be interpreted as a first theoretically justified variant of Simon-Funk-type SVD heuristics. The method is free of tuning parameters, and very easy to parallelize.",
"We address the problem of minimizing a convex function over the space of large matrices with low rank. While this optimization problem is hard in general, we propose an efficient greedy algorithm and derive its formal approximation guarantees. Each iteration of the algorithm involves (approximately) finding the left and right singular vectors corresponding to the largest singular value of a certain matrix, which can be calculated in linear time. This leads to an algorithm which can scale to large matrices arising in several applications such as matrix completion for collaborative filtering and robust low rank matrix approximation.",
"We present a hybrid algorithm for optimizing a convex, smooth function over the cone of positive semidefinite matrices. Our algorithm converges to the global optimal solution and can be used to solve general large-scale semidefinite programs and hence can be readily applied to a variety of machine learning problems. We show experimental results on three machine learning problems. Our approach outperforms state-of-the-art algorithms.",
"We propose an algorithm for approximately maximizing a concave function over the bounded semi-definite cone, which produces sparse solutions. Sparsity for SDP corresponds to low rank matrices, and is a important property for both computational as well as learning theoretic reasons. As an application, building on Aaronson's recent work, we derive a linear time algorithm for Quantum State Tomography."
]
} |
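The trace-norm (nuclear-norm) regularized problems discussed in the row above all revolve around one proximal operator: singular value thresholding, whose large-rank SVD is exactly the cost the paper's subspace-pursuit paradigm tries to avoid. A minimal NumPy sketch of that operator (illustrative only; the function name is ours and this is not the PRG/MMBS implementation):

```python
import numpy as np

def svt(M, tau):
    """Proximal operator of tau * (nuclear norm): soft-threshold
    the singular values of M by tau and rebuild the matrix."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 4))
X = svt(M, 1.0)
# Soft-thresholding can only shrink singular values, so the rank never grows.
assert np.linalg.matrix_rank(X) <= np.linalg.matrix_rank(M)
```

A proximal-gradient scheme for min_X f(X) + lambda * ||X||_* alternates a gradient step on f with this operator (e.g. APG mentioned above does so with acceleration).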
1503.02217 | 2198291345 | It was recently conjectured that the permanent of a @math -lifting @math of a matrix @math of degree @math is less than or equal to the @math th power of the permanent perm @math , i.e., perm @math and, consequently, that the degree- @math Bethe permanent @math of a matrix @math is less than or equal to the permanent perm @math of @math , i.e., perm @math . In this paper, we prove these related conjectures and show in addition a few properties of the permanent of block matrices that are lifts of a matrix. As a corollary, we obtain an alternative proof of the inequality perm @math on the Bethe permanent of the base matrix @math that uses only the combinatorial definition of the Bethe permanent. | The literature on permanents and on adjacent areas (counting perfect matchings, counting 0-1 matrices with specified row and column sums, etc.) is vast. Apart from the previously mentioned papers, the papers most relevant to our work are the one by Chertkov & Yedidia @cite_12 , which studies the so-called fractional free energy functionals and the resulting lower and upper bounds on the permanent of a non-negative matrix, the papers @cite_7 (on counting perfect matchings in random graph covers) and @cite_1 (on counting matchings in graphs with the help of the sum-product algorithm; computing the permanent is related to counting perfect matchings), and @cite_18 @cite_32 @cite_21 (on max-product / min-sum-algorithm-based approaches to the maximum weight perfect matching problem). Also relevant is the line of work on approximating the permanent of a non-negative matrix using Markov-chain-Monte-Carlo-based methods @cite_6 , polynomial-time randomized approximation schemes @cite_0 , and Bethe-approximation or sum-product-algorithm (SPA) based methods @cite_18 @cite_22 . See @cite_27 for a more detailed account of these and other related papers. | {
"cite_N": [
"@cite_18",
"@cite_22",
"@cite_7",
"@cite_21",
"@cite_1",
"@cite_32",
"@cite_6",
"@cite_0",
"@cite_27",
"@cite_12"
],
"mid": [
"1628027163",
"",
"2167490374",
"",
"1986705527",
"",
"2044105411",
"2161611531",
"2761017552",
"1520400698"
],
"abstract": [
"This work describes a method of approximating matrix permanents efficiently using belief propagation. We formulate a probability distribution whose partition function is exactly the permanent, then use Bethe free energy to approximate this partition function. After deriving some speedups to standard belief propagation, the resulting algorithm requires O(n^2) time per iteration. Finally, we demonstrate the advantages of using this approximation.",
"",
"Let G be a fixed connected multigraph with no loops. A random n-lift of G is obtained by replacing each vertex of G by a set of n vertices (where these sets are pairwise disjoint) and replacing each edge by a randomly chosen perfect matching between the n-sets corresponding to the endpoints of the edge. Let XG be the number of perfect matchings in a random lift of G. We study the distribution of XG in the limit as n tends to infinity, using the small subgraph conditioning method. We present several results including an asymptotic formula for the expectation of XG when G is d-regular, d ≥ 3. The interaction of perfect matchings with short cycles in random lifts of regular multigraphs is also analysed. Partial calculations are performed for the second moment of XG, with full details given for two example multigraphs, including the complete graph K4. To assist in our calculations we provide a theorem for estimating a summation over multiple dimensions using Laplace's method. This result is phrased as a summation over lattice points, and may prove useful in future applications.",
"",
"In this paper we rigorously prove the validity of the cavity method for the problem of counting the number of matchings in graphs with large girth. Cavity method is an important heuristic developed by statistical physicists that has lead to the development of faster distributed algorithms for problems in various combinatorial optimization problems. The validity of the approach has been supported mostly by numerical simulations. In this paper we prove the validity of cavity method for the problem of counting matchings using rigorous techniques. We hope that these rigorous approaches will finally help us establish the validity of the cavity method in general.",
"",
"Let G=(U, V, E) be a bipartite graph with |U|=|V|=n. The factor size of G, f, is the maximum number of edge disjoint perfect matchings in G. We characterize the complexity of counting the number of perfect matchings in classes of graphs parameterized by factor size. We describe the simple algorithm, which is an approximation algorithm for the permanent that is a natural simplification of the algorithm suggested by Broder (1986) and analyzed by Jerrum and Sinclair (1988a, b). Compared to the algorithm by Jerrum and Sinclair (1988a, b), the simple algorithm achieves a polynomial speed up in the running time to compute the permanent. A combinatorial lemma is used to prove that the simple algorithm runs in time n^{O(n/f)}. Thus: (1) for all constants α>0, the simple algorithm runs in polynomial time for graphs with factor size at least αn; (2) for some constant c, the simple algorithm is the fastest known approximation for graphs with factor size at least c log n. (Compare with the approximation algorithms described in Karmarkar (1988).) We prove the following complementary hardness results. For functions f such that 3⩽f(n)⩽n−3, the exact counting problem for f(n)-regular bipartite graphs is #P-complete. For any e>0, for any function f such that 3⩽f(n)⩽n^{1−e}, approximate counting for f(n)-regular bipartite graphs is as hard as approximate counting for all bipartite graphs. An announcement of these results appears in Dagum (1988).",
"We present a polynomial-time randomized algorithm for estimating the permanent of an arbitrary n × n matrix with nonnegative entries. This algorithm---technically a \"fully-polynomial randomized approximation scheme\"---computes an approximation that is, with high probability, within arbitrarily small specified relative error of the true value of the permanent.",
"It has recently been observed that the permanent of a nonnegative square matrix, i.e., of a square matrix containing only nonnegative real entries, can very well be approximated by solving a certain Bethe free energy function minimization problem with the help of the sum-product algorithm. We call the resulting approximation of the permanent the Bethe permanent. In this paper, we give reasons why this approach to approximating the permanent works well. Namely, we show that the Bethe free energy function is convex and that the sum-product algorithm finds its minimum efficiently. We then discuss the fact that the permanent is lower bounded by the Bethe permanent, and we comment on potential upper bounds on the permanent based on the Bethe permanent. We also present a combinatorial characterization of the Bethe permanent in terms of permanents of so-called lifted versions of the matrix under consideration. Moreover, we comment on possibilities to modify the Bethe permanent so that it approximates the permanent even better, and we conclude the paper with some observations and conjectures about permanent-based pseudocodewords and permanent-based kernels.",
"We discuss schemes for exact and approximate computations of permanents, and compare them with each other. Specifically, we analyze and generalize the Belief Propagation (BP) approach to computing the permanent of a non-negative matrix. Known bounds and conjectures are verified in experiments, and some new theoretical relations, bounds and conjectures are proposed. We introduce a fractional free energy functional parameterized by a scalar parameter @math , where @math corresponds to the BP limit and @math corresponds to the exclusion principle Mean-Field (MF) limit, and show monotonicity and continuity of the functional with @math . We observe that the optimal value of @math , where the @math -parameterized functional is equal to the exact free energy (defined as the minus log of the permanent), lies in the @math range, with the low and high values from the range producing provable low and upper bounds for the permanent. Our experimental analysis suggests that the optimal @math varies for different ensembles considered but it always lies in the @math interval. Besides, for all ensembles considered the behavior of the optimal @math is highly distinctive, thus offering a lot of practical potential for estimating permanents of non-negative matrices via the fractional free energy functional approach."
]
} |
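For context on the quantity the works in this row approximate: the permanent of a small matrix can be computed exactly, either by brute force over all permutations (O(n!·n)) or by Ryser's inclusion-exclusion formula (O(2^n·n)); the hardness of doing this at scale is what motivates MCMC and Bethe/SPA approximations. A small illustrative sketch (not taken from any cited paper):

```python
import itertools

import numpy as np

def permanent_bruteforce(A):
    # Definition: perm(A) = sum over permutations p of prod_i A[i, p(i)].
    n = A.shape[0]
    return sum(np.prod([A[i, p[i]] for i in range(n)])
               for p in itertools.permutations(range(n)))

def permanent_ryser(A):
    # Ryser's formula: perm(A) = (-1)^n * sum over nonempty column
    # subsets S of (-1)^|S| * prod_i (sum of row i over columns in S).
    n = A.shape[0]
    total = 0.0
    for mask in range(1, 1 << n):
        cols = [j for j in range(n) if (mask >> j) & 1]
        total += (-1) ** len(cols) * np.prod(A[:, cols].sum(axis=1))
    return (-1) ** n * total

A = np.arange(1.0, 10.0).reshape(3, 3)
# Both give 450.0 for this 3x3 example.
assert abs(permanent_ryser(A) - permanent_bruteforce(A)) < 1e-9
```

Unlike the determinant, the permanent has no sign cancellation to exploit, which is why exact computation is #P-hard and the approximation schemes surveyed above matter.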
1503.02284 | 1487492660 | We consider extensions of Hoeffding's " exponential method " approach for obtaining upper estimates on the probability that a sum of independent and bounded random variables is significantly larger than its mean. We show that the exponential function in Hoeffding's approach can be replaced with any function which is non-negative, increasing and convex. As a result we generalize and improve upon Hoeffding's inequality. Our approach allows us to obtain " missing factors " in Hoeffding's inequality. The latter result is a rather weaker version of a theorem that is due to Michel Talagrand. Moreover, we characterize the class of functions with respect to which our method yields optimal concentration bounds. Finally, using ideas from the theory of Bernstein polynomials, we show that similar ideas apply under information on higher moments of the random variables. | There is a considerable body of work dedicated to improving Hoeffding's bound. See for example the work of Bentkus @cite_25 , Pinelis @cite_10 , Siegel @cite_23 and Talagrand @cite_26 , to name just a few references. Let us bring to the reader's attention the following two results, which are extracted from the papers of Talagrand ( @cite_26 , Theorem @math ) and Bentkus ( @cite_25 , Theorem @math ). Talagrand's paper focuses on obtaining some "missing" factors in Hoeffding's inequality whose existence is motivated by the Central Limit Theorem (see @cite_26 , Section @math ). These factors are obtained by combining the Bernstein-Hoeffding method with a technique (a suitable change of measure) that is used in the proof of Cramér's theorem on large deviations, yielding the following. | {
"cite_N": [
"@cite_26",
"@cite_10",
"@cite_25",
"@cite_23"
],
"mid": [
"2479913478",
"2029060922",
"2037982184",
"1493462108"
],
"abstract": [
"A celebrated article of Hoeffding establishes bounds for the deviation of a sum of independent random variables from its mean. Combining Hoeffding's method with the Esscher transform, we show how, under minimal additional hypotheses, these bounds can be improved in an essentially optimal way.",
"",
"In a celebrated work by Hoeffding [J. Amer. Statist. Assoc. 58 (1963) 13-30], several inequalities for tail probabilities of sums M_n = X_1 + ... + X_n of bounded independent random variables X_j were proved. These inequalities had a considerable impact on the development of probability and statistics, and remained unimproved until 1995 when Talagrand [Inst. Hautes Etudes Sci. Publ. Math. 81 (1995a) 73-205] inserted certain missing factors in the bounds of two theorems. By similar factors, a third theorem was refined by Pinelis [Progress in Probability 43 (1998) 257-314] and refined (and extended) by me. In this article, I introduce a new type of inequality. Namely, I show that P{M_n ≥ x} ≤ c P{S_n ≥ x}, where c is an absolute constant and S_n = e_1 + ... + e_n is a sum of independent identically distributed Bernoulli random variables (a random variable is called Bernoulli if it assumes at most two values). The inequality holds for those x ∈ R where the survival function x → P{S_n ≥ x} has a jump down. For the remaining x the inequality still holds provided that the function between the adjacent jump points is interpolated linearly or log-linearly. If it is necessary, to estimate P{S_n ≥ x} special bounds can be used for binomial probabilities. The results extend to martingales with bounded differences. It is apparent that Theorem 1.1 of this article is the most important. The inequalities have applications to measure concentration, leading to results of the type where, up to an absolute constant, the measure concentration is dominated by the concentration in a simplest appropriate model; such results will be considered elsewhere.",
"Let X be a sum of real valued random variables and have a bounded mean E[X]. The generic Chernoff-Hoeffding estimate for large deviations of X is: P X-E[X]<=a =0 exp(-y(a+E[X]))E[exp(y X)], which applies with a<=0 to random variables with very small tails. At issue is how to use this method to attain sharp and useful estimates. We present a number of Chernoff-Hoeffding bounds for sums of random variables that may have a variety of dependent relationships and that may be heterogeneously distributed."
]
} |
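To make the bound under discussion concrete: for independent X_i ∈ [0,1], Hoeffding's inequality gives P(S_n − E[S_n] ≥ t) ≤ exp(−2t²/n). A quick Monte Carlo sanity check (illustrative only; the parameter values are our own choices):

```python
import math
import random

def hoeffding_bound(n, t):
    # P(S_n - E[S_n] >= t) <= exp(-2 t^2 / n) for independent X_i in [0, 1].
    return math.exp(-2.0 * t * t / n)

random.seed(0)
n, t, trials = 100, 10.0, 10000
mean = n * 0.5  # E[S_n] for uniform X_i on [0, 1)
exceed = sum(
    sum(random.random() for _ in range(n)) - mean >= t
    for _ in range(trials)
)
empirical = exceed / trials
# The empirical tail frequency should sit below the Hoeffding bound.
assert empirical <= hoeffding_bound(n, t)
```

Here the bound is exp(−2) ≈ 0.135 while the empirical frequency is orders of magnitude smaller, which is the looseness (in the central-limit regime) that the "missing factor" results of Talagrand and Bentkus quantify.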
1503.02284 | 1487492660 | We consider extensions of Hoeffding's " exponential method " approach for obtaining upper estimates on the probability that a sum of independent and bounded random variables is significantly larger than its mean. We show that the exponential function in Hoeffding's approach can be replaced with any function which is non-negative, increasing and convex. As a result we generalize and improve upon Hoeffding's inequality. Our approach allows us to obtain " missing factors " in Hoeffding's inequality. The latter result is a rather weaker version of a theorem that is due to Michel Talagrand. Moreover, we characterize the class of functions with respect to which our method yields optimal concentration bounds. Finally, using ideas from the theory of Bernstein polynomials, we show that similar ideas apply under information on higher moments of the random variables. | See @cite_26 for a proof of this theorem and for a precise definition of the function @math . In other words, Talagrand's result improves upon Hoeffding's by inserting a "missing" factor of order @math in the Hoeffding bound. Notice that Talagrand's result holds true for @math , for some absolute constant @math whose value does not seem to be known. Talagrand (see @cite_26 , page 692) mentions that one can obtain a rather small numerical value for @math , but numerical computations are left to others with the talent for it. One of the purposes of this paper is to improve upon Hoeffding's inequality by obtaining "missing" factors with exact numerical values for the constants. Part of Bentkus' paper performs comparisons between @math and tails of binomial and Poisson random variables. A crucial idea in the results of @cite_25 is to compare @math with means of particular functions of certain random variables. In particular, in the proof of Theorem @math in @cite_25 one can find the following result. | {
"cite_N": [
"@cite_26",
"@cite_25"
],
"mid": [
"2479913478",
"2037982184"
],
"abstract": [
"A celebrated article by Hoeffding establishes bounds for the deviation of a sum of independent random variables from its mean. Combining Hoeffding's method with the Esscher transform, we show how, under minimal additional hypotheses, these bounds can be improved in an essentially optimal way.",
"In a celebrated work by Hoeffding [J. Amer. Statist. Assoc. 58 (1963) 13-30], several inequalities for tail probabilities of sums M_n = X_1 + ... + X_n of bounded independent random variables X_j were proved. These inequalities had a considerable impact on the development of probability and statistics, and remained unimproved until 1995 when Talagrand [Inst. Hautes Etudes Sci. Publ. Math. 81 (1995a) 73-205] inserted certain missing factors in the bounds of two theorems. By similar factors, a third theorem was refined by Pinelis [Progress in Probability 43 (1998) 257-314] and refined (and extended) by me. In this article, I introduce a new type of inequality. Namely, I show that P{M_n ≥ x} ≤ c P{S_n ≥ x}, where c is an absolute constant and S_n = e_1 + ... + e_n is a sum of independent identically distributed Bernoulli random variables (a random variable is called Bernoulli if it assumes at most two values). The inequality holds for those x ∈ R where the survival function x → P{S_n ≥ x} has a jump down. For the remaining x the inequality still holds provided that the function between the adjacent jump points is interpolated linearly or log-linearly. If it is necessary to estimate P{S_n ≥ x}, special bounds can be used for binomial probabilities. The results extend to martingales with bounded differences. It is apparent that Theorem 1.1 of this article is the most important. The inequalities have applications to measure concentration, leading to results of the type where, up to an absolute constant, the measure concentration is dominated by the concentration in a simplest appropriate model; such results will be considered elsewhere."
]
} |
1503.02284 | 1487492660 | We consider extensions of Hoeffding's " exponential method " approach for obtaining upper estimates on the probability that a sum of independent and bounded random variables is significantly larger than its mean. We show that the exponential function in Hoeffding's approach can be replaced with any function which is non-negative, increasing and convex. As a result we generalize and improve upon Hoeffding's inequality. Our approach allows us to obtain " missing factors " in Hoeffding's inequality. The latter result is a rather weaker version of a theorem that is due to Michel Talagrand. Moreover, we characterize the class of functions with respect to which our method yields optimal concentration bounds. Finally, using ideas from the theory of Bernstein polynomials, we show that similar ideas apply under information on higher moments of the random variables. | The quantity on the right hand side of the first inequality is estimated in @cite_25 , Lemma @math . We will see in the forthcoming sections that the first statement of Bentkus' result is optimal in a slightly broader sense, i.e., it is the best bound that can be obtained from the inequality P[ sum_{i=1}^{n} X_i >= t ] <= (1/f(t)) E[f(B)], where @math is a non-negative, convex and increasing function. Additionally, we will improve upon the constant @math of the second statement. | {
"cite_N": [
"@cite_25"
],
"mid": [
"2037982184"
],
"abstract": [
"In a celebrated work by Hoeffding [J. Amer. Statist. Assoc. 58 (1963) 13-30], several inequalities for tail probabilities of sums M_n = X_1 + ... + X_n of bounded independent random variables X_j were proved. These inequalities had a considerable impact on the development of probability and statistics, and remained unimproved until 1995 when Talagrand [Inst. Hautes Etudes Sci. Publ. Math. 81 (1995a) 73-205] inserted certain missing factors in the bounds of two theorems. By similar factors, a third theorem was refined by Pinelis [Progress in Probability 43 (1998) 257-314] and refined (and extended) by me. In this article, I introduce a new type of inequality. Namely, I show that P{M_n ≥ x} ≤ c P{S_n ≥ x}, where c is an absolute constant and S_n = e_1 + ... + e_n is a sum of independent identically distributed Bernoulli random variables (a random variable is called Bernoulli if it assumes at most two values). The inequality holds for those x ∈ R where the survival function x → P{S_n ≥ x} has a jump down. For the remaining x the inequality still holds provided that the function between the adjacent jump points is interpolated linearly or log-linearly. If it is necessary to estimate P{S_n ≥ x}, special bounds can be used for binomial probabilities. The results extend to martingales with bounded differences. It is apparent that Theorem 1.1 of this article is the most important. The inequalities have applications to measure concentration, leading to results of the type where, up to an absolute constant, the measure concentration is dominated by the concentration in a simplest appropriate model; such results will be considered elsewhere."
]
} |
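The inequality quoted in the related-work text above, P[ sum X_i >= t ] <= (1/f(t)) E[f(B)] for a non-negative, convex, increasing f, follows from Markov's inequality applied to f of the sum. The sketch below is our own illustration, not code from @cite_25; the particular choice f(x) = max(x - h, 0)^2 and all names are our assumptions. Taking the sum itself to be B ~ Bin(n, p), it optimizes over the shift h and compares the result with Hoeffding's exponential bound.

```python
import math

def binom_pmf(n, p, k):
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

def f_based_bound(n, p, t, hs):
    """Bound P{B >= t} <= E[f(B)] / f(t) with the non-negative, convex, increasing
    choice f(x) = max(x - h, 0)**2, optimized over shifts h < t; B ~ Bin(n, p)."""
    best = 1.0
    for h in hs:
        if h >= t:
            continue  # need f(t) = (t - h)^2 > 0
        ef = sum(max(k - h, 0.0) ** 2 * binom_pmf(n, p, k) for k in range(n + 1))
        best = min(best, ef / (t - h) ** 2)
    return best

def hoeffding_bound(n, p, t):
    """Hoeffding: P{B - np >= a} <= exp(-2 a^2 / n) for summands in [0, 1]."""
    a = t - n * p
    return math.exp(-2 * a * a / n)

n, p, t = 100, 0.5, 60
hs = [i / 2 for i in range(0, 120)]
exact = sum(binom_pmf(n, p, k) for k in range(t, n + 1))
b_quad = f_based_bound(n, p, t, hs)
b_hoef = hoeffding_bound(n, p, t)
assert exact <= b_quad <= 1.0   # Markov applied to f(B) is always valid
assert b_quad < b_hoef          # here the quadratic f beats the exponential bound
```

This makes concrete the point of the surrounding text: the exponential function is just one admissible f, and other convex increasing choices can yield strictly better constants.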
1503.02318 | 1934250397 | Virality of online content on social networking websites is an important but esoteric phenomenon often studied in fields like marketing, psychology and data mining. In this paper we study viral images from a computer vision perspective. We introduce three new image datasets from Reddit and define a virality score using Reddit metadata. We train classifiers with state-of-the-art image features to predict virality of individual images, relative virality in pairs of images, and the dominant topic of a viral image. We also compare machine performance to human performance on these tasks. We find that computers perform poorly with low level features, and high level information is critical for predicting virality. We encode semantic information through relative attributes. We identify the 5 key visual attributes that correlate with virality. We create an attribute-based characterization of images that can predict relative virality with 68.10% accuracy (SVM+Deep Relative Attributes) - better than humans at 60.12%. Finally, we study how human prediction of image virality varies with different “contexts” in which the images are viewed, such as the influence of neighbouring images, images recently viewed, as well as the image title or caption. This work is a first step in understanding the complex but important phenomenon of image virality. Our datasets and annotations will be made publicly available. | Most existing works @cite_11 @cite_24 @cite_29 study how people share content on social networking sites after it has been posted. They use the network dynamics soon after the content has been posted to detect an oncoming snowballing effect and predict whether the content will go viral or not. We argue that predicting virality after the content has already been posted is too late in some applications. It is not feasible for graphic designers to "try out" various designs to see if they become viral or not.
In this paper, we are interested in understanding the relations between the content itself (even before it is posted online) and its potential to be viral. In fact, if the machine understands what makes an image viral, one could use "machine teaching" @cite_36 to train humans (e.g., novice graphic designers) what viral images look like. | {
"cite_N": [
"@cite_24",
"@cite_29",
"@cite_36",
"@cite_11"
],
"mid": [
"1972309850",
"2026212862",
"1941636933",
"1994473607"
],
"abstract": [
"What determines the timing of human actions? A big question, but the science of human dynamics is here to tackle it. And its predictions are of practical value: for example, when ISPs decide what bandwidth an institution needs, they use a model of the likely timing and activity level of the individuals. Current models assume that an individual has a well defined probability of engaging in a specific action at a given moment, but evidence that the timing of human actions does not follow this pattern (of Poisson statistics) is emerging. Instead the delay between two consecutive events is best described by a heavy-tailed (power law) distribution. Albert-Laszlo Barabasi proposes an explanation for the prevalence of this behaviour. The ‘bursty’ nature of human dynamics, he finds, is a fundamental consequence of decision making.",
"In a “tipping” model, each node in a social network, representing an individual, adopts a property or behavior if a certain number of his incoming neighbors currently exhibit the same. In viral marketing, a key problem is to select an initial “seed” set from the network such that the entire network adopts any behavior given to the seed. Here we introduce a method for quickly finding seed sets that scales to very large networks. Our approach finds a set of nodes that guarantees spreading to the entire network under the tipping model. After experimentally evaluating 31 real-world networks, we found that our approach often finds seed sets that are several orders of magnitude smaller than the population size and outperform nodal centrality measures in most cases. In addition, our approach scales well—on a Friendster social network consisting of 5.6 million nodes and 28 million edges we found a seed set in under 3.6 h. Our experiments also indicate that our algorithm provides small seed sets even if high-degree nodes are removed. Last, we find that highly clustered local neighborhoods, together with dense network-wide community structures, suppress a trend’s ability to spread under the tipping model.",
"Compared to machines, humans are extremely good at classifying images into categories, especially when they possess prior knowledge of the categories at hand. If this prior information is not available, supervision in the form of teaching images is required. To learn categories more quickly, people should see important and representative images first, followed by less important images later - or not at all. However, image-importance is individual-specific, i.e. a teaching image is important to a student if it changes their overall ability to discriminate between classes. Further, students keep learning, so while image-importance depends on their current knowledge, it also varies with time. In this work we propose an Interactive Machine Teaching algorithm that enables a computer to teach challenging visual concepts to a human. Our adaptive algorithm chooses, online, which labeled images from a teaching set should be shown to the student as they learn. We show that a teaching strategy that probabilistically models the student's ability and progress, based on their correct and incorrect answers, produces better ‘experts’. We present results using real human participants across several varied and challenging real-world datasets.",
"We present an analysis of a person-to-person recommendation network, consisting of 4 million people who made 16 million recommendations on half a million products. We observe the propagation of recommendations and the cascade sizes, which we explain by a simple stochastic model. We analyze how user behavior varies within user communities defined by a recommendation network. Product purchases follow a ‘long tail’ where a significant share of purchases belongs to rarely sold items. We establish how the recommendation network grows over time and how effective it is from the viewpoint of the sender and receiver of the recommendations. While on average recommendations are not very effective at inducing purchases and do not spread very far, we present a model that successfully identifies communities, product, and pricing categories for which viral marketing seems to be very effective."
]
} |
1503.02318 | 1934250397 | Virality of online content on social networking websites is an important but esoteric phenomenon often studied in fields like marketing, psychology and data mining. In this paper we study viral images from a computer vision perspective. We introduce three new image datasets from Reddit and define a virality score using Reddit metadata. We train classifiers with state-of-the-art image features to predict virality of individual images, relative virality in pairs of images, and the dominant topic of a viral image. We also compare machine performance to human performance on these tasks. We find that computers perform poorly with low level features, and high level information is critical for predicting virality. We encode semantic information through relative attributes. We identify the 5 key visual attributes that correlate with virality. We create an attribute-based characterization of images that can predict relative virality with 68.10% accuracy (SVM+Deep Relative Attributes) - better than humans at 60.12%. Finally, we study how human prediction of image virality varies with different “contexts” in which the images are viewed, such as the influence of neighbouring images, images recently viewed, as well as the image title or caption. This work is a first step in understanding the complex but important phenomenon of image virality. Our datasets and annotations will be made publicly available. | There exist several qualitative theories of the kinds of content that are likely to go viral @cite_3 @cite_27 . Only a few works have quantitatively analyzed content, for instance Tweets @cite_33 and New York Times articles @cite_28 to predict their virality. However, in spite of them being a large part of our online experience, the connections between content in visual media and its virality have not been analyzed. This forms the focus of our work. | {
"cite_N": [
"@cite_28",
"@cite_27",
"@cite_33",
"@cite_3"
],
"mid": [
"2964586292",
"2761807801",
"2026318959",
"1977614311"
],
"abstract": [
"Why are certain pieces of online content (e.g., advertisements, videos, news articles) more viral than others? This article takes a psychological approach to understanding diffusion. Using a unique data set of all the New York Times articles published over a three-month period, the authors examine how emotion shapes virality. The results indicate that positive content is more viral than negative content, but the relationship between emotion and social transmission is more complex than valence alone. Virality is partially driven by physiological arousal. Content that evokes high-arousal positive (awe) or negative (anger or anxiety) emotions is more viral. Content that evokes low-arousal, or deactivating, emotions (e.g., sadness) is less viral. These results hold even when the authors control for how surprising, interesting, or practically useful content is (all of which are positively linked to virality), as well as external drivers of attention (e.g., how prominently content was featured). Experimental re...",
"",
"Retweeting is the key mechanism for information diffusion in Twitter. It emerged as a simple yet powerful way of disseminating information in the Twitter social network. Even though a lot of information is shared in Twitter, little is known yet about how and why certain information spreads more widely than others. In this paper, we examine a number of features that might affect retweetability of tweets. We gathered content and contextual features from 74M tweets and used this data set to identify factors that are significantly associated with retweet rate. We also built a predictive retweet model. We found that, amongst content features, URLs and hashtags have strong relationships with retweetability. Amongst contextual features, the number of followers and followees as well as the age of the account seem to affect retweetability, while, interestingly, the number of past tweets does not predict retweetability of a user's tweet. We believe that this research would inform the design of sensemaking and analytics tools for social media streams.",
"Social transmission is everywhere. Friends talk about restaurants, policy wonks rant about legislation, analysts trade stock tips, neighbors gossip, and teens chitchat. Further, such interpersonal communication affects everything from decision making and well-being (Asch, 1956; Mehl, Vazire, Holleran, & Clark, 2010) to the spread of ideas, the persistence of stereotypes, and the diffusion of culture (Heath, 1996; Heath, Bell, & Sternberg, 2001; Kashima, 2008; Schaller, Conway, & Tanchuk, 2002; Schaller & Crandall, 2004). But although it is clear that social transmission is both frequent and important, what drives people to share, and why are some stories and information shared more than others?"
]
} |
1503.02318 | 1934250397 | Virality of online content on social networking websites is an important but esoteric phenomenon often studied in fields like marketing, psychology and data mining. In this paper we study viral images from a computer vision perspective. We introduce three new image datasets from Reddit and define a virality score using Reddit metadata. We train classifiers with state-of-the-art image features to predict virality of individual images, relative virality in pairs of images, and the dominant topic of a viral image. We also compare machine performance to human performance on these tasks. We find that computers perform poorly with low level features, and high level information is critical for predicting virality. We encode semantic information through relative attributes. We identify the 5 key visual attributes that correlate with virality. We create an attribute-based characterization of images that can predict relative virality with 68.10% accuracy (SVM+Deep Relative Attributes) - better than humans at 60.12%. Finally, we study how human prediction of image virality varies with different “contexts” in which the images are viewed, such as the influence of neighbouring images, images recently viewed, as well as the image title or caption. This work is a first step in understanding the complex but important phenomenon of image virality. Our datasets and annotations will be made publicly available. | Virality of text data such as Tweets has been studied in @cite_10 @cite_33 . The diffusion properties were found to be dependent on their content and features like embedded URLs and hashtags. Generally, diffusion of content over networks has been studied more than the causes @cite_29 . The work of Leskovec et al. @cite_11 models propagation of recommendations over a network of individuals through a stochastic model, while Beutel et al. @cite_19 approach viral diffusion as an epidemiological problem. | {
"cite_N": [
"@cite_33",
"@cite_29",
"@cite_19",
"@cite_10",
"@cite_11"
],
"mid": [
"2026318959",
"2026212862",
"2171031021",
"2113135969",
"1994473607"
],
"abstract": [
"Retweeting is the key mechanism for information diffusion in Twitter. It emerged as a simple yet powerful way of disseminating information in the Twitter social network. Even though a lot of information is shared in Twitter, little is known yet about how and why certain information spreads more widely than others. In this paper, we examine a number of features that might affect retweetability of tweets. We gathered content and contextual features from 74M tweets and used this data set to identify factors that are significantly associated with retweet rate. We also built a predictive retweet model. We found that, amongst content features, URLs and hashtags have strong relationships with retweetability. Amongst contextual features, the number of followers and followees as well as the age of the account seem to affect retweetability, while, interestingly, the number of past tweets does not predict retweetability of a user's tweet. We believe that this research would inform the design of sensemaking and analytics tools for social media streams.",
"In a “tipping” model, each node in a social network, representing an individual, adopts a property or behavior if a certain number of his incoming neighbors currently exhibit the same. In viral marketing, a key problem is to select an initial “seed” set from the network such that the entire network adopts any behavior given to the seed. Here we introduce a method for quickly finding seed sets that scales to very large networks. Our approach finds a set of nodes that guarantees spreading to the entire network under the tipping model. After experimentally evaluating 31 real-world networks, we found that our approach often finds seed sets that are several orders of magnitude smaller than the population size and outperform nodal centrality measures in most cases. In addition, our approach scales well—on a Friendster social network consisting of 5.6 million nodes and 28 million edges we found a seed set in under 3.6 h. Our experiments also indicate that our algorithm provides small seed sets even if high-degree nodes are removed. Last, we find that highly clustered local neighborhoods, together with dense network-wide community structures, suppress a trend’s ability to spread under the tipping model.",
"Suppose we have two competing ideas products viruses, that propagate over a social or other network. Suppose that they are strong virulent enough, so that each, if left alone, could lead to an epidemic. What will happen when both operate on the network? Earlier models assume that there is perfect competition: if a user buys product 'A' (or gets infected with virus 'X'), she will never buy product 'B' (or virus 'Y'). This is not always true: for example, a user could install and use both Firefox and Google Chrome as browsers. Similarly, one type of flu may give partial immunity against some other similar disease. In the case of full competition, it is known that 'winner takes all,' that is the weaker virus product will become extinct. In the case of no competition, both viruses survive, ignoring each other. What happens in-between these two extremes? We show that there is a phase transition: if the competition is harsher than a critical level, then 'winner takes all;' otherwise, the weaker virus survives. These are the contributions of this paper (a) the problem definition, which is novel even in epidemiology literature (b) the phase-transition result and (c) experiments on real data, illustrating the suitability of our results.",
"This work contributes to the study of retweet behavior on Twitter surrounding real-world events. We analyze over a million tweets pertaining to three events, present general tweet properties in such topical datasets and qualitatively analyze the properties of the retweet behavior surrounding the most tweeted viral content pieces. Findings include a clear relationship between sparse dense retweet patterns and the content and type of a tweet itself; suggesting the need to study content properties in link-based diffusion models.",
"We present an analysis of a person-to-person recommendation network, consisting of 4 million people who made 16 million recommendations on half a million products. We observe the propagation of recommendations and the cascade sizes, which we explain by a simple stochastic model. We analyze how user behavior varies within user communities defined by a recommendation network. Product purchases follow a ‘long tail’ where a significant share of purchases belongs to rarely sold items. We establish how the recommendation network grows over time and how effective it is from the viewpoint of the sender and receiver of the recommendations. While on average recommendations are not very effective at inducing purchases and do not spread very far, we present a model that successfully identifies communities, product, and pricing categories for which viral marketing seems to be very effective."
]
} |
1503.02318 | 1934250397 | Virality of online content on social networking websites is an important but esoteric phenomenon often studied in fields like marketing, psychology and data mining. In this paper we study viral images from a computer vision perspective. We introduce three new image datasets from Reddit and define a virality score using Reddit metadata. We train classifiers with state-of-the-art image features to predict virality of individual images, relative virality in pairs of images, and the dominant topic of a viral image. We also compare machine performance to human performance on these tasks. We find that computers perform poorly with low level features, and high level information is critical for predicting virality. We encode semantic information through relative attributes. We identify the 5 key visual attributes that correlate with virality. We create an attribute-based characterization of images that can predict relative virality with 68.10% accuracy (SVM+Deep Relative Attributes) - better than humans at 60.12%. Finally, we study how human prediction of image virality varies with different “contexts” in which the images are viewed, such as the influence of neighbouring images, images recently viewed, as well as the image title or caption. This work is a first step in understanding the complex but important phenomenon of image virality. Our datasets and annotations will be made publicly available. | Qualitative theories about what makes people share content have been proposed in marketing research. Berger et al. @cite_3 @cite_28 @cite_27 for instance postulate a set of STEPPS that suggests that social currency, triggers, emotion, public (publicity), practical value, and stories make people share. | {
"cite_N": [
"@cite_28",
"@cite_27",
"@cite_3"
],
"mid": [
"2964586292",
"2761807801",
"1977614311"
],
"abstract": [
"Why are certain pieces of online content (e.g., advertisements, videos, news articles) more viral than others? This article takes a psychological approach to understanding diffusion. Using a unique data set of all the New York Times articles published over a three-month period, the authors examine how emotion shapes virality. The results indicate that positive content is more viral than negative content, but the relationship between emotion and social transmission is more complex than valence alone. Virality is partially driven by physiological arousal. Content that evokes high-arousal positive (awe) or negative (anger or anxiety) emotions is more viral. Content that evokes low-arousal, or deactivating, emotions (e.g., sadness) is less viral. These results hold even when the authors control for how surprising, interesting, or practically useful content is (all of which are positively linked to virality), as well as external drivers of attention (e.g., how prominently content was featured). Experimental re...",
"",
"Social transmission is everywhere. Friends talk about restaurants, policy wonks rant about legislation, analysts trade stock tips, neighbors gossip, and teens chitchat. Further, such interpersonal communication affects everything from decision making and well-being (Asch, 1956; Mehl, Vazire, Holleran, & Clark, 2010) to the spread of ideas, the persistence of stereotypes, and the diffusion of culture (Heath, 1996; Heath, Bell, & Sternberg, 2001; Kashima, 2008; Schaller, Conway, & Tanchuk, 2002; Schaller & Crandall, 2004). But although it is clear that social transmission is both frequent and important, what drives people to share, and why are some stories and information shared more than others?"
]
} |
1503.02318 | 1934250397 | Virality of online content on social networking websites is an important but esoteric phenomenon often studied in fields like marketing, psychology and data mining. In this paper we study viral images from a computer vision perspective. We introduce three new image datasets from Reddit and define a virality score using Reddit metadata. We train classifiers with state-of-the-art image features to predict virality of individual images, relative virality in pairs of images, and the dominant topic of a viral image. We also compare machine performance to human performance on these tasks. We find that computers perform poorly with low level features, and high level information is critical for predicting virality. We encode semantic information through relative attributes. We identify the 5 key visual attributes that correlate with virality. We create an attribute-based characterization of images that can predict relative virality with 68.10% accuracy (SVM+Deep Relative Attributes) - better than humans at 60.12%. Finally, we study how human prediction of image virality varies with different “contexts” in which the images are viewed, such as the influence of neighbouring images, images recently viewed, as well as the image title or caption. This work is a first step in understanding the complex but important phenomenon of image virality. Our datasets and annotations will be made publicly available. | Analyzing viral images has received very little attention. Guerini et al. @cite_4 have provided correlations between low-level visual data and popularity on a non-anonymous social network (Google+), as well as the links between emotion and virality @cite_6 . Khosla et al. @cite_12 recently studied image popularity measured as the number of views a photograph has on Flickr. However, both previous works @cite_4 @cite_12 have only extracted image statistics for natural photographs (Google+, Flickr). 
Images and the social interactions on Reddit are qualitatively different (e.g., many Reddit images are edited). In this sense, the work most similar to ours is the concurrently introduced viral generator of Wang et al., which combines NLP and Computer Vision (low level features) @cite_14 . However, our work delves deep into the role of intrinsic visual content (such as high-level image attributes), visual context surrounding an image, temporal context and textual context in image virality. Lakkaraju et al. @cite_9 analyzed the effects of time of day, day of the week, number of resubmissions, captions, category, etc. on the virality of an image on Reddit. However, they do not analyze the content of the image itself. | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_9",
"@cite_6",
"@cite_12"
],
"mid": [
"2294317440",
"1976811884",
"105848778",
"2155447859",
"2009627211"
],
"abstract": [
"The advent of social media has brought Internet memes, a unique social phenomenon, to the front stage of the Web. Embodied in the form of images with text descriptions, little do we know about the “language of memes”. In this paper, we statistically study the correlations among popular memes and their wordings, and generate meme descriptions from raw images. To do this, we take a multimodal approach—we propose a robust nonparanormal model to learn the stochastic dependencies among the image, the candidate descriptions, and the popular votes. In experiments, we show that combining text and vision helps identifying popular meme descriptions; that our nonparanormal model is able to learn dense and continuous vision features jointly with sparse and discrete text features in a principled manner, outperforming various competitive baselines; that our system can generate meme descriptions using a simple pipeline.",
"Reactions to posts in an online social network show different dynamics depending on several textual features of the corresponding content. Do similar dynamics exist when images are posted? Exploiting a novel dataset of posts, gathered from the most popular Google+ users, we try to give an answer to such a question. We describe several virality phenomena that emerge when taking into account visual characteristics of images (such as orientation, mean saturation, etc.). We also provide hypotheses and potential explanations for the dynamics behind them, and include cases for which common-sense expectations do not hold true in our experiments.",
"Creating, placing, and presenting social media content is a difficult problem. In addition to the quality of the content itself, several factors such as the way the content is presented (the title), the community it is posted to, whether it has been seen before, and the time it is posted determine its success. There are also interesting interactions between these factors. For example, the language of the title should be targeted to the community where the content is submitted, yet it should also highlight the distinctive nature of the content. In this paper, we examine how these factors interact to determine the popularity of social media content. We do so by studying resubmissions, i.e., content that has been submitted multiple times, with multiple titles, to multiple different communities. Such data allows us to 'tease apart' the extent to which each factor influences the success of that content. The models we develop help us understand how to better target social media content: by using the right title, for the right community, at the right time.",
"This article provides a comprehensive investigation on the relations between virality of news articles and the emotions they are found to evoke. Virality, in our view, is a phenomenon with many facets, i.e. under this generic term several different effects of persuasive communication are comprised. By exploiting a high-coverage and bilingual corpus of documents containing metrics of their spread on social networks as well as a massive affective annotation provided by readers, we present a thorough analysis of the interplay between evoked emotions and viral facets. We highlight and discuss our findings in light of a cross-lingual approach: while we discover differences in evoked emotions and corresponding viral effects, we provide preliminary evidence of a generalized explanatory model rooted in the deep structure of emotions: the Valence-Arousal-Dominance (VAD) circumplex. We find that viral facets appear to be consistently affected by particular VAD configurations, and these configurations indicate a clear connection with distinct phenomena underlying persuasive communication.",
"When glancing at a magazine, or browsing the Internet, we are continuously being exposed to photographs. Despite of this overflow of visual information, humans are extremely good at remembering thousands of pictures along with some of their visual details. But not all images are equal in memory. Some stitch to our minds, and other are forgotten. In this paper we focus on the problem of predicting how memorable an image will be. We show that memorability is a stable property of an image that is shared across different viewers. We introduce a database for which we have measured the probability that each picture will be remembered after a single view. We analyze image features and labels that contribute to making an image memorable, and we train a predictor based on global image descriptors. We find that predicting image memorability is a task that can be addressed with current computer vision techniques. Whereas making memorable images is a challenging task in visualization and photography, this work is a first attempt to quantify this useful quality of images."
]
} |
1503.02318 | 1934250397 | Virality of online content on social networking websites is an important but esoteric phenomenon often studied in fields like marketing, psychology and data mining. In this paper we study viral images from a computer vision perspective. We introduce three new image datasets from Reddit and define a virality score using Reddit metadata. We train classifiers with state-of-the-art image features to predict virality of individual images, relative virality in pairs of images, and the dominant topic of a viral image. We also compare machine performance to human performance on these tasks. We find that computers perform poorly with low-level features, and high-level information is critical for predicting virality. We encode semantic information through relative attributes. We identify the 5 key visual attributes that correlate with virality. We create an attribute-based characterization of images that can predict relative virality with 68.10% accuracy (SVM+Deep Relative Attributes) - better than humans at 60.12%. Finally, we study how human prediction of image virality varies with different “contexts” in which the images are viewed, such as the influence of neighbouring images, images recently viewed, as well as the image title or caption. This work is a first step in understanding the complex but important phenomenon of image virality. Our datasets and annotations will be made publicly available. | Several works in computer vision have studied complex meta-phenomena (as opposed to understanding the "literal" content in the image, such as objects, scenes, 3D layout, etc.). Isola et al. @cite_32 found that some images are consistently more memorable than others across subjects and analyzed the image content that makes images memorable @cite_17 . Image aesthetics was studied in @cite_5 , image emotion in @cite_0 , and object recognition in art in @cite_20 .
The importance of objects @cite_1 , attributes @cite_2 , and scenes @cite_30 , as defined by the likelihood that people mention them first when describing an image, has also been studied. We study a distinct complex phenomenon: image virality. | {
"cite_N": [
"@cite_30",
"@cite_1",
"@cite_32",
"@cite_0",
"@cite_2",
"@cite_5",
"@cite_20",
"@cite_17"
],
"mid": [
"2067816745",
"2059952380",
"",
"2075456404",
"2096544045",
"2080754665",
"300217873",
"2158155099"
],
"abstract": [
"What do people care about in an image? To drive computational visual recognition toward more human-centric outputs, we need a better understanding of how people perceive and judge the importance of content in images. In this paper, we explore how a number of factors relate to human perception of importance. Proposed factors fall into 3 broad types: 1) factors related to composition, e.g. size, location, 2) factors related to semantics, e.g. category of object or scene, and 3) contextual factors related to the likelihood of attribute-object, or object-scene pairs. We explore these factors using what people describe as a proxy for importance. Finally, we build models to predict what will be described about an image given either known image content, or image content estimated automatically by recognition systems.",
"How important is a particular object in a photograph of a complex scene? We propose a definition of importance and present two methods for measuring object importance from human observers. Using this ground truth, we fit a function for predicting the importance of each object directly from a segmented image; our function combines a large number of object-related and image-related features. We validate our importance predictions on 2,841 objects and find that the most important objects may be identified automatically. We find that object position and size are particularly informative, while a popular measure of saliency is not.",
"",
"We address the challenge of sentiment analysis from visual content. In contrast to existing methods which infer sentiment or emotion directly from visual low-level features, we propose a novel approach based on understanding of the visual concepts that are strongly related to sentiments. Our key contribution is two-fold: first, we present a method built upon psychological theories and web mining to automatically construct a large-scale Visual Sentiment Ontology (VSO) consisting of more than 3,000 Adjective Noun Pairs (ANP). Second, we propose SentiBank, a novel visual concept detector library that can be used to detect the presence of 1,200 ANPs in an image. The VSO and SentiBank are distinct from existing work and will open a gate towards various applications enabled by automatic sentiment analysis. Experiments on detecting sentiment of image tweets demonstrate significant improvement in detection accuracy when comparing the proposed SentiBank based predictors with the text-based approaches. The effort also leads to a large publicly available resource consisting of a visual sentiment ontology, a large detector library, and the training testing benchmark for visual sentiment analysis.",
"When we look at an image, some properties or attributes of the image stand out more than others. When describing an image, people are likely to describe these dominant attributes first. Attribute dominance is a result of a complex interplay between the various properties present or absent in the image. Which attributes in an image are more dominant than others reveals rich information about the content of the image. In this paper we tap into this information by modeling attribute dominance. We show that this helps improve the performance of vision systems on a variety of human-centric applications such as zero-shot learning, image search and generating textual descriptions of images.",
"With the rise in popularity of digital cameras, the amount of visual data available on the web is growing exponentially. Some of these pictures are extremely beautiful and aesthetically pleasing, but the vast majority are uninteresting or of low quality. This paper demonstrates a simple, yet powerful method to automatically select high aesthetic quality images from large image collections. Our aesthetic quality estimation method explicitly predicts some of the possible image cues that a human might use to evaluate an image and then uses them in a discriminative approach. These cues or high level describable image attributes fall into three broad types: 1) compositional attributes related to image layout or configuration, 2) content attributes related to the objects or scene types depicted, and 3) sky-illumination attributes related to the natural lighting conditions. We demonstrate that an aesthetics classifier trained on these describable attributes can provide a significant improvement over baseline methods for predicting human quality judgments. We also demonstrate our method for predicting the “interestingness” of Flickr photos, and introduce a novel problem of estimating query specific “interestingness”.",
"The objective of this work is to find objects in paintings by learning object-category classifiers from available sources of natural images. Finding such objects is of much benefit to the art history community as well as being a challenging problem in large-scale retrieval and domain adaptation.",
"Artists, advertisers, and photographers are routinely presented with the task of creating an image that a viewer will remember. While it may seem like image memorability is purely subjective, recent work shows that it is not an inexplicable phenomenon: variation in memorability of images is consistent across subjects, suggesting that some images are intrinsically more memorable than others, independent of a subjects' contexts and biases. In this paper, we used the publicly available memorability dataset of [13], and augmented the object and scene annotations with interpretable spatial, content, and aesthetic image properties. We used a feature-selection scheme with desirable explaining-away properties to determine a compact set of attributes that characterizes the memorability of any individual image. We find that images of enclosed spaces containing people with visible faces are memorable, while images of vistas and peaceful scenes are not. Contrary to popular belief, unusual or aesthetically pleasing scenes do not tend to be highly memorable. This work represents one of the first attempts at understanding intrinsic image memorability, and opens a new domain of investigation at the interface between human cognition and computer vision."
]
} |
1503.02398 | 2185523795 | In the co-sparse analysis model a set of filters is applied to a signal out of the signal class of interest yielding sparse signal responses. As such, it may serve as a prior in inverse problems, or for structural analysis of signals that are known to belong to the signal class. The more the model is adapted to the class, the more reliable it is for these purposes. The task of learning such operators for a given class is therefore a crucial problem. In many applications, it is also required that the filter responses are obtained in a timely manner, which can be achieved by filters with a separable structure. Not only can operators of this sort be efficiently used for computing the filter responses, but they also have the advantage that fewer training samples are required to obtain a reliable estimate of the operator. The first contribution of this work is to give theoretical evidence for this claim by providing an upper bound for the sample complexity of the learning process. The second is a stochastic gradient descent (SGD) method designed to efficiently learn analysis operators with separable structures, which incorporates an efficient step size selection. Numerical experiments are provided that link the sample complexity to the convergence speed of the SGD algorithm. | In @cite_36 the authors present an adaptation of the well-known K-SVD dictionary learning algorithm to the co-sparse analysis operator setting. The training phase consists of two stages. In the first stage, the rows of the operator that determine the subspace each signal resides in are identified. In the subsequent stage, each row of the operator is updated to be the vector that is "most orthogonal" to the signals associated with it. These two stages are repeated until a convergence criterion is met. | {
"cite_N": [
"@cite_36"
],
"mid": [
"1994281301"
],
"abstract": [
"The synthesis-based sparse representation model for signals has drawn considerable interest in the past decade. Such a model assumes that the signal of interest can be decomposed as a linear combination of a few atoms from a given dictionary. In this paper we concentrate on an alternative, analysis-based model, where an analysis operator-hereafter referred to as the analysis dictionary-multiplies the signal, leading to a sparse outcome. Our goal is to learn the analysis dictionary from a set of examples. The approach taken is parallel and similar to the one adopted by the K-SVD algorithm that serves the corresponding problem in the synthesis model. We present the development of the algorithm steps: This includes tailored pursuit algorithms-the Backward Greedy and the Optimized Backward Greedy algorithms, and a penalty function that defines the objective for the dictionary update stage. We demonstrate the effectiveness of the proposed dictionary learning in several experiments, treating synthetic data and real images, and showing a successful and meaningful recovery of the analysis dictionary."
]
} |
1503.02398 | 2185523795 | In the co-sparse analysis model a set of filters is applied to a signal out of the signal class of interest yielding sparse signal responses. As such, it may serve as a prior in inverse problems, or for structural analysis of signals that are known to belong to the signal class. The more the model is adapted to the class, the more reliable it is for these purposes. The task of learning such operators for a given class is therefore a crucial problem. In many applications, it is also required that the filter responses are obtained in a timely manner, which can be achieved by filters with a separable structure. Not only can operators of this sort be efficiently used for computing the filter responses, but they also have the advantage that fewer training samples are required to obtain a reliable estimate of the operator. The first contribution of this work is to give theoretical evidence for this claim by providing an upper bound for the sample complexity of the learning process. The second is a stochastic gradient descent (SGD) method designed to efficiently learn analysis operators with separable structures, which incorporates an efficient step size selection. Numerical experiments are provided that link the sample complexity to the convergence speed of the SGD algorithm. | In @cite_0 it is postulated that the analysis operator is a uniformly normalized tight frame, i.e., the columns of the operator are orthogonal to each other while all rows have the same @math -norm. Given noise-contaminated training samples, an algorithm is proposed that outputs an analysis operator as well as noise-free approximations of the training data. This is achieved by an alternating two-stage optimization algorithm. In the first stage the operator is updated using a projected subgradient algorithm, while in the second stage the signal estimate is updated using the alternating direction method of multipliers (ADMM). | {
"cite_N": [
"@cite_0"
],
"mid": [
"2120047933"
],
"abstract": [
"We consider the problem of learning a low-dimensional signal model from a collection of training samples. The mainstream approach would be to learn an overcomplete dictionary to provide good approximations of the training samples using sparse synthesis coefficients. This famous sparse model has a less well known counterpart, in analysis form, called the cosparse analysis model. In this new model, signals are characterized by their parsimony in a transformed domain using an overcomplete (linear) analysis operator. We propose to learn an analysis operator from a training corpus using a constrained optimization framework based on L1 optimization. The reason for introducing a constraint in the optimization framework is to exclude trivial solutions. Although there is no final answer here for which constraint is the most relevant constraint, we investigate some conventional constraints in the model adaptation field and use the uniformly normalized tight frame (UNTF) for this purpose. We then derive a practical learning algorithm, based on projected subgradients and Douglas-Rachford splitting technique, and demonstrate its ability to robustly recover a ground truth analysis operator, when provided with a clean training set, of sufficient size. We also find an analysis operator for images, using some noisy cosparse signals, which is indeed a more realistic experiment. As the derived optimization problem is not a convex program, we often find a local minimum using such variational methods. For two different settings, we provide preliminary theoretical support for the well-posedness of the learning problem, which can be practically used to test the local identifiability conditions of learnt operators."
]
} |
1503.02398 | 2185523795 | In the co-sparse analysis model a set of filters is applied to a signal out of the signal class of interest yielding sparse signal responses. As such, it may serve as a prior in inverse problems, or for structural analysis of signals that are known to belong to the signal class. The more the model is adapted to the class, the more reliable it is for these purposes. The task of learning such operators for a given class is therefore a crucial problem. In many applications, it is also required that the filter responses are obtained in a timely manner, which can be achieved by filters with a separable structure. Not only can operators of this sort be efficiently used for computing the filter responses, but they also have the advantage that fewer training samples are required to obtain a reliable estimate of the operator. The first contribution of this work is to give theoretical evidence for this claim by providing an upper bound for the sample complexity of the learning process. The second is a stochastic gradient descent (SGD) method designed to efficiently learn analysis operators with separable structures, which incorporates an efficient step size selection. Numerical experiments are provided that link the sample complexity to the convergence speed of the SGD algorithm. | A concept very similar to that of analysis operator learning is called sparsifying transform learning. In @cite_16 a framework for learning overcomplete sparsifying transforms is presented. This algorithm consists of two steps: a sparse coding step, in which the sparse coefficients are updated by retaining only the largest entries, and a transform update step, in which a standard conjugate gradient method is used and the resulting operator is obtained by normalizing its rows. | {
"cite_N": [
"@cite_16"
],
"mid": [
"2171004446"
],
"abstract": [
"Adaptive sparse representations have been very popular in numerous applications in recent years. The learning of synthesis sparsifying dictionaries has particularly received much attention, and such adaptive dictionaries have been shown to be useful in applications such as image denoising, and magnetic resonance image reconstruction. In this work, we focus on the alternative sparsifying transform model, for which sparse coding is cheap and exact, and study the learning of tall or overcomplete sparsifying transforms from data. We propose various penalties that control the sparsifying ability, condition number, and incoherence of the learnt transforms. Our alternating algorithm for transform learning converges empirically, and significantly improves the quality of the learnt transform over the iterations. We present examples demonstrating the promising performance of adaptive overcomplete transforms over adaptive overcomplete synthesis dictionaries learnt using K-SVD, in the application of image denoising."
]
} |
1503.02398 | 2185523795 | In the co-sparse analysis model a set of filters is applied to a signal out of the signal class of interest yielding sparse signal responses. As such, it may serve as a prior in inverse problems, or for structural analysis of signals that are known to belong to the signal class. The more the model is adapted to the class, the more reliable it is for these purposes. The task of learning such operators for a given class is therefore a crucial problem. In many applications, it is also required that the filter responses are obtained in a timely manner, which can be achieved by filters with a separable structure. Not only can operators of this sort be efficiently used for computing the filter responses, but they also have the advantage that fewer training samples are required to obtain a reliable estimate of the operator. The first contribution of this work is to give theoretical evidence for this claim by providing an upper bound for the sample complexity of the learning process. The second is a stochastic gradient descent (SGD) method designed to efficiently learn analysis operators with separable structures, which incorporates an efficient step size selection. Numerical experiments are provided that link the sample complexity to the convergence speed of the SGD algorithm. | The authors of @cite_10 propose a method specialized in image processing. Instead of a patch-based approach, an image-based model is proposed with the goal of enforcing coherence across overlapping patches. In this framework, which is based on higher-order filter-based Markov Random Field models, all possible patches in the entire image are considered at once during the learning phase. A bi-level optimization scheme is proposed that has at its heart an unconstrained optimization problem w.r.t. the operator, which is solved using a quasi-Newton method. | {
"cite_N": [
"@cite_10"
],
"mid": [
"2963811803"
],
"abstract": [
"We consider the analysis operator and synthesis dictionary learning problems based on the the 1 regularized sparse representation model. We reveal the internal relations between the 1-based analysis model and synthesis model. We then introduce an approach to learn both analysis operator and synthesis dictionary simultaneously by using a unified framework of bi-level optimization. Our aim is to learn a meaningful operator (dictionary) such that the minimum energy solution of the analysis (synthesis)-prior based model is as close as possible to the groundtruth. We solve the bi-level optimization problem using the implicit differentiation technique. Moreover, we demonstrate the effectiveness of our leaning approach by applying the learned analysis operator (dictionary) to the image denoising task and comparing its performance with state-of-the-art methods. Under this unified framework, we can compare the performance of the two types of priors."
]
} |
1503.02398 | 2185523795 | In the co-sparse analysis model a set of filters is applied to a signal out of the signal class of interest yielding sparse signal responses. As such, it may serve as a prior in inverse problems, or for structural analysis of signals that are known to belong to the signal class. The more the model is adapted to the class, the more reliable it is for these purposes. The task of learning such operators for a given class is therefore a crucial problem. In many applications, it is also required that the filter responses are obtained in a timely manner, which can be achieved by filters with a separable structure. Not only can operators of this sort be efficiently used for computing the filter responses, but they also have the advantage that fewer training samples are required to obtain a reliable estimate of the operator. The first contribution of this work is to give theoretical evidence for this claim by providing an upper bound for the sample complexity of the learning process. The second is a stochastic gradient descent (SGD) method designed to efficiently learn analysis operators with separable structures, which incorporates an efficient step size selection. Numerical experiments are provided that link the sample complexity to the convergence speed of the SGD algorithm. | Dong et al. @cite_12 propose a method that alternates between a hard-thresholding operation on the co-sparse representation and an operator update stage in which all rows of the operator are simultaneously updated using a gradient method on the sphere. Their target function has the form @math , where @math is the sparse representation of the signal @math . | {
"cite_N": [
"@cite_12"
],
"mid": [
"2025948329"
],
"abstract": [
"We consider the dictionary learning problem for the analysis model based sparse representation. A novel algorithm is proposed by adapting the synthesis model based simultaneous codeword optimisation (SimCO) algorithm to the analysis model. This algorithm assumes that the analysis dictionary contains unit Ł 2 -norm atoms and trains the dictionary by the optimisation on manifolds. This framework allows one to update multiple dictionary atoms in each iteration, leading to a computationally efficient optimisation process. We demonstrate the competitive performance of the proposed algorithm using experiments on both synthetic and real data, as compared with three baseline algorithms, Analysis K-SVD, analysis operator learning (AOL) and learning overcomplete sparsifying transforms (LOST), respectively."
]
} |
1503.02398 | 2185523795 | In the co-sparse analysis model a set of filters is applied to a signal out of the signal class of interest yielding sparse signal responses. As such, it may serve as a prior in inverse problems, or for structural analysis of signals that are known to belong to the signal class. The more the model is adapted to the class, the more reliable it is for these purposes. The task of learning such operators for a given class is therefore a crucial problem. In many applications, it is also required that the filter responses are obtained in a timely manner, which can be achieved by filters with a separable structure. Not only can operators of this sort be efficiently used for computing the filter responses, but they also have the advantage that fewer training samples are required to obtain a reliable estimate of the operator. The first contribution of this work is to give theoretical evidence for this claim by providing an upper bound for the sample complexity of the learning process. The second is a stochastic gradient descent (SGD) method designed to efficiently learn analysis operators with separable structures, which incorporates an efficient step size selection. Numerical experiments are provided that link the sample complexity to the convergence speed of the SGD algorithm. | Finally, Hawe et al. @cite_5 propose a geometric conjugate gradient algorithm on the product of spheres, where analysis operator properties such as low coherence and full rank are incorporated as penalty functions in the learning process. | {
"cite_N": [
"@cite_5"
],
"mid": [
"2150912629"
],
"abstract": [
"Exploiting a priori known structural information lies at the core of many image reconstruction methods that can be stated as inverse problems. The synthesis model, which assumes that images can be decomposed into a linear combination of very few atoms of some dictionary, is now a well established tool for the design of image reconstruction algorithms. An interesting alternative is the analysis model, where the signal is multiplied by an analysis operator and the outcome is assumed to be sparse. This approach has only recently gained increasing interest. The quality of reconstruction methods based on an analysis model severely depends on the right choice of the suitable operator. In this paper, we present an algorithm for learning an analysis operator from training images. Our method is based on lp-norm minimization on the set of full rank matrices with normalized columns. We carefully introduce the employed conjugate gradient method on manifolds, and explain the underlying geometry of the constraints. Moreover, we compare our approach to state-of-the-art methods for image denoising, inpainting, and single image super-resolution. Our numerical results show competitive performance of our general approach in all presented applications compared to the specialized state-of-the-art techniques."
]
} |
1503.02398 | 2185523795 | In the co-sparse analysis model a set of filters is applied to a signal out of the signal class of interest yielding sparse signal responses. As such, it may serve as a prior in inverse problems, or for structural analysis of signals that are known to belong to the signal class. The more the model is adapted to the class, the more reliable it is for these purposes. The task of learning such operators for a given class is therefore a crucial problem. In many applications, it is also required that the filter responses are obtained in a timely manner, which can be achieved by filters with a separable structure. Not only can operators of this sort be efficiently used for computing the filter responses, but they also have the advantage that fewer training samples are required to obtain a reliable estimate of the operator. The first contribution of this work is to give theoretical evidence for this claim by providing an upper bound for the sample complexity of the learning process. The second is a stochastic gradient descent (SGD) method designed to efficiently learn analysis operators with separable structures, which incorporates an efficient step size selection. Numerical experiments are provided that link the sample complexity to the convergence speed of the SGD algorithm. | Except for our previous work @cite_28 , to our knowledge the only other analysis operator learning approach that offers a separable structure is proposed in @cite_7 for the two-dimensional setting. Therein, an algorithm is developed that takes as input noisy 2D images @math and then attempts to find @math that minimize @math such that @math , where @math is a positive integer that serves as an upper bound on the number of non-zero entries, and the rows of @math and @math have unit norm. This problem is solved by alternating between a sparse coding stage and an operator update stage that is inspired by the work in @cite_36 and relies on singular value decompositions. | {
"cite_N": [
"@cite_28",
"@cite_36",
"@cite_7"
],
"mid": [
"2963935689",
"1994281301",
"2009741049"
],
"abstract": [
"The ability of having a sparse representation for a certain class of signals has many applications in data analysis, image processing, and other research fields. Among sparse representations, the cosparse analysis model has recently gained increasing interest. Many signals exhibit a multidimensional structure, e.g. images or three-dimensional MRI scans. Most data analysis and learning algorithms use vectorized signals and thereby do not account for this underlying structure. The drawback of not taking the inherent structure into account is a dramatic increase in computational cost. We propose an algorithm for learning a cosparse Analysis Operator that adheres to the preexisting structure of the data, and thus allows for a very efficient implementation. This is achieved by enforcing a separable structure on the learned operator. Our learning algorithm is able to deal with multi- dimensional data of arbitrary order. We evaluate our method on volumetric data at the example of three-dimensional MRI scans.",
"The synthesis-based sparse representation model for signals has drawn considerable interest in the past decade. Such a model assumes that the signal of interest can be decomposed as a linear combination of a few atoms from a given dictionary. In this paper we concentrate on an alternative, analysis-based model, where an analysis operator-hereafter referred to as the analysis dictionary-multiplies the signal, leading to a sparse outcome. Our goal is to learn the analysis dictionary from a set of examples. The approach taken is parallel and similar to the one adopted by the K-SVD algorithm that serves the corresponding problem in the synthesis model. We present the development of the algorithm steps: This includes tailored pursuit algorithms-the Backward Greedy and the Optimized Backward Greedy algorithms, and a penalty function that defines the objective for the dictionary update stage. We demonstrate the effectiveness of the proposed dictionary learning in several experiments, treating synthetic data and real images, and showing a successful and meaningful recovery of the analysis dictionary.",
"An analysis sparse model represents an image signal by multiplying it using an analysis dictionary, leading to a sparse outcome. It transforms an image (two dimensional signal) into a one-dimensional (1D) vector. However, this 1D model ignores the two dimensional property and breaks the local spatial correlation inside images. In this paper, we propose a two dimensional (2D) analysis sparse model. Our 2D model uses two analysis dictionaries to efficiently exploit the horizontal and vertical features simultaneously. The corresponding sparse coding and dictionary learning algorithm are also presented in this paper. The 2D sparse model is further evaluated for image denoising. Experimental results demonstrate our 2D analysis sparse model outperforms a state-of-the-art 1D analysis model in terms of both denoising ability and memory usage."
]
} |
1503.02123 | 1918162579 | Information-Centric Networking (ICN) is an internetworking paradigm that offers an alternative to the current IP-based Internet architecture. ICN's most distinguishing feature is its emphasis on information (content) instead of communication endpoints. One important open issue in ICN is whether negative acknowledgments (NACKs) at the network layer are useful for notifying downstream nodes about forwarding failures, or requests for incorrect or non-existent information. In benign settings, NACKs are beneficial for ICN architectures, such as CCNx and NDN, since they flush state in routers and notify consumers. In terms of security, NACKs seem useful as they can help mitigate so-called Interest Flooding attacks. However, as we show in this paper, network-layer NACKs also have some unpleasant security implications. We consider several types of NACKs and discuss their security design requirements and implications. We also demonstrate that providing secure NACKs triggers the threat of producer-bound flooding attacks. Although we discuss some potential countermeasures to these attacks, the main conclusion of this paper is that network-layer NACKs are best avoided, at least for security reasons. | In broadcast (one-to-many) communications, NACKs are preferred over ACKs to reduce network congestion and packet collisions @cite_18 . This is because selective NACKs reduce the number of packets sent by receivers, hence lowering the probability of packet collisions. However, NACK-based mechanisms are prone to NACK implosion: in case of packet loss, the sender receives many NACKs from all receivers. @cite_7 propose a time-based mechanism to reduce NACK implosion. Every receiver detecting a packet loss initiates a random timer. The receiver having the shortest random interval unicasts a NACK to the sender, which immediately multicasts the NACK to the other receivers. 
All other receivers having the same missing packet thereupon suppress their own NACKs. In @cite_25 , the authors demonstrate that the delay incurred by a NACK-suppression mechanism does not affect the performance of NACK-based multicast flow control. | {
"cite_N": [
"@cite_18",
"@cite_25",
"@cite_7"
],
"mid": [
"2166910390",
"2158914851",
"1865238898"
],
"abstract": [
"Sender-initiated reliable multicast protocols based on the use of positive acknowledgments (ACKs) can suffer performance degradation as the number of receivers increases. This degradation is due to the fact that the sender must bear much of the complexity associated with reliable data transfer (e.g., maintaining state information and timers for each of the receivers and responding to receivers' ACKs). A potential solution to this problem is to shift the burden of providing reliable data transfer to the receivers-thus resulting in receiver-initiated multicast error control protocols based on the use of negative acknowledgments (NAKs). We determine the maximum throughputs for generic sender-initiated and receiver-initiated protocols for two classes of applications: (1) one-many applications where one participant sends data to a set of receivers and (2) many-many applications where all participants simultaneously send and receive data to from each other. We show that a receiver-initiated error control protocol which requires receivers to transmit NAKs point-to-point to the sender provides higher throughput than a sender-initiated counterpart for both classes of applications. We further demonstrate that, in the case of a one many application, replacing point-to-point transfer of NAKs with multicasting of NAKs coupled with a random backoff procedure provides a substantial additional increase in the throughput of a receiver-initiated error control protocol over a sender-initiated protocol. We also find, however, that such a modification leads to a throughput degradation in the case of many-many applications.",
"In this paper, we evaluate the performance of flow control schemes for reliable multicast under several retransmission approaches in terms of scalability. The schemes examined are a window-based flow control scheme for ACK-based retransmission approaches and a rate-based flow control scheme for NAK-based retransmission approaches. Our simulation results show that the NAK-based flow control scheme has better scalability and higher throughput than the ACK-based flow control scheme, and the delay incurred by a NAK-suppression mechanism does not affect the performance of multicast flow control.",
"In a multicasting system content is multicast from a sender to a plurality of receivers over a data network. Each receiver independently determines whether it is missing elements or packets of the content. Receivers having missing content each initiate a random timer. The receiver having the shortest random interval unicasts a negative acknowledgement to the sender, which immediately multicasts the negative acknowledgement to the other receivers. All other receivers having the same missing packet thereupon suppress their own negative acknowledgements as to that packet. A repair transmission is then multicast by the sender to all receivers. The random intervals have upper and lower bounds according to the round trip transmission time and the size of the largest missing data element."
]
} |
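The timer-based NACK suppression scheme described in the related_work field above (@cite_7) lends itself to a short simulation. The sketch below is a minimal, idealized model: it assumes all receivers detect the same loss and that the sender's multicast copy of the first NACK reaches every other receiver before its own timer fires; the function name and parameters are our own, not taken from the cited work.

```python
import random

def simulate_nack_suppression(n_receivers, lower, upper, seed=0):
    """Idealized simulation of timer-based NACK suppression.

    Each receiver that detects the lost packet draws a random back-off
    interval in [lower, upper]; only the receiver whose timer fires
    first unicasts a NACK. The sender immediately multicasts that NACK,
    so all remaining receivers cancel their own pending NACKs.
    Returns (NACKs that reach the sender, NACKs suppressed).
    """
    rng = random.Random(seed)
    timers = [rng.uniform(lower, upper) for _ in range(n_receivers)]
    first = min(range(n_receivers), key=timers.__getitem__)
    nacks_sent = 1  # only receiver `first` unicasts its NACK
    # every other receiver hears the multicast copy before its own
    # timer expires (idealized: zero propagation delay) and suppresses
    suppressed = n_receivers - 1
    return nacks_sent, suppressed

sent, suppressed = simulate_nack_suppression(100, 0.0, 1.0)
```

Under these assumptions exactly one NACK reaches the sender regardless of the group size, which is the point of the suppression mechanism.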
1503.02123 | 1918162579 | Information-Centric Networking (ICN) is an internetworking paradigm that offers an alternative to the current IP-based Internet architecture. ICN's most distinguishing feature is its emphasis on information (content) instead of communication endpoints. One important open issue in ICN is whether negative acknowledgments (NACKs) at the network layer are useful for notifying downstream nodes about forwarding failures, or requests for incorrect or non-existent information. In benign settings, NACKs are beneficial for ICN architectures, such as CCNx and NDN, since they flush state in routers and notify consumers. In terms of security, NACKs seem useful as they can help mitigate so-called Interest Flooding attacks. However, as we show in this paper, network-layer NACKs also have some unpleasant security implications. We consider several types of NACKs and discuss their security design requirements and implications. We also demonstrate that providing secure NACKs triggers the threat of producer-bound flooding attacks. Although we discuss some potential countermeasures to these attacks, the main conclusion of this paper is that network-layer NACKs are best avoided, at least for security reasons. | In 802.11 networks, selective NACKs can be used in the RTS/CTS handshake mechanism in order to reduce network congestion and packet collisions. The result is a considerable throughput improvement and delay reduction @cite_8 @cite_29 . In @cite_1 , NACKs at the data-link layer are combined with NACKs at the transport layer in order to improve video streaming performance over 3G cellular networks. In case of frame loss, a mobile device sends a selective data-link NACK to the base station. If the lost frame has not been recovered after several successive NACKs, a transport-layer NACK is sent requesting retransmission of the entire packet. | {
"cite_N": [
"@cite_29",
"@cite_1",
"@cite_8"
],
"mid": [
"2033535733",
"2159018930",
"2166296040"
],
"abstract": [
"The main objective of the MAC protocol in IEEE 802.11 standard is to access and control the shared limited bandwidth medium efficiently and fairly among all nodes. A maximum throughput is difficult to achieve due to the limited bandwidth of the wireless ad hoc networks, packet overhead, and hidden and the exposed terminal problems. The use of NACK control packets instead of ACKs can minimize network congestion and packet collisions. In this paper, we implement and investigate the use of NACKs over the RTS CTS handshake mechanism for single-hop wireless ad-hoc networks. The results show a significant performance improvement of throughput, and system delay for various network sizes and BERs.",
"The wireless channel is time-varying where burst packet losses often occur during the fading or lossy handovers. In order to avoid unaccepted quality degradation of video streaming over 3G cellular networks, we propose and analyze a client-driven scalable cross-layer (CSC) retransmission scheme. Considering the perceptual importance of different video partitions under the real-time and bandwidth constraints, the proposed scheme uses the radio link-layer retransmission with priority to adapt conventional packet losses in wireless channels; furthermore, it uses the adaptive transport-layer retransmission to provide end-to-end quality-of-service (QoS) guarantees over cellular networks. The simulation experiments show that the proposed scheme can effectively improve the perceptual quality of 3G video streaming as compared to the traditional deadline-based scheme without the prioritized link-layer retransmission.",
"We present a reliable broadcast protocol designed for use in ad hoc networks. The protocol ensures reliable, in-order, duplicate-free delivery of objects to a node's one-hop neighbor set. The flexibility and efficiency of the protocol hinges on the fact that the responsibility for reliable delivery rests solely at the receiver. Furthermore, when used on top of an 802.11-like MAC layer, we show that characteristics of an RTS CTS mechanism can be used to further improve protocol performance."
]
} |
1503.02417 | 2950314339 | Linguistic structures exhibit a rich array of global phenomena, however commonly used Markov models are unable to adequately describe these phenomena due to their strong locality assumptions. We propose a novel hierarchical model for structured prediction over sequences and trees which exploits global context by conditioning each generation decision on an unbounded context of prior decisions. This builds on the success of Markov models but without imposing a fixed bound in order to better represent global phenomena. To facilitate learning of this large and unbounded model, we use a hierarchical Pitman-Yor process prior which provides a recursive form of smoothing. We propose prediction algorithms based on A* and Markov Chain Monte Carlo sampling. Empirical results demonstrate the potential of our model compared to baseline finite-context Markov models on part-of-speech tagging and syntactic parsing. | Earlier work in syntactic parsing has also looked into growing both the history vertically and the rules horizontally, in a setting. @cite_1 has increased the history for the parsing task by parent-annotation, i.e., annotating each non-terminal in the training parse trees by its parent, and then reading off the grammar rules from the resulting trees. @cite_22 have considered vertical and horizontal markovization while using the head words' part-of-speech tag, and showed that increasing the size of the vertical contexts consistently improves the parsing performance. @cite_25 , @cite_16 and @cite_14 have treated non-terminal annotations as latent variables and estimated them from the data. | {
"cite_N": [
"@cite_14",
"@cite_22",
"@cite_1",
"@cite_16",
"@cite_25"
],
"mid": [
"2152561660",
"2097606805",
"1551104980",
"3313028",
""
],
"abstract": [
"This paper defines a generative probabilistic model of parse trees, which we call PCFG-LA. This model is an extension of PCFG in which non-terminal symbols are augmented with latent variables. Fine-grained CFG rules are automatically induced from a parsed corpus by training a PCFG-LA model using an EM-algorithm. Because exact parsing with a PCFG-LA is NP-hard, several approximations are described and empirically compared. In experiments using the Penn WSJ corpus, our automatically trained model gave a performance of 86.6 (F1, sentences ≤ 40 words), which is comparable to that of an unlexicalized PCFG parser created using extensive manual feature selection.",
"We demonstrate that an unlexicalized PCFG can parse much more accurately than previously shown, by making use of simple, linguistically motivated state splits, which break down false independence assumptions latent in a vanilla treebank grammar. Indeed, its performance of 86.36 (LP LR F1) is better than that of early lexicalized PCFG models, and surprisingly close to the current state-of-the-art. This result has potential uses beyond establishing a strong lower bound on the maximum possible accuracy of unlexicalized models: an unlexicalized PCFG is much more compact, easier to replicate, and easier to interpret than more complex lexical models, and the parsing algorithms are simpler, more widely understood, of lower asymptotic complexity, and easier to optimize.",
"The kinds of tree representations used in a treebank corpus can have a dramatic effect on performance of a parser based on the PCFG estimated from that corpus, causing the estimated likelihood of a tree to differ substantially from its frequency in the training corpus. This paper points out that the Penn II treebank representations are of the kind predicted to have such an effect, and describes a simple node relabeling transformation that improves a treebank PCFG-based parser's average precision and recall by around 8 , or approximately half of the performance difference between a simple PCFG model and the best broad-coverage parsers available today. This performance variation comes about because any PCFG, and hence the corpus of trees from which the PCFG is induced, embodies independence assumptions about the distribution of words and phrases. The particular independence assumptions implicit in a tree representation can be studied theoretically and investigated empirically by means of a tree transformation detransformation process.",
"Treebank parsing can be seen as the search for an optimally refined grammar consistent with a coarse training treebank. We describe a method in which a minimal grammar is hierarchically refined using EM to give accurate, compact grammars. The resulting grammars are extremely compact compared to other high-performance parsers, yet the parser gives the best published accuracies on several languages, as well as the best generative parsing numbers in English. In addition, we give an associated coarse-to-fine inference scheme which vastly improves inference time with no loss in test set accuracy.",
""
]
} |
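Parent annotation (@cite_1), mentioned in the related_work above, is simple enough to sketch: each non-terminal in a training tree is relabeled with its parent's label before grammar rules are read off. The snippet below is an illustrative implementation assuming trees are nested tuples with string leaves; the representation and function name are ours, not the cited paper's.

```python
def parent_annotate(tree, parent=None):
    """Annotate every non-terminal with its parent's label, as in the
    parent-annotation transform of @cite_1 (labels become e.g. 'NP^S').

    A tree is (label, child, ...) for non-terminals and a plain string
    for terminals (words).
    """
    if isinstance(tree, str):   # terminal: leave words untouched
        return tree
    label, *children = tree
    new_label = f"{label}^{parent}" if parent else label
    # children are annotated with this node's *original* label
    return (new_label, *(parent_annotate(c, label) for c in children))

t = ("S", ("NP", "she"), ("VP", ("V", "runs")))
parent_annotate(t)
# → ('S', ('NP^S', 'she'), ('VP^S', ('V^VP', 'runs')))
```

Reading PCFG rules off the transformed trees gives each rule one extra level of vertical context, which is exactly the "growing the history vertically" idea discussed above.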
1503.02417 | 2950314339 | Linguistic structures exhibit a rich array of global phenomena, however commonly used Markov models are unable to adequately describe these phenomena due to their strong locality assumptions. We propose a novel hierarchical model for structured prediction over sequences and trees which exploits global context by conditioning each generation decision on an unbounded context of prior decisions. This builds on the success of Markov models but without imposing a fixed bound in order to better represent global phenomena. To facilitate learning of this large and unbounded model, we use a hierarchical Pitman-Yor process prior which provides a recursive form of smoothing. We propose prediction algorithms based on A* and Markov Chain Monte Carlo sampling. Empirical results demonstrate the potential of our model compared to baseline finite-context Markov models on part-of-speech tagging and syntactic parsing. | Likewise, finite-state hidden Markov models (HMMs) have been extended to have a countably infinite number of states @cite_9 . Previous works on applying Markov models to part-of-speech tagging either considered finite-order Markov models @cite_26 , or finite-order HMMs @cite_17 . We differ from these works by conditioning the emissions and transitions on their contexts. | {
"cite_N": [
"@cite_9",
"@cite_26",
"@cite_17"
],
"mid": [
"",
"2950121111",
"2155280192"
],
"abstract": [
"",
"Trigrams'n'Tags (TnT) is an efficient statistical part-of-speech tagger. Contrary to claims found elsewhere in the literature, we argue that a tagger based on Markov models performs at least as well as other current approaches, including the Maximum Entropy framework. A recent comparison has even shown that TnT performs significantly better for the tested corpora. We describe the basic model of TnT, the techniques used for smoothing and for handling unknown words. Furthermore, we present evaluations on two corpora.",
"This paper describes an extension to the hidden Markov model for part-of-speech tagging using second-order approximations for both contextual and lexical probabilities. This model increases the accuracy of the tagger to state of the art levels. These approximations make use of more contextual information than standard statistical systems. New methods of smoothing the estimated probabilities are also introduced to address the sparse data problem."
]
} |
1503.02413 | 1908328866 | Resource allocation for cloud services is a complex task due to the diversity of the services and the dynamic workloads. One way to address this is by overprovisioning which results in high cost due to the unutilized resources. A much more economical approach, relying on the stochastic nature of the demand, is to allocate just the right amount of resources and use additional more expensive mechanisms in case of overflow situations where demand exceeds the capacity. In this paper we study this approach and show both by comprehensive analysis for independent normal distributed demands and simulation on synthetic data that it is significantly better than currently deployed methods. | Early work on VM placement (e.g., @cite_2 @cite_0 @cite_4 ) models the problem as a deterministic bin packing problem, namely, for every service there is an estimate of its demand. Stochastic bin packing was first suggested by Kleinberg, Rabani and Tardos @cite_7 for statistical multiplexing. @cite_7 mostly considered Bernoulli-type distributions. Goel and Indyk @cite_1 further studied Poisson and exponential distributions. Wang, Meng and Zhang @cite_3 suggested to model real data with the . Thus, the input to their stochastic packing problem is @math independent services each with demand requirement distributed according to a distribution @math that is normal with mean @math and variance @math . The output is some partition of the services to bins in a way that minimizes a target function that differs from problem to problem. | {
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_1",
"@cite_3",
"@cite_0",
"@cite_2"
],
"mid": [
"2129532872",
"2044081853",
"1825072153",
"2110458625",
"2160756722",
"7574676"
],
"abstract": [
"Renewed focus on virtualization technologies and increased awareness about management and power costs of running under-utilized servers has spurred interest in consolidating existing applications on fewer number of servers in the data center. The ability to migrate virtual machines dynamically between physical servers in real-time has also added a dynamic aspect to consolidation. However, there is a lack of planning tools that can analyze historical data collected from an existing environment and compute the potential benefits of server consolidation especially in the dynamic setting. In this paper we describe such a consolidation recommendation tool, called ReCon. Recon takes static and dynamic costs of given servers, the costs of VM migration, the historical resource consumption data from the existing environment and provides an optimal dynamic plan of VM to physical server mapping over time. We also present the results of applying the tool on historical data obtained from a large production environment.",
"In this paper, we undertake the first study of statistical multiplexing from the perspective of approximation algorithms. The basic issue underlying statistical multiplexing is the following: in high-speed networks, individual connections (i.e., communication sessions) are very bursty, with transmission rates that vary greatly over time. As such, the problem of packing multiple connections together on a link becomes more subtle than in the case when each connection is assumed to have a fixed demand. We consider one of the most commonly studied models in this domain: that of two communicating nodes connected by a set of parallel edges, where the rate of each connection between them is a random variable. We consider three related problems: (1) stochastic load balancing, (2) stochastic bin-packing, and (3) stochastic knapsack. In the first problem the number of links is given and we want to minimize the expected value of the maximum load. In the other two problems the link capacity and an allowed overflow probability p are given, and the objective is to assign connections to links, so that the probability that the load of a link exceeds the link capacity is at most @math . In bin-packing we need to assign each connection to a link using as few links as possible. In the knapsack problem each connection has a value, and we have only one link. The problem is to accept as many connections as possible. For the stochastic load balancing problem we give an O(1)-approximation algorithm for arbitrary random variables. For the other two problems we have algorithms restricted to on-off sources (the most common special case studied in the statistical multiplexing literature), with a somewhat weaker range of performance guarantees. 
A standard approach that has emerged for dealing with probabilistic resource requirements is the notion of effective bandwidth---this is a means of associating a fixed demand with a bursty connection that \"represents\" its distribution as closely as possible. Our approximation algorithms make use of the standard definition of effective bandwidth and also a new one that we introduce; the performance guarantees are based on new results showing that a combination of these measures can be used to provide bounds on the optimal solution.",
"We study the problems of makespan minimization (load balancing), knapsack, and bin packing when the jobs have stochastic processing requirements or sizes. If the jobs are all Poisson, we present a two approximation for the first problem using Graham's rule, and observe that polynomial time approximation schemes can be obtained for the last two problems. If the jobs are all exponential, we present polynomial time approximation schemes for all three problems. We also obtain quasi-polynomial time approximation schemes for the last two problems if the jobs are Bernoulli variables.",
"Recent advances in virtualization technology have made it a common practice to consolidate virtual machines(VMs) into a fewer number of servers. An efficient consolidation scheme requires that VMs are packed tightly, yet receive resources commensurate with their demands. However, measurements from production data centers show that the network bandwidth demands of VMs are dynamic, making it difficult to characterize the demands by a fixed value and to apply traditional consolidation schemes. In this work, we formulate the VM consolidation into a Stochastic Bin Packing problem and propose an online packing algorithm by which the number of servers required is within equation of the optimum for any ∈ > 0. The result can be improved to within equation of the optimum in a special case. In addition, we use numerical experiments to evaluate the proposed consolidation algorithm and observe 30 server reduction compared to several benchmark algorithms.",
"A dynamic server migration and consolidation algorithm is introduced. The algorithm is shown to provide substantial improvement over static server consolidation in reducing the amount of required capacity and the rate of service level agreement violations. Benefits accrue for workloads that are variable and can be forecast over intervals shorter than the time scale of demand variability. The management algorithm reduces the amount of physical capacity required to support a specified rate of SLA violations for a given workload by as much as 50 as compared to static consolidation approach. Another result is that the rate of SLA violations at fixed capacity may be reduced by up to 20 . The results are based on hundreds of production workload traces across a variety of operating systems, applications, and industries.",
"There is disclosed a vehicle wash apparatus including a wash arch frame carried on tracks extending longitudinally of a wash stall in which a car to be washed is parked. Overhead arms are pivotally carried on their respective one ends from an overhead transverse horizontal pivot shaft and mount a horizontally disposed rotary top brush from the free ends thereof for selective lowering into a position for contacting the front of a vehicle to wash the grill and for travel rearwardly along the hood, top and trunk of the vehicle. Side brush arms are carried pivotally from the sides of the arch frame for rotation about vertical axes and project in a direction opposite the direction in which the top brush arms, in their retracted positions, project and are urged inwardly to wash the sides of the car as the arch carries the top brush rearwardly on the tracks. Such side arms are pivoted inwardly as the side brushes clear the rear corners of the car to carry the side brushes about the rear corners of the vehicle to wash the rear thereof as the top brush completes washing of the top of the trunk."
]
} |
1503.02413 | 1908328866 | Resource allocation for cloud services is a complex task due to the diversity of the services and the dynamic workloads. One way to address this is by overprovisioning which results in high cost due to the unutilized resources. A much more economical approach, relying on the stochastic nature of the demand, is to allocate just the right amount of resources and use additional more expensive mechanisms in case of overflow situations where demand exceeds the capacity. In this paper we study this approach and show both by comprehensive analysis for independent normal distributed demands and simulation on synthetic data that it is significantly better than currently deployed methods. | A naive approach to such a problem is to reduce it to classical bin packing as follows: for the @math 'th service define its effective demand as the number @math such that the probability that @math is larger than @math is small; then solve the classical bin packing problem (or a variant of it) with item sizes @math . However, @cite_3 showed this approach can be quite wasteful, mostly because it adds extra space per service and not per bin. To demonstrate the issue, think about unbiased, independent coin tosses. The probability that one coin toss significantly deviates from its mean is @math , while the probability that @math independent coin tosses significantly deviate from the mean is exponentially small. When running independent trials there is a smoothing effect that considerably reduces the chance of high deviations. This can also be seen from the fact that the standard deviation of @math independent, identical processes is only @math times the standard deviation of one process, and so the standard deviation grows much slower than the number of processes. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2110458625"
],
"abstract": [
"Recent advances in virtualization technology have made it a common practice to consolidate virtual machines(VMs) into a fewer number of servers. An efficient consolidation scheme requires that VMs are packed tightly, yet receive resources commensurate with their demands. However, measurements from production data centers show that the network bandwidth demands of VMs are dynamic, making it difficult to characterize the demands by a fixed value and to apply traditional consolidation schemes. In this work, we formulate the VM consolidation into a Stochastic Bin Packing problem and propose an online packing algorithm by which the number of servers required is within equation of the optimum for any ∈ > 0. The result can be improved to within equation of the optimum in a special case. In addition, we use numerical experiments to evaluate the proposed consolidation algorithm and observe 30 server reduction compared to several benchmark algorithms."
]
} |
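The smoothing effect described in the related_work above can be checked with a one-line computation: padding each of n services separately costs n times the per-service safety margin, while the pooled demand of n independent N(mu, sigma^2) services is N(n*mu, n*sigma^2) and needs only a sqrt(n)-sized margin. The numbers below are our own illustrative choices, not from the paper.

```python
import math

def padded_capacity_per_service(n, mu, sigma, z):
    # naive reduction: pad every service independently by z*sigma
    return n * (mu + z * sigma)

def padded_capacity_pooled(n, mu, sigma, z):
    # pooled padding: the sum of n independent N(mu, sigma^2) demands
    # is N(n*mu, n*sigma^2), so one shared pad of z*sqrt(n)*sigma suffices
    return n * mu + z * math.sqrt(n) * sigma

naive = padded_capacity_per_service(100, 10.0, 2.0, 3.0)   # 1600.0
pooled = padded_capacity_pooled(100, 10.0, 2.0, 3.0)       # 1060.0
```

Here the pooled pad is 60 units instead of 600, reflecting the fact that the standard deviation of the aggregate grows only as the square root of the number of services.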
1503.02413 | 1908328866 | Resource allocation for cloud services is a complex task due to the diversity of the services and the dynamic workloads. One way to address this is by overprovisioning which results in high cost due to the unutilized resources. A much more economical approach, relying on the stochastic nature of the demand, is to allocate just the right amount of resources and use additional more expensive mechanisms in case of overflow situations where demand exceeds the capacity. In this paper we study this approach and show both by comprehensive analysis for independent normal distributed demands and simulation on synthetic data that it is significantly better than currently deployed methods. | Breitgand and Epstein @cite_6 , building on @cite_3 , suggest an algorithm for stochastic bin packing that takes advantage of this smoothing effect. The algorithm assumes all bins have equal capacity. The algorithm first sorts the processes by their variance-to-mean ratio (VMR), i.e., @math . Then the algorithm finds the largest prefix of the sorted list such that allocating that set of services to the first bin makes the probability that the first bin overflows at most @math . The algorithm then proceeds bin by bin, each time allocating a prefix of the remaining services on the sorted list to the next bin. @cite_6 show that if we allow fractional solutions, i.e., we allow splitting services between bins, the algorithm finds an optimal solution, and also show an online, integral version that gives a @math -approximation to the optimum. | {
"cite_N": [
"@cite_3",
"@cite_6"
],
"mid": [
"2110458625",
"1998448657"
],
"abstract": [
"Recent advances in virtualization technology have made it a common practice to consolidate virtual machines(VMs) into a fewer number of servers. An efficient consolidation scheme requires that VMs are packed tightly, yet receive resources commensurate with their demands. However, measurements from production data centers show that the network bandwidth demands of VMs are dynamic, making it difficult to characterize the demands by a fixed value and to apply traditional consolidation schemes. In this work, we formulate the VM consolidation into a Stochastic Bin Packing problem and propose an online packing algorithm by which the number of servers required is within equation of the optimum for any ∈ > 0. The result can be improved to within equation of the optimum in a special case. In addition, we use numerical experiments to evaluate the proposed consolidation algorithm and observe 30 server reduction compared to several benchmark algorithms.",
"Current trends in virtualization, green computing, and cloud computing require ever increasing efficiency in consolidating virtual machines without degrading quality of service. In this work, we consider consolidating virtual machines on the minimum number of physical containers (e.g., hosts or racks) in a cloud where the physical network (e.g., network interface or top of the rack switch link) may become a bottleneck. Since virtual machines do not simultaneously use maximum of their nominal bandwidth, the capacity of the physical container can be multiplexed. We assume that each virtual machine has a probabilistic guarantee on realizing its bandwidth Requirements-as derived from its Service Level Agreement with the cloud provider. Therefore, the problem of consolidating virtual machines on the minimum number of physical containers, while preserving these bandwidth allocation guarantees, can be modeled as a Stochastic Bin Packing (SBP) problem, where each virtual machine's bandwidth demand is treated as a random variable. We consider both offline and online versions of SBP. Under the assumption that the virtual machines' bandwidth consumption obeys normal distribution, we show a 2-approximation algorithm for the offline version and improve the previously reported results by presenting a (2 +∈)-competitive algorithm for the online version. We also observe that a dual polynomial-time approximation scheme (PTAS) for SBP can be obtained via reduction to the two-dimensional vector bin packing problem. Finally, we perform a thorough performance evaluation study using both synthetic and real data to evaluate the behavior of our proposed algorithms, showing their practical applicability."
]
} |
1503.01655 | 1886356758 | Hyperlinks and other relations in Wikipedia are an extraordinary resource which is still not fully understood. In this paper we study the different types of links in Wikipedia, and contrast the use of the full graph with respect to just direct links. We apply a well-known random walk algorithm on two tasks, word relatedness and named-entity disambiguation. We show that using the full graph is more effective than just direct links by a large margin, that non-reciprocal links harm performance, and that there is no benefit from categories and infoboxes, with coherent results on both tasks. We set new state-of-the-art figures for systems based on Wikipedia links, comparable to systems exploiting several information sources and/or supervised machine learning. Our approach is open source, with instructions to reproduce results, and amenable to be integrated with complementary text-based methods. | For easier exposition, we will examine the results by row section simultaneously on relatedness and NED. The rows in Table report four relatedness systems which have already been presented in Sect. , showing that our system is best in all five datasets. Note that the @cite_0 row was obtained by running their publicly available system with the supervised Machine Learning component turned off (see below for the results using SUP). The top rows of the table report the most frequent baseline (as produced by our dictionary) and three link-based systems (cf. Sect. ), showing that our method is best in all five datasets. These results show that the use of the full graph as devised in this paper is a winning strategy. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2031046392"
],
"abstract": [
"The online encyclopedia Wikipedia is a vast, constantly evolving tapestry of interlinked articles. For developers and researchers it represents a giant multilingual database of concepts and semantic relations, a potential resource for natural language processing and many other research areas. This paper introduces the Wikipedia Miner toolkit, an open-source software system that allows researchers and developers to integrate Wikipedia's rich semantics into their own applications. The toolkit creates databases that contain summarized versions of Wikipedia's content and structure, and includes a Java API to provide access to them. Wikipedia's articles, categories and redirects are represented as classes, and can be efficiently searched, browsed, and iterated over. Advanced features include parallelized processing of Wikipedia dumps, machine-learned semantic relatedness measures and annotation features, and XML-based web services. Wikipedia Miner is intended to be a platform for sharing data mining techniques."
]
} |
1503.01655 | 1886356758 | Hyperlinks and other relations in Wikipedia are an extraordinary resource which is still not fully understood. In this paper we study the different types of links in Wikipedia, and contrast the use of the full graph with respect to just direct links. We apply a well-known random walk algorithm on two tasks, word relatedness and named-entity disambiguation. We show that using the full graph is more effective than just direct links by a large margin, that non-reciprocal links harm performance, and that there is no benefit from categories and infoboxes, with coherent results on both tasks. We set new state-of-the-art figures for systems based on Wikipedia links, comparable to systems exploiting several information sources and/or supervised machine learning. Our approach is open source, with instructions to reproduce results, and amenable to be integrated with complementary text-based methods. | The relatedness results in Table include several systems using WordNet and/or Wikipedia (cf. Sect. ), including the system in @cite_43 , which we run out-of-the-box with default values. To date, link-based systems using WordNet had reported stronger results than their counterparts on Wikipedia, but the table shows that our Wikipedia-based results are the strongest on all relatedness datasets but one (MC, the smallest dataset, with only 30 pairs). In addition, the table shows our results when combining random walks on Wikipedia and WordNet (we multiply the scores obtained on Wikipedia and WordNet), which yields improvements in most datasets. In the counterpart for NED in Table , they outperform our system, especially in the smaller KORE (143 instances), but note that they use a richer graph which combines WordNet, the English Wikipedia and hyperlinks from other language Wikipedias. | {
"cite_N": [
"@cite_43"
],
"mid": [
"2130173309"
],
"abstract": [
"Graph-based similarity over WordNet has been previously shown to perform very well on word similarity. This paper presents a study of the performance of such a graph-based algorithm when using different relations and versions of WordNet. Some of the relations are part of the official release of WordNet, and others have been derived automatically. The results show that using the adequate relations the performance improves over previously published WordNet-based results on the WordSim353 dataset. The similarity software and some graphs used in this paper are publicly available at http://ixa2.si.ehu.es/ukb"
]
} |
1503.01811 | 1936395948 | We develop a worst-case analysis of aggregation of classifier ensembles for binary classification. The task of predicting to minimize error is formulated as a game played over a given set of unlabeled data (a transductive setting), where prior label information is encoded as constraints on the game. The minimax solution of this game identifies cases where a weighted combination of the classifiers can perform significantly better than any single classifier. | Weighted majority votes are a nontrivial ensemble aggregation method that has received focused theoretical attention for classification. Of particular note is the literature on boosting for forming ensembles, in which the classic work of @cite_13 shows general bounds on the error of a weighted majority vote @math under any distribution @math , based purely on the distribution of a version of the margin on labeled data. | {
"cite_N": [
"@cite_13"
],
"mid": [
"1975846642"
],
"abstract": [
"One of the surprising recurring phenomena observed in experiments with boosting is that the test error of the generated classifier usually does not increase as its size becomes very large, and often is observed to decrease even after the training error reaches zero. In this paper, we show that this phenomenon is related to the distribution of margins of the training examples with respect to the generated voting classification rule, where the margin of an example is simply the difference between the number of correct votes and the maximum number of votes received by any incorrect label. We show that techniques used in the analysis of Vapnik's support vector classifiers and of neural networks with small weights can be applied to voting methods to relate the margin distribution to the test error. We also show theoretically and experimentally that boosting is especially effective at increasing the margins of the training examples. Finally, we compare our explanation to those based on the bias-variance decomposition."
]
} |
1503.01811 | 1936395948 | We develop a worst-case analysis of aggregation of classifier ensembles for binary classification. The task of predicting to minimize error is formulated as a game played over a given set of unlabeled data (a transductive setting), where prior label information is encoded as constraints on the game. The minimax solution of this game identifies cases where a weighted combination of the classifiers can perform significantly better than any single classifier. | Our emphasis on the benefit of considering global effects (our transductive setting) even when data are i.i.d. is in the spirit of the idea of shrinkage, well known in statistical literature since at least the James-Stein estimator ( @cite_8 ). | {
"cite_N": [
"@cite_8"
],
"mid": [
"1991769471"
],
"abstract": [
"A compact multipurpose motion picture film handling cassette useful during exposure, processing and projection operations. An exposure station is located adjacent a corner of the cassette. This arrangement permits the cassette to be mounted in the handle section of a uniquely compact camera and, then subsequently, in a uniquely compact processor-projector unit. The cassette may also include a normally inoperative film processing station. A resilient member of the cassette extends in spaced relationship to idlers intermediate of the exposure station and the cassette's takeup reel and the cassette includes means to receive a force applying member into operative relationship with this resilient member. This resilient member automatically snubs the adjacent idlers when mounted in the camera and is adapted to selectively snub these same idlers when the cassette is in the processor-projector unit."
]
} |
1503.01706 | 2951202566 | Consider the continuum of points along the edges of a network, i.e., a connected, undirected graph with positive edge weights. We measure the distance between these points in terms of the weighted shortest path distance, called the network distance. Within this metric space, we study farthest points and farthest distances. We introduce a data structure supporting queries for the farthest distance and the farthest points on two-terminal series-parallel networks. This data structure supports farthest-point queries in @math time after @math construction time, where @math is the number of farthest points, @math is the size of the network, and @math parallel operations are required to generate the network. | A network Voronoi diagram subdivides a network depending on which site is closest @cite_3 or farthest @cite_12 @cite_1 among a finite set of sites. Any data structure for farthest-point queries on a network represents a network farthest-point Voronoi diagram where all points on the network are considered sites @cite_5 . | {
"cite_N": [
"@cite_5",
"@cite_1",
"@cite_12",
"@cite_3"
],
"mid": [
"",
"2014635219",
"2000879295",
"2168409112"
],
"abstract": [
"",
"In the real world, there are many phenomena that occur on a network or alongside a network; for example, traffic accidents on highways and retail stores along streets in an urbanized area. In the literature, these phenomena are analysed under the assumption that distance is measured with Euclidean distance on a plane. This paper first examines this assumption and shows an empirical finding that Euclidean distance is significantly different from the shortest path distance in an urbanized area if the distance is less than 500 m. This implies that service areas in urbanized areas cannot be well represented by Voronoi diagrams defined on a plane with Euclidean distance, termed generalized planar Voronoi diagrams. To overcome this limitation, second, this paper formulates six types of Voronoi diagrams defined on a network, termed generalized network Voronoi diagrams, whose generators are given by points, sets of points, lines and polygons embedded in a network, and whose distances are given by inward/outward distances, and additively/multiplicatively weighted shortest path distances. Third, in comparison with the generalized planar Voronoi diagrams, the paper empirically shows that the generalized network Voronoi diagrams can more precisely represent the service areas in urbanized areas than the corresponding planar Voronoi diagrams. Fourth, because the computational methods for constructing the generalized planar Voronoi diagrams in the literature cannot be applied to constructing the generalized network Voronoi diagrams, the paper provides newly developed efficient algorithms using the 'extended' shortest path trees. Last, the paper develops user-friendly tools (that are included in SANET, a toolbox for spatial analysis on a network) for executing these computational methods in a GIS environment.",
"The Voronoi diagram is a famous structure of computational geometry. We show that there is a straightforward equivalent in graph theory which can be efficiently computed. In particular, we give two algorithms for the computation of graph Voronoi diagrams, prove a lower bound on the problem, and identify cases where the algorithms presented are optimal. The space requirement of a graph Voronoi diagram is modest, since it needs no more space than does the graph itself. The investigation of graph Voronoi diagrams is motivated by many applications and problems on networks that can be easily solved with their help. This includes the computation of nearest facilities, all nearest neighbors and closest pairs, some kind of collision free moving, and anticenters and closest points. © 2000 John Wiley & Sons, Inc.",
"Given a network N(V, E) and a set of points X_p = {x_1, …, x_p} on N, we first present an algorithm for computing the Voronoi partition of N(V, E) into territories T(x_1), …, T(x_p). After describing two ways to measure the “size” of a territory, we introduce and discuss the more challenging problem of selecting X_p so that the maximum size among the resulting territories is as small as possible. For one especially natural way to measure the size of a territory, we show that this latter problem is NP-complete when p is part of the input, but that the problem can be solved in polynomial time for any fixed p."
]
} |
1503.01913 | 2952930431 | We consider a solution of automata similar to Population Protocols and Network Constructors. The automata (or nodes) move passively in a well-mixed solution and can cooperate by interacting in pairs. Every such interaction may result in an update of the local states of the nodes. Additionally, the nodes may also choose to connect to each other in order to start forming some required structure. We may think of such nodes as the smallest possible programmable pieces of matter. The model that we introduce here is a more applied version of Network Constructors, imposing physical (or geometrical) constraints on the connections. Each node can connect to other nodes only via a very limited number of local ports, therefore at any given time it has only a bounded number of neighbors. Connections are always made at unit distance and are perpendicular to connections of neighboring ports. We show that this restricted model is still capable of forming very practical 2D or 3D shapes. We provide direct constructors for some basic shape construction problems. We then develop new techniques for determining the constructive capabilities of our model. One of the main novelties of our approach concerns our attempt to overcome the inability of such systems to detect termination. In particular, we exploit the assumptions that the system is well-mixed and has a unique leader, in order to give terminating protocols that are correct with high probability (w.h.p.). This allows us to develop terminating subroutines that can be sequentially composed to form larger modular protocols. One of our main results is a terminating protocol counting the size @math of the system w.h.p. We then use this protocol as a subroutine in order to develop our universal constructors, establishing that the nodes can self-organize w.h.p. into arbitrarily complex shapes while still detecting termination of the construction.
| Nature has an intrinsic ability to form complex structures and networks via a process known as . By self-assembly, small components (like e.g. molecules) automatically assemble into large, and usually complex structures (like e.g. a crystal). There is an abundance of such examples in the physical world. Lipid molecules form a cell's membrane, ribosomal proteins and RNA coalesce into functional ribosomes, and bacteriophage virus proteins self-assemble a capsid that allows the virus to invade bacteria @cite_21 . Mixtures of RNA fragments that self-assemble into self-replicating ribozymes spontaneously form cooperative catalytic cycles and networks. Such cooperative networks grow faster than selfish autocatalytic cycles indicating an intrinsic ability of RNA populations to evolve greater complexity through cooperation @cite_14 . Through billions of years of prebiotic molecular selection and evolution, nature has produced a basic set of molecules. By combining these simple elements, natural processes are capable of fashioning an enormously diverse range of fabrication units, which can further self-organize into refined structures, materials and molecular machines that not only have high precision, flexibility and error-correction capacity, but are also self-sustaining and evolving. In fact, nature shows a strong preference for bottom-up design. | {
"cite_N": [
"@cite_14",
"@cite_21"
],
"mid": [
"2021492857",
"2044709436"
],
"abstract": [
"In models of early life it has been suggested that life and evolution would be more easily achieved if RNA molecules could interact, rather than function independently; here an in vitro system is designed with several RNA fragments that can assemble into a ribozyme, showing that cooperative networks formed by these fragments outcompete self-catalytic RNA fragments.",
"Self-assembly is the process by which small components automatically assemble themselves into large, complex structures. Examples in nature abound: lipids self-assemble a cell's membrane, and bacteriophage virus proteins self-assemble a capsid that allows the virus to invade other bacteria. Even a phenomenon as simple as crystal formation is a process of self-assembly. How could such a process be described as \"algorithmic?\" The key word in the first sentence is automatically. Algorithms automate a series of simple computational tasks. Algorithmic self-assembly systems automate a series of simple growth tasks, in which the object being grown is simultaneously the machine controlling its own growth."
]
} |
1503.01913 | 2952930431 | We consider a solution of automata similar to Population Protocols and Network Constructors. The automata (or nodes) move passively in a well-mixed solution and can cooperate by interacting in pairs. Every such interaction may result in an update of the local states of the nodes. Additionally, the nodes may also choose to connect to each other in order to start forming some required structure. We may think of such nodes as the smallest possible programmable pieces of matter. The model that we introduce here is a more applied version of Network Constructors, imposing physical (or geometrical) constraints on the connections. Each node can connect to other nodes only via a very limited number of local ports, therefore at any given time it has only a bounded number of neighbors. Connections are always made at unit distance and are perpendicular to connections of neighboring ports. We show that this restricted model is still capable of forming very practical 2D or 3D shapes. We provide direct constructors for some basic shape construction problems. We then develop new techniques for determining the constructive capabilities of our model. One of the main novelties of our approach concerns our attempt to overcome the inability of such systems to detect termination. In particular, we exploit the assumptions that the system is well-mixed and has a unique leader, in order to give terminating protocols that are correct with high probability (w.h.p.). This allows us to develop terminating subroutines that can be sequentially composed to form larger modular protocols. One of our main results is a terminating protocol counting the size @math of the system w.h.p. We then use this protocol as a subroutine in order to develop our universal constructors, establishing that the nodes can self-organize w.h.p. into arbitrarily complex shapes while still detecting termination of the construction.
| Systems and solutions inspired by nature have often turned out to be extremely practical and efficient. For example, the bottom-up approach of nature inspires the fabrication of biomaterials by attempting to mimic these phenomena with the aim of creating new and varied structures with novel utilities well beyond the gifts of nature @cite_11 . Moreover, there is already a remarkable amount of work envisioning our future ability to engineer computing and robotic systems by manipulating molecules with nanoscale precision. Ambitious long-term applications include molecular computers @cite_18 and miniature (nano)robots for surgical instrumentation, diagnosis and drug delivery in medical applications (e.g. it has very recently been reported that DNA nanorobots could even kill cancer cells @cite_26 ) and monitoring in extreme conditions (e.g. in toxic environments). However, the road towards this vision passes first through our ability to discover . The gain of developing such a theory will be twofold: It will give some insight into the role (and the mechanisms) of network formation in the complexity of natural processes and it will allow us to engineer artificial systems that achieve this complexity. | {
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_11"
],
"mid": [
"1999817784",
"1976815031",
"2006225923"
],
"abstract": [
"The processors of most computers work in series, performing one instruction at a time. This limits their ability to perform certain types of tasks in a reasonable period. An approach based on arrays of simultaneously interacting molecular switches could enable previously intractable computational problems to be solved.",
"We describe an autonomous DNA nanorobot capable of transporting molecular payloads to cells, sensing cell surface inputs for conditional, triggered activation, and reconfiguring its structure for payload delivery. The device can be loaded with a variety of materials in a highly organized fashion and is controlled by an aptamer-encoded logic gate, enabling it to respond to a wide array of cues. We implemented several different logical AND gates and demonstrate their efficacy in selective regulation of nanorobot function. As a proof of principle, nanorobots loaded with combinations of antibody fragments were used in two different types of cell-signaling stimulation in tissue culture. Our prototype could inspire new designs with different selectivities and biologically active payloads for cell-targeting tasks.",
"Two complementary strategies can be used in the fabrication of molecular biomaterials. In the 'top-down' approach, biomaterials are generated by stripping down a complex entity into its component parts (for example, paring a virus particle down to its capsid to form a viral cage). This contrasts with the 'bottom-up' approach, in which materials are assembled molecule by molecule (and in some cases even atom by atom) to produce novel supramolecular architectures. The latter approach is likely to become an integral part of nanomaterials manufacture and requires a deep understanding of individual molecular building blocks and their structures, assembly properties and dynamic behaviors. Two key elements in molecular fabrication are chemical complementarity and structural compatibility, both of which confer the weak and noncovalent interactions that bind building blocks together during self-assembly. Using natural processes as a guide, substantial advances have been achieved at the interface of nanomaterials and biology, including the fabrication of nanofiber materials for three-dimensional cell culture and tissue engineering, the assembly of peptide or protein nanotubes and helical ribbons, the creation of living microlenses, the synthesis of metal nanowires on DNA templates, the fabrication of peptide, protein and lipid scaffolds, the assembly of electronic materials by bacterial phage selection, and the use of radiofrequency to regulate molecular behaviors."
]
} |
1503.01812 | 1905106873 | In privacy-preserving data publishing, approaches using Value Generalization Hierarchies (VGHs) form an important class of anonymization algorithms. VGHs play a key role in the utility of published datasets as they dictate how the anonymization of the data occurs. For categorical attributes, it is imperative to preserve the semantics of the original data in order to achieve a higher utility. Despite this, semantics have not been formally considered in the specification of VGHs. Moreover, there are no methods that allow the users to assess the quality of their VGH. In this paper, we propose a measurement scheme, based on ontologies, to quantitatively evaluate the quality of VGHs, in terms of semantic consistency and taxonomic organization, with the aim of producing higher-quality anonymizations. We demonstrate, through a case study, how our evaluation scheme can be used to compare the quality of multiple VGHs and can help to identify faulty VGHs. | Ontologies are structures that model the knowledge of a particular domain. They represent a formal and explicit specification of shared conceptualizations of a domain of interest @cite_38 . Since they are usually created from the consensus of multiple experts, they are widely accepted as accurate, impartial representations of a domain. The concepts in ontologies are associated through relationships. The subsumption relationship () constitutes the backbone of an ontology. However, other types of relationships can exist, such as aggregation (), synonymy (), or other application-specific relationships. An example of an ontology can be seen in Appendix . | {
"cite_N": [
"@cite_38"
],
"mid": [
"2137079713"
],
"abstract": [
"Abstract To support the sharing and reuse of formally represented knowledge among AI systems, it is useful to define the common vocabulary in which shared knowledge is represented. A specification of a representational vocabulary for a shared domain of discourse—definitions of classes, relations, functions, and other objects—is called an ontology. This paper describes a mechanism for defining ontologies that are portable over representation systems. Definitions written in a standard format for predicate calculus are translated by a system called Ontolingua into specialized representations, including frame-based systems as well as relational languages. This allows researchers to share and reuse ontologies, while retaining the computational benefits of specialized implementations. We discuss how the translation approach to portability addresses several technical problems. One problem is how to accommodate the stylistic and organizational differences among representations while preserving declarative content. Another is how to translate from a very expressive language into restricted languages, remaining system-independent while preserving the computational efficiency of implemented systems. We describe how these problems are addressed by basing Ontolingua itself on an ontology of domain-independent, representational idioms."
]
} |
1503.01436 | 2279786479 | We study the problem of supervised learning for both binary and multiclass classification from a unified geometric perspective. In particular, we propose a geometric regularization technique to find the submanifold corresponding to a robust estimator of the class probability @math . The regularization term measures the volume of this submanifold, based on the intuition that overfitting produces rapid local oscillations and hence large volume of the estimator. This technique can be applied to regularize any classification function that satisfies two requirements: firstly, an estimator of the class probability can be obtained; secondly, first and second derivatives of the class probability estimator can be calculated. In experiments, we apply our regularization technique to standard loss functions for classification, our RBF-based implementation compares favorably to widely used regularization methods for both binary and multiclass classification. | Our training procedure for finding the optimal graph of a function is, in a general sense, also related to the manifold learning problem @cite_8 @cite_20 @cite_1 @cite_21 @cite_12 @cite_2 . The most closely related work is @cite_21 , which seeks a flat submanifold of Euclidean space that contains a dataset. Again, there are key differences. Since the goal of @cite_21 is dimensionality reduction, their manifold has high codimension, while our functional graph has codimension @math , which may be as low as @math . More importantly, we do not assume that the graph of our target function is a flat (or volume minimizing) submanifold, and we instead flow towards a function whose graph is as flat (or volume minimizing) as possible. In this regard, our work is related to a large body of literature on Morse theory in finite and infinite dimensions, and on mean curvature flow @cite_0 @cite_28 . | {
"cite_N": [
"@cite_8",
"@cite_28",
"@cite_21",
"@cite_1",
"@cite_0",
"@cite_2",
"@cite_20",
"@cite_12"
],
"mid": [
"2001141328",
"",
"2156838815",
"2097308346",
"",
"2125003829",
"2053186076",
"2077776048"
],
"abstract": [
"Scientists working with large volumes of high-dimensional data, such as global climate patterns, stellar spectra, or human gene distributions, regularly confront the problem of dimensionality reduction: finding meaningful low-dimensional structures hidden in their high-dimensional observations. The human brain confronts the same problem in everyday perception, extracting from its high-dimensional sensory inputs—30,000 auditory nerve fibers or 10^6 optic nerve fibers—a manageably small number of perceptually relevant features. Here we describe an approach to solving dimensionality reduction problems that uses easily measured local metric information to learn the underlying global geometry of a data set. Unlike classical techniques such as principal component analysis (PCA) and multidimensional scaling (MDS), our approach is capable of discovering the nonlinear degrees of freedom that underlie complex natural observations, such as human handwriting or images of a face under different viewing conditions. In contrast to previous algorithms for nonlinear dimensionality reduction, ours efficiently computes a globally optimal solution, and, for an important class of data manifolds, is guaranteed to converge asymptotically to the true structure.",
"",
"We describe a method for recovering the underlying parametrization of scattered data (m_i) lying on a manifold M embedded in high-dimensional Euclidean space. The method, Hessian-based locally linear embedding, derives from a conceptual framework of local isometry in which the manifold M, viewed as a Riemannian submanifold of the ambient Euclidean space ℝ^n, is locally isometric to an open, connected subset Θ of Euclidean space ℝ^d. Because Θ does not have to be convex, this framework is able to handle a significantly wider class of situations than the original ISOMAP algorithm. The theoretical framework revolves around a quadratic form ℋ(f) = ∫_M ∥H_f(m)∥²_F dm defined on functions f : M ↦ ℝ. Here H_f denotes the Hessian of f, and ℋ(f) averages the Frobenius norm of the Hessian over M. To define the Hessian, we use orthogonal coordinates on the tangent planes of M. The key observation is that, if M truly is locally isometric to an open, connected subset of ℝ^d, then ℋ(f) has a (d + 1)-dimensional null space consisting of the constant functions and a d-dimensional space of functions spanned by the original isometric coordinates. Hence, the isometric coordinates can be recovered up to a linear isometry. Our method may be viewed as a modification of locally linear embedding and our theoretical framework as a modification of the Laplacian eigenmaps framework, where we substitute a quadratic form based on the Hessian in place of one based on the Laplacian.",
"One of the central problems in machine learning and pattern recognition is to develop appropriate representations for complex data. We consider the problem of constructing a representation for data lying on a low-dimensional manifold embedded in a high-dimensional space. Drawing on the correspondence between the graph Laplacian, the Laplace Beltrami operator on the manifold, and the connections to the heat equation, we propose a geometrically motivated algorithm for representing the high-dimensional data. The algorithm provides a computationally efficient approach to nonlinear dimensionality reduction that has locality-preserving properties and a natural connection to clustering. Some potential applications and illustrative examples are discussed.",
"",
"Recently, manifold learning has been widely exploited in pattern recognition, data analysis, and machine learning. This paper presents a novel framework, called Riemannian manifold learning (RML), based on the assumption that the input high-dimensional data lie on an intrinsically low-dimensional Riemannian manifold. The main idea is to formulate the dimensionality reduction problem as a classical problem in Riemannian geometry, that is, how to construct coordinate charts for a given Riemannian manifold? We implement the Riemannian normal coordinate chart, which has been the most widely used in Riemannian geometry, for a set of unorganized data points. First, two input parameters (the neighborhood size k and the intrinsic dimension d) are estimated based on an efficient simplicial reconstruction of the underlying manifold. Then, the normal coordinates are computed to map the input high-dimensional data into a low- dimensional space. Experiments on synthetic data, as well as real-world images, demonstrate that our algorithm can learn intrinsic geometric structures of the data, preserve radial geodesic distances, and yield regular embeddings.",
"Many areas of science depend on exploratory data analysis and visualization. The need to analyze large amounts of multivariate data raises the fundamental problem of dimensionality reduction: how to discover compact representations of high-dimensional data. Here, we introduce locally linear embedding (LLE), an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs. Unlike clustering methods for local dimensionality reduction, LLE maps its inputs into a single global coordinate system of lower dimensionality, and its optimizations do not involve local minima. By exploiting the local symmetries of linear reconstructions, LLE is able to learn the global structure of nonlinear manifolds, such as those generated by images of faces or documents of text. How do we judge similarity? Our mental representations of the world are formed by processing large numbers of sensory in",
"We present a new algorithm for manifold learning and nonlinear dimensionality reduction. Based on a set of unorganized data points sampled with noise from a parameterized manifold, the local geometry of the manifold is learned by constructing an approximation for the tangent space at each data point, and those tangent spaces are then aligned to give the global coordinates of the data points with respect to the underlying manifold. We also present an error analysis of our algorithm showing that reconstruction errors can be quite small in some cases. We illustrate our algorithm using curves and surfaces both in two-dimensional three-dimensional (2D 3D) Euclidean spaces and in higher-dimensional Euclidean spaces. We also address several theoretical and algorithmic issues for further research and improvements."
]
} |
1503.01514 | 2949138836 | Internet services are traditionally priced at flat rates; however, many Internet service providers (ISPs) have recently shifted towards two-part tariffs where a data cap is imposed to restrain data demand from heavy users. Although the two-part tariff could generally increase the revenue for ISPs and has been supported by the US FCC, the role of the data cap and its optimal pricing structures are not well understood. In this article, we study the impact of the data cap on the optimal two-part pricing schemes for congestion-prone service markets. We model users' demand and preferences over pricing and congestion alternatives and derive the market share and congestion of service providers under a market equilibrium. Based on the equilibrium model, we characterize the two-part structures of the revenue- and welfare-optimal pricing schemes. Our results reveal that 1) the data cap provides a mechanism for ISPs to transition from the flat-rate to pay-as-you-go type of schemes, 2) both the revenue and welfare objectives of the ISP will drive the optimal pricing towards usage-based schemes with diminishing data caps, and 3) the welfare-optimal tariff comprises lower fees than the revenue-optimal counterpart, suggesting that regulators might want to promote usage-based pricing but regulate the lump-sum and per-unit fees. | Early studies of two-part tariffs came from economics. Oi @cite_4 first studied price discrimination via quantity discounts in a monopoly Disneyland market. Calem @cite_7 examined and compared the revenue-optimal prices set by a multi-product monopoly and a differentiated oligopoly. Littlechild @cite_25 studied explicit characterizations of welfare-optimal pricing and the effect of consumption externalities. Scotchmer @cite_2 explored the nature of Nash equilibrium among profit-maximizing shared facilities. However, all of these works were confined to the special case where the data usage cap is set to zero. 
Our work studies the impact and dynamics of the data cap on two-part pricing, generalizing this special case. | {
"cite_N": [
"@cite_4",
"@cite_25",
"@cite_7",
"@cite_2"
],
"mid": [
"1983562450",
"2010572991",
"2032267815",
"2077476645"
],
"abstract": [
"I. Two-part tariffs and a discriminating monopoly, 78.—II. Determination of a uniform two-part tariff, 81.—III. Applications of two-part tariffs, 88.—Appendix: Mathematical derivation of a uniform two-part tariff, 93.",
"This paper provides explicit characteristics of those two-part tariffs which maximize profit and consumers' plus producer's surplus. The effect of consumption externalities (as in telecommunications systems) is then explored. The characterizations are in terms of elasticities of demand with respect to price, income, and the number of other customers in the system.",
"Abstract Two part pricing by a multiproduct monopoly and a differentiated oligopoly are examined and compared. Two part pricing policies are seen to depend on whether products are complements or substitutes and on whether or not the market is segmented. A principle result is that although competition tends to lower unit prices, there is no corresponding tendency for competition to reduce entry fees. The unit pricing rule is related to the Ramsey pricing rule. Oligopoly equilibrium unit prices equal marginal cost when there is one consumer type.",
"We explore how well the market will provide shared facilities which are subject to congestion. It is usually efficient to have multiple facilities because it is more efficient to spend resources on facilities than to endure crowding costs. We assume firms can charge a membership price and a visit price. We present a symmetric Nash equilibrium in these two prices. We show that if a number of firms is large, the membership price will be small. Thus, the membership price is a measure of market power. When entry occurs in response to positive profit (but such that entry is deterred by the prospect of negative profit in a symmetric Nash equilibrium), the endogenous number of firms is bounded below by one fewer than the efficient number. The fees paid by a client converge to an appropriately defined competitive price as the economy is replicated."
]
} |
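As an illustrative aside to the row above (not part of the dataset itself): the two-part tariff with a data cap that paper 1503.01514 studies charges a lump-sum fee plus a per-unit fee on usage beyond the cap. The sketch below uses hypothetical parameter names, not notation from the paper; it only shows how the cap interpolates between the flat-rate and pay-as-you-go regimes the abstract mentions.

```python
def two_part_tariff(usage, lump_sum, per_unit, cap):
    """Payment under a two-part tariff with a data cap.

    The subscriber pays a flat lump-sum fee, plus a per-unit fee on
    any usage beyond the cap. An infinite cap reduces to a pure flat
    rate; a zero cap reduces to the classical two-part tariff (Oi's
    setting), where every unit is billed on top of the entry fee.
    """
    overage = max(0.0, usage - cap)
    return lump_sum + per_unit * overage

# Flat rate: with an unbounded cap, no usage incurs per-unit charges.
assert two_part_tariff(50.0, 30.0, 2.0, float("inf")) == 30.0
# Zero cap: all 5 units are billed at 2.0 each on top of the 30.0 fee.
assert two_part_tariff(5.0, 30.0, 2.0, 0.0) == 40.0
```

Shrinking the cap between those two extremes shifts revenue from the lump-sum fee to usage-based charges, which is the transition mechanism the abstract describes.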