Columns: aid (string, 9-15 chars) | mid (string, 7-10 chars) | abstract (string, 78-2.56k chars) | related_work (string, 92-1.77k chars) | ref_abstract (dict)
1402.4010
2130531711
Geographical information systems are ideal candidates for the application of parallel programming techniques, mainly because they usually handle large data sets. To help us deal with complex calculations over such data sets, we investigated the performance constraints of a classic master–worker parallel paradigm over a message-passing communication model. To this end, we present a new approach that employs an external database in order to improve the calculation–communication overlap, thus reducing the idle times for the worker processes. The presented approach is implemented as part of a parallel radio-coverage prediction tool for the Geographic Resources Analysis Support System (GRASS) environment. The prediction calculation employs digital elevation models and land-usage data in order to analyze the radio coverage of a geographical area. We provide an extended analysis of the experimental results, which are based on real data from a Long Term Evolution (LTE) network currently deployed in Slovenia. Based on the results of the experiments, which were performed on a computer cluster, the new approach exhibits better scalability than the traditional master–worker approach. We successfully tackled real-world-sized data sets, while greatly reducing the processing time and saturating the hardware utilization.
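The master–worker pattern over message passing described above can be sketched as follows. This is an illustrative stand-in only: Python threads and queues take the place of MPI processes, and squaring a list takes the place of the radio-coverage calculation; none of these names come from the tool itself.

```python
import threading
import queue

def worker(tasks: "queue.Queue", results: "queue.Queue") -> None:
    # Each worker repeatedly pulls a block, processes it, and sends the
    # result back; it idles whenever the master cannot feed it fast enough,
    # which is the bottleneck the database-backed variant targets.
    while True:
        item = tasks.get()
        if item is None:            # sentinel: no more work
            break
        idx, block = item
        results.put((idx, [x * x for x in block]))  # stand-in computation

def master(blocks, n_workers=4):
    tasks, results = queue.Queue(), queue.Queue()
    threads = [threading.Thread(target=worker, args=(tasks, results))
               for _ in range(n_workers)]
    for t in threads:
        t.start()
    for i, b in enumerate(blocks):  # distribute indexed blocks
        tasks.put((i, b))
    for _ in threads:               # one sentinel per worker
        tasks.put(None)
    out = dict(results.get() for _ in blocks)
    for t in threads:
        t.join()
    return [out[i] for i in range(len(blocks))]
```

Indexing each block lets the master reassemble results in order even though workers finish out of order.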
Also, in @cite_35 there is no calculation dependency among the spatial blocks. The experimental evaluation is carried out on multiple cores of one CPU and a GPU, coordinated via a master-worker setup.
{ "cite_N": [ "@cite_35" ], "mid": [ "2171863331" ], "abstract": [ "This paper presents a new Geographic Information Systems (GIS) tool to compute the optimal solar-panel positioning maps on large high-resolution Digital Elevation Models (DEMs). In particular, this software finds out (1) the maximum solar energy input that can be captured on a surface located at a specific height on each point of the DEM, and then (2) the optimal tilt and orientation that allow capturing this amount of energy. The radiation and horizon algorithms we developed in previous works were used as baseline for this tool ( in Comput. Phys. Commun. 178(11):800---808, 2008; in Int. J. Geogr. Inf. Sci. 25(4):541---555, 2011). A multi-method approach is analyzed to make the hybrid implementation of this tool especially appropriate for heterogeneous multicore-GPU architectures. The experimental results show a high numerical accuracy with a linear scalability." ] }
In @cite_21 , the authors present a parallel framework for GIS integration. Based on the principle of spatial dependency, they lower the calculation processing time by backing it with a knowledge database, delegating the heavy calculation load to the parallel back-end only when a specific problem instance is not found in the database. Achieving these goals requires additional effort, since a fully functional GIS (or ``thick GIS'', as the authors call it) must be implemented both on the desktop client and in the parallel environment.
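The knowledge-database idea, dispatching to the parallel back-end only when a problem instance is missing from the store, can be sketched as a cache-aside lookup. The function names and the use of a plain dict are illustrative assumptions, not the authors' API:

```python
def solve(instance, knowledge_db: dict, parallel_backend):
    """Return a stored solution if available, else compute and store one."""
    key = repr(instance)                 # canonical key for the instance
    if key in knowledge_db:              # hit: reuse the stored solution
        return knowledge_db[key]
    result = parallel_backend(instance)  # miss: dispatch heavy computation
    knowledge_db[key] = result           # remember it for future queries
    return result
```

Repeated queries for the same instance then never touch the parallel environment again.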
{ "cite_N": [ "@cite_21" ], "mid": [ "1997875301" ], "abstract": [ "Complex spatial control problems can be computationally intensive. Timely response in urgent spatial control situations such as wildfire control poses great challenges for the efficient solving of spatial control problems. Web-based and service-oriented architectures of integrating geographic information system (GIS) clients and parallel computing resources have been suggested as an effective paradigm to solve computationally intensive spatial problems. Such real-time coupling framework is highly dependent upon interactivity and on-demand availability of dedicated parallel computing resources appropriate for the problem. We present an approach to enhancing the efficiency of solving spatial control problems while offering another coupling framework of integrating computing resources from desktop GIS and parallel computing environments to alleviate such dependency. Specifically, a model knowledge database is developed to bridge the gap between desktop GIS models and parallel computing resources. Desktop GIS..." ] }
An agent-based approach for simulating spatial interactions is presented in @cite_44 . The authors decompose the entire landscape into equally-sized regions, i.e., a spatial-block division as in @cite_5 , each of which is processed by a different core of a multi-core CPU. This work uses multi-core CPUs instead of a computing cluster.
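The equally-sized spatial-block decomposition can be sketched as below. The block size, the per-block computation (a plain sum), and the use of a thread pool in place of dedicated CPU cores are all illustrative assumptions, not details from @cite_44 :

```python
from concurrent.futures import ThreadPoolExecutor

def split_blocks(grid, block):
    """Partition a square 2D grid (list of rows) into block x block sub-grids."""
    n = len(grid)
    return [[row[c:c + block] for row in grid[r:r + block]]
            for r in range(0, n, block)
            for c in range(0, n, block)]

def process_block(b):
    # Stand-in for the per-region agent interactions; each call could be
    # assigned to its own core.
    return sum(sum(row) for row in b)

def run(grid, block=2, workers=2):
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return list(ex.map(process_block, split_blocks(grid, block)))
```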
{ "cite_N": [ "@cite_44", "@cite_5" ], "mid": [ "2061076260", "1967371417" ], "abstract": [ "The computational approach of agent-based models ABMs supports the representation of interactions among spatially situated individuals as a decentralized process giving rise to space–time complexity in geographic systems. To cope with the computational complexity of these models, this article proposes a parallel approach that leverages the power of multicore systems, as these architectures have quickly become ubiquitous in high-performance and desktop computing. An ABM of individual-level spatial interaction that simulates information exchange, spatial diffusion of opinion development, and consensus building among decision makers is proposed to demonstrate the advantages of the parallel approach against its sequential counterpart. This study focuses on two key spatial properties of the interaction system of interest, the extent and range of interaction, and examines their influence on the computing performance of the proposed parallel model and the performance scalability of the model as more computing resources are added. Significant influence from these two properties is found and can be attributed to three possible sources of effects, namely the model level, the parallelization level, and the platform level. It is suggested that these effects should be taken into consideration when leveraging multicore computing resources for the development of parallel ABMs.", "This work presents a high-performance algorithm to compute the horizon in very large high-resolution DEMs. We used Stewart's algorithm as the core of our implementation and considered that the horizon has three components: the ground, near, and far horizons. To eliminate the edge-effect, we introduced a multi-resolution halo method. Moreover, we used a new data partition approach, to substantially increase the parallelism in the algorithm. 
In addition, several optimizations have been applied to considerably reduce the number of arithmetical operations in the core of the algorithm. The experimental results have demonstrated that by applying the above-described contributions, the proposed algorithm is more than twice faster than Stewart's algorithm while maintaining the same accuracy." ] }
Some years ago, grid computing received the attention of the research community as a way of accessing the extra computational power needed for the spatial analysis of large data sets @cite_36 @cite_43 @cite_42 . However, several obstacles still prevent this technology from being more widely used: its adoption requires not only hardware and software compromises from the parties involved, but also a behavioral change at the human level @cite_36 .
{ "cite_N": [ "@cite_36", "@cite_43", "@cite_42" ], "mid": [ "2148988158", "2177802445", "2015987697" ], "abstract": [ "High performance computing has undergone a radical transformation during the past decade. Though monolithic supercomputers continue to be built with significantly increased computing power, geographically distributed computing resources are now routinely linked using high-speed networks to address a broad range of computationally complex problems. These confederated resources are referred to collectively as a computational Grid. Many geographical problems exhibit characteristics that make them candidates for this new model of computing. As an illustration, we describe a spatial statistics problem and demonstrate how it can be addressed using Grid computing strategies. A key element of this application is the development of middleware that handles domain decomposition and coordinates computational functions. We also discuss the development of Grid portals that are designed to help researchers and decision makers access and use geographic information analysis tools.", "\"Cloud\" computing – a relatively recent term, builds on decades of research in virtualization, distributed computing, utility computing, and more recently networking, web and software services. It implies a service oriented architecture, reduced information technology overhead for the end-user, great flexibility, reduced total cost of ownership, on-demand services and many other things. This paper discusses the concept of “cloud” computing, some of the issues it tries to address, related research topics, and a “cloud” implementation available today.", "Cyberinfrastructure (CI) integrates distributed information and communication technologies for coordinated knowledge discovery. The purpose of this article is to develop a CyberGIS framework for the synthesis of CI, geographic information systems (GIS), and spatial analysis (broadly including spatial modeling). 
This framework focuses on enabling computationally intensive and collaborative geographic problem solving. The article describes new trends in the development and use of CyberGIS while illustrating particular CyberGIS components. Spatial middleware glues CyberGIS components and corresponding services while managing the complexity of generic CI middleware. Spatial middleware, tailored to GIS and spatial analysis, is developed to capture important spatial characteristics of problems through the spatially explicit representation of computing, data, and communication intensity (collectively termed computational intensity), which enables GIS and spatial analysis to locate, allocate, and use CI resources..." ] }
1402.3401
2950281744
Internet censorship is enforced by numerous governments worldwide; however, due to the lack of publicly available information, as well as the inherent risks of performing active measurements, it is often hard for the research community to investigate censorship practices in the wild. Thus, the leak of 600GB worth of logs from 7 Blue Coat SG-9000 proxies, deployed in Syria to filter Internet traffic at a country scale, represents a unique opportunity to provide a detailed snapshot of a real-world censorship ecosystem. This paper presents the methodology and the results of a measurement analysis of the leaked Blue Coat logs, revealing a relatively stealthy, yet quite targeted, censorship. We find that traffic is filtered in several ways: using IP addresses and domain names to block subnets or websites, and keywords or categories to target specific content. We show that keyword-based censorship produces some collateral damage as many requests are blocked even if they do not relate to sensitive content. We also discover that Instant Messaging is heavily censored, while filtering of social media is limited to specific pages. Finally, we show that Syrian users try to evade censorship by using web/SOCKS proxies, Tor, VPNs, and BitTorrent. To the best of our knowledge, our work provides the first analytical look into Internet filtering in Syria.
The authors of @cite_14 present measurements from an Iranian ISP, analyzing HTTP host-based blocking, keyword filtering, DNS hijacking, and protocol-based throttling, and conclude that the censorship infrastructure relies heavily on centralized equipment. Winter and Lindskog @cite_20 conduct measurements on traffic routed through Tor bridge relays to understand how China blocks Tor. Also, @cite_12 analyze country-wide Internet outages in Egypt and Libya, using publicly available data such as BGP inter-domain routing control-plane data.
{ "cite_N": [ "@cite_14", "@cite_12", "@cite_20" ], "mid": [ "2186594794", "2167994609", "2111695118" ], "abstract": [ "The Iranian government operates one of the largest and most sophisticated Internet censorship regimes in the world, but the mechanisms it employs have received little research attention, primarily due to lack of access to network connections within the country and personal risks to Iranian citizens who take part. In this paper, we examine the status of Internet censorship in Iran based on network measurements conducted from a major Iranian ISP during the lead up to the June 2013 presidential election. We measure the scope of the censorship by probing Alexa’s top 500 websites in 18 different categories. We investigate the technical mechanisms used for HTTP Host‐based blocking, keyword filtering, DNS hijacking, and protocol-based throttling. Finally, we map the network topology of the censorship infrastructure and find evidence that it relies heavily on centralized equipment, a property that might be fruitfully exploited by next generation approaches to censorship circumvention.", "In the first months of 2011, Internet communications were disrupted in several North African countries in response to civilian protests and threats of civil war. In this paper we analyze episodes of these disruptions in two countries: Egypt and Libya. Our analysis relies on multiple sources of large-scale data already available to academic researchers: BGP interdomain routing control plane data; unsolicited data plane traffic to unassigned address space; active macroscopic traceroute measurements; RIR delegation files; and MaxMind's geolocation database. We used the latter two data sets to determine which IP address ranges were allocated to entities within each country, and then mapped these IP addresses of interest to BGP-announced address ranges (prefixes) and origin ASes using publicly available BGP data repositories in the U.S. and Europe. 
We then analyzed observable activity related to these sets of prefixes and ASes throughout the censorship episodes. Using both control plane and data plane data sets in combination allowed us to narrow down which forms of Internet access disruption were implemented in a given region over time. Among other insights, we detected what we believe were Libya's attempts to test firewall-based blocking before they executed more aggressive BGP-based disconnection. Our methodology could be used, and automated, to detect outages or similar macroscopically disruptive events in other geographic or topological regions.", "Internet censorship in China is not just limited to the web: the Great Firewall of China prevents thousands of potential Tor users from accessing the network. In this paper, we investigate how the ..." ] }
Nabi @cite_10 uses a publicly available list of blocked websites in Pakistan, checking their accessibility from multiple networks within the country. The results indicate that censorship varies across websites: some are blocked at the DNS level, others at the HTTP level. Furthermore, Verkamp and Gupta @cite_9 detect censorship technologies in 11 countries, mostly using PlanetLab nodes, and discover both DNS-based and router-based filtering. @cite_17 propose an architecture for maintaining a censorship ``weather report'' about what keywords are filtered over time, while @cite_6 provide an overview of research on censorship-resistant systems and a taxonomy of anti-censorship technologies.
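The DNS-level versus HTTP-level distinction drawn by these studies can be illustrated with a toy classifier over observed probe outcomes. The outcome fields, sentinel addresses, and status codes here are hypothetical simplifications for illustration, not the actual probes used in @cite_10 or @cite_9 :

```python
def classify_block(dns_answer, http_status, body):
    """Classify a blocking mechanism from one (simplified) probe observation."""
    # DNS-level: resolution fails outright or is hijacked to a sentinel address.
    if dns_answer is None or dns_answer in {"0.0.0.0", "127.0.0.1"}:
        return "dns-level"
    # HTTP-level: resolution works, but an in-path filter rejects the request
    # or injects a block page.
    if http_status in (403, 451) or (body and "blocked" in body.lower()):
        return "http-level"
    return "accessible"
```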
{ "cite_N": [ "@cite_9", "@cite_10", "@cite_6", "@cite_17" ], "mid": [ "", "1830741683", "2149920828", "2126053700" ], "abstract": [ "", "Over the years, the Internet has democratized the flow of information. Unfortunately, in parallel, authoritarian regimes and other entities (such as ISPs) for their vested interests have curtailed this flow by partially or fully censoring the web. The policy, mechanism, and extent of this censorship varies from country to country. We present the first study of the cause, effect, and mechanism of web censorship in Pakistan. Specifically, we use a publicly available list of blocked websites and check their accessibility from multiple networks within the country. Our results indicate that the censorship mechanism varies across websites: some are blocked at the DNS level while others at the HTTP level. Interestingly, the government shifted to a centralized, Internet exchange level censorship system during the course of our study, enabling our findings to compare two generations of blocking systems. Furthermore, we report the outcome of a controlled survey to ascertain the mechanisms that are being actively employed by people to circumvent censorship. Finally, we discuss some simple but surprisingly unexplored methods of bypassing restrictions.", "With Twitter and Facebook blocked in China, the stream of information from Chinese domestic social media provides a case study of social media behavior under the influence of active censorship. While much work has looked at efforts to prevent access to information in China (including IP blocking of foreign Web sites or search engine filtering), we present here the first large–scale analysis of political content censorship in social media, i.e. , the active deletion of messages published by individuals. 
In a statistical analysis of 56 million messages (212,583 of which have been deleted out of 1.3 million checked, more than 16 percent) from the domestic Chinese microblog site Sina Weibo, and 11 million Chinese–language messages from Twitter, we uncover a set of politically sensitive terms whose presence in a message leads to anomalously higher rates of deletion. We also note that the rate of message deletion is not uniform throughout the country, with messages originating in the outlying provinces of Tibet and Qinghai exhibiting much higher deletion rates than those from eastern areas like Beijing.", "The text of this paper has passed across many Internet routers on its way to the reader, but some routers will not pass it along unfettered because of censored words it contains. We present two sets of results: 1) Internet measurements of keyword filtering by the Great “Firewall” of China (GFC); and 2) initial results of using latent semantic analysis as an efficient way to reproduce a blacklist of censored words via probing. Our Internet measurements suggest that the GFC’s keyword filtering is more a panopticon than a firewall, i.e., it need not block every illicit word, but only enough to promote self-censorship. China’s largest ISP, ChinaNET, performed 83.3% of all filtering of our probes, and 99.1% of all filtering that occurred at the first hop past the Chinese border. Filtering occurred beyond the third hop for 11.8% of our probes, and there were sometimes as many as 13 hops past the border to a filtering router. Approximately 28.3% of the Chinese hosts we sent probes to were reachable along paths that were not filtered at all. While more tests are needed to provide a definitive picture of the GFC’s implementation, our results disprove the notion that GFC keyword filtering is a firewall strictly at the border of China’s Internet. While evading a firewall a single time defeats its purpose, it would be necessary to evade a panopticon almost every time. 
Thus, in lieu of evasion, we propose ConceptDoppler, an architecture for maintaining a censorship “weather report” about what keywords are filtered over time. Probing with potentially filtered keywords is arduous due to the GFC’s complexity and can be invasive if not done efficiently. Just as an understanding of the mixing of gases preceded effective weather reporting, understanding of the relationship between keywords and concepts is essential for tracking Internet censorship. We show that LSA can effectively pare down a corpus of text and cluster filtered keywords for efficient probing, present 122 keywords we discovered by probing, and underscore the need for tracking and studying censorship blacklists by discovering some surprising blacklisted keywords, such as the Chinese terms for “conversion rate”, “Mein Kampf”, and “International geological scientific federation (Beijing)”." ] }
Also, @cite_4 obtain a built-in list of censored keywords in China's TOM-Skype and run experiments to understand how the filtering operates, while @cite_7 devise a system to locate, download, and analyze the content of millions of Chinese social media posts before the Chinese government censors them. Park and Crandall @cite_1 present results from measurements of the filtering of HTTP HTML responses in China, which is based on string matching and TCP reset injection by backbone-level routers. Finally, @cite_13 explore the AS-level topology of China's network infrastructure and probe the firewall to find the locations of filtering devices, finding that, even though most filtering occurs in border ASes, choke points also exist in many provincial networks.
{ "cite_N": [ "@cite_1", "@cite_4", "@cite_13", "@cite_7" ], "mid": [ "2139387262", "2403415707", "", "2144188763" ], "abstract": [ "We present results from measurements of the filtering of HTTP HTML responses in China, which is based on string matching and TCP reset injection by backbone-level routers. This system, intended mainly for Internet censorship, is a national-scale filter based on intrusion detection system (IDS) technologies. Our results indicate that the Chinese censors discontinued this HTML response filtering for the majority of routes some time between August 2008 and January 2009 (other forms of censorship, including backbone-level GET request filtering, are still in place). In this paper, we give evidence to show that the distributed nature of this filtering system and the problems inherent to distributed filtering are likely among the reasons it was discontinued, in addition to potential traffic load problems. When the censor successfully detected a keyword in our measurements and attempted to reset the connection, their attempt to reset the connection was successful less than 51% of the time, due to late or out-of-sequence resets. In addition to shedding light on why HTML response filtering may have been discontinued by the censors, we document potential sources of uncertainty, which are due to routing and protocol dynamics, that could affect measurements of any form of censorship in any country. Between a single client IP address in China and several contiguous server IP addresses outside China, measurement results can be radically different. This is probably due to either traffic engineering or one node from a bank of IDS systems being chosen based on source IP address. Our data provides a unique opportunity to study a national-scale, distributed filtering system.", "We present an empirical analysis of TOM-Skype censorship and surveillance. 
TOM-Skype is an Internet telephony and chat program that is a joint venture between TOM Online (a mobile Internet company in China) and Skype Limited. TOM-Skype contains both voice-over-IP functionality and a chat client. The censorship and surveillance that we studied for this paper is specific to the chat client and is based on keywords that a user might type into a chat session. We were able to decrypt keyword lists used for censorship and surveillance. We also tracked the lists for a period of time and witnessed changes. Censored keywords range from obscene references, such as the Chinese phrase for “two girls one cup” (the motivation for our title), to specific passages from 2011 China Jasmine Revolution protest instructions, such as the Chinese for “McDonald’s in front of Chunxi Road in Chengdu”. Surveillance keywords are mostly related to demolitions in Beijing, such as the Chinese for “Ling Jing Alley demolition”. Based on this data, we present five conjectures that we believe to be formal enough to be hypotheses that the Internet censorship research community could potentially answer with more data and appropriate computational and analytic techniques.", "", "We offer the first large scale, multiple source analysis of the outcome of what may be the most extensive effort to selectively censor human expression ever implemented. To do this, we have devised a system to locate, download, and analyze the content of millions of social media posts originating from nearly 1,400 different social media services all over China before the Chinese government is able to find, evaluate, and censor (i.e., remove from the Internet) the subset they deem objectionable. Using modern computer-assisted text analytic methods that we adapt to and validate in the Chinese language, we compare the substantive content of posts censored to those not censored over time in each of 85 topic areas. 
Contrary to previous understandings, posts with negative, even vitriolic, criticism of the state, its leaders, and its policies are not more likely to be censored. Instead, we show that the censorship program is aimed at curtailing collective action by silencing comments that represent, reinforce, or spur social mobilization, regardless of content. Censorship is oriented toward attempting to forestall collective activities that are occurring now or may occur in the future—and, as such, seem to clearly expose government intent." ] }
1402.3452
2949604038
We study the complexity of algorithmic problems for matrices that are represented by multi-terminal decision diagrams (MTDD). These are a variant of ordered decision diagrams, where the terminal nodes are labeled with arbitrary elements of a semiring (instead of 0 and 1). A simple example shows that the product of two MTDD-represented matrices cannot be represented by an MTDD of polynomial size. To overcome this deficiency, we extended MTDDs to MTDD_+ by allowing componentwise symbolic addition of variables (of the same dimension) in rules. It is shown that accessing an entry, equality checking, matrix multiplication, and other basic matrix operations can be solved in polynomial time for MTDD_+-represented matrices. On the other hand, testing whether the determinant of an MTDD-represented matrix vanishes is PSPACE-complete, and the same problem is NP-complete for MTDD_+-represented diagonal matrices. Computing a specific entry in a product of MTDD-represented matrices is #P-complete.
MTDDs are also a special case of 2-dimensional straight-line programs (SLPs). A (1-dimensional) SLP is a context-free grammar in Chomsky normal form that generates exactly one string. An SLP with @math rules can generate a string of length @math ; therefore an SLP can be seen as a succinct representation of the string it generates. Algorithmic problems that can be solved efficiently (in polynomial time) on SLP-represented strings are, for instance, equality checking (first shown by Plandowski @cite_29) and pattern matching; see @cite_15 for a survey.
{ "cite_N": [ "@cite_29", "@cite_15" ], "mid": [ "1510722254", "2052653988" ], "abstract": [ "We present a polynomial time algorithm for testing if two morphisms are equal on every word of a context-free language. The input to the algorithm are a context-free grammar with constant size productions and two morphisms. The best previously known algorithm had exponential time complexity. Our algorithm can be also used to test in polynomial tiime whether or not n first elements of two sequences of words defined by recurrence formulae are the same. In particular, if the well known 2n conjecture for D0L sequences holds, the algorithm can test in polynomial time equivalence of two D0L sequences.", "Results on algorithmic problems on strings that are given in a compressed form via straightline programs are surveyed. A straight-line program is a context-free grammar that generates exactly one string. In this way, exponential compression rates can be achieved. Among others, we study pattern matching for compressed strings, membership problems for compressed strings in various kinds of formal languages, and the problem of querying compressed strings. Applications in combinatorial group theory and computational topology and to the solution of word equations are discussed as well. Finally, extensions to compressed trees and pictures are considered." ] }
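The exponential compression described above can be made concrete with a small sketch (the dict-based rule encoding below is a hypothetical illustration, not taken from the cited papers): an SLP is stored as a grammar whose every nonterminal derives exactly one string, and the length of the derived string is computed with memoization instead of expanding the string itself.

```python
# Minimal straight-line program (SLP) sketch: a context-free grammar in
# Chomsky normal form that derives exactly one string. The rule encoding
# (dict of terminal strings and binary pairs) is an illustrative assumption.

def slp_length(rules, symbol, memo=None):
    """Length of the string derived from `symbol`, computed without expanding it."""
    if memo is None:
        memo = {}
    if symbol in memo:
        return memo[symbol]
    rhs = rules[symbol]
    if isinstance(rhs, str):            # terminal rule: X -> "a"
        memo[symbol] = len(rhs)
    else:                               # binary rule: X -> Y Z
        left, right = rhs
        memo[symbol] = slp_length(rules, left, memo) + slp_length(rules, right, memo)
    return memo[symbol]

# n rules of the form X_i -> X_{i-1} X_{i-1} derive a string of length 2^n,
# which is why an SLP is a succinct representation of the string it generates.
n = 40
rules = {"X0": "a"}
for i in range(1, n + 1):
    rules[f"X{i}"] = (f"X{i-1}", f"X{i-1}")

print(slp_length(rules, f"X{n}"))   # prints 1099511627776, i.e. 2**40
```

Expanding the string explicitly would need a terabyte of memory; the memoized walk over the 41 rules is instantaneous, which is the intuition behind polynomial-time algorithms on SLP-compressed strings.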
1402.3452
2949604038
We study the complexity of algorithmic problems for matrices that are represented by multi-terminal decision diagrams (MTDD). These are a variant of ordered decision diagrams, where the terminal nodes are labeled with arbitrary elements of a semiring (instead of 0 and 1). A simple example shows that the product of two MTDD-represented matrices cannot be represented by an MTDD of polynomial size. To overcome this deficiency, we extended MTDDs to MTDD_+ by allowing componentwise symbolic addition of variables (of the same dimension) in rules. It is shown that accessing an entry, equality checking, matrix multiplication, and other basic matrix operations can be solved in polynomial time for MTDD_+-represented matrices. On the other hand, testing whether the determinant of an MTDD-represented matrix vanishes is PSPACE-complete, and the same problem is NP-complete for MTDD_+-represented diagonal matrices. Computing a specific entry in a product of MTDD-represented matrices is #P-complete.
Let us finally mention that straight-line programs are also used for the compact representation of other objects, e.g., polynomials @cite_34, trees @cite_31, graphs @cite_32, and regular languages @cite_16.
{ "cite_N": [ "@cite_31", "@cite_34", "@cite_32", "@cite_16" ], "mid": [ "2148149459", "2011758726", "2021380921", "2029153962" ], "abstract": [ "The complexity of various membership problems for tree automata on compressed trees is analyzed. Two compressed representations are considered: dags, which allow to share identical subtrees in a tree, and straight-line context-free tree grammars, which moreover allow to share identical intermediate parts in a tree. Several completeness results for the classes NL, P, and PSPACE are obtained. Finally, the complexity of the evaluation problem for (structural) XPath queries on trees that are compressed via straight-line context-free tree grammars is investigated.", "Let Q be any algebraic structure and consider the set of all total programs over Q using the instruction set z ← −1, z ← x + y, z ← x − y, z ← x * y, z ← x / y. (A program is total if no division by zero occurs during any computation.) Let the equivalence problem for this class be the problem of deciding for two given programs whether or not they compute the same function. The following results are proved: (1) If Q is an infinite field (e.g., the rational numbers or the complex numbers), then the equivalence problem is probabilistically decidable in polynomial time. The result also holds for programs with no division instructions and Q an infinite integral domain (e.g., the integers). (2) If Q is a finite field, or if Q is a finite set of integers of cardinality ≥ 2, then the equivalence problem is NP-hard. The case when the field Q is finite but its cardinality is a function of the size of the instance of the equivalence problem is also considered. An example is shown for which a sharp boundary between the classes NP-hard and probabilistically decidable exists (provided they are not identical classes).", "In [Le 82, Le 85, Le 86a, Le 86b] a hierarchical graph model is discussed that allows to exploit the hierarchical description of the graphs for the efficient solution of graph problems. 
The model is motivated by applications in CAD, and is based on a special form of a graph grammar. The above references contain polynomial time solutions for the hierarchical versions of many classical graph problems. However, there are also graph problems that cannot benefit from the succinctness of hierarchical description of the graphs.", "We consider two formalisms for representing regular languages: constant height pushdown automata and straight line programs for regular expressions. We constructively prove that their sizes are polynomially related. Comparing them with the sizes of finite state automata and regular expressions, we obtain optimal exponential and double exponential gaps, i.e., a more concise representation of regular languages." ] }
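The polynomial-time entry access claimed in the MTDD abstract above comes down to following a single root-to-leaf path of quadrant choices. A toy sketch (the dict-based rule encoding is an illustrative assumption, not the paper's formalism): a nonterminal of height h expands into four quadrant nonterminals of height h-1, and terminals hold semiring elements.

```python
# Sketch of entry access in an MTDD-represented 2^h x 2^h matrix.
# Each nonterminal maps to its four quadrants (top-left, top-right,
# bottom-left, bottom-right); terminals map to scalars. This encoding
# is a hypothetical illustration of the idea, not the cited formalism.

def mtdd_entry(rules, symbol, height, row, col):
    """Return entry (row, col) by descending one quadrant per level."""
    if height == 0:
        return rules[symbol]                # terminal: a semiring element
    half = 1 << (height - 1)
    quads = rules[symbol]                   # (TL, TR, BL, BR)
    q = (row >= half) * 2 + (col >= half)   # pick the quadrant containing (row, col)
    return mtdd_entry(rules, quads[q], height - 1, row % half, col % half)

# A 4x4 identity matrix with sharing: the all-zero 2x2 block "Z" and the
# 2x2 identity "I" are each defined once and reused -- such sharing is
# exactly what makes MTDDs succinct.
rules = {
    "z": 0, "o": 1,
    "Z": ("z", "z", "z", "z"),
    "I": ("o", "z", "z", "o"),
    "S": ("I", "Z", "Z", "I"),
}
print(mtdd_entry(rules, "S", 2, 3, 3))   # diagonal entry of the identity: 1
```

The lookup touches one rule per level, so its cost is linear in the MTDD height (logarithmic in the matrix dimension), matching the abstract's claim that entry access is efficient.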
1402.3305
196838101
Web applications frequently leverage resources made available by remote web servers. As resources are created, updated, deleted, or moved, these applications face challenges to remain in lockstep with the server's change dynamics. Several approaches exist to help meet this challenge for use cases where "good enough" synchronization is acceptable. But when strict resource coverage or low synchronization latency is required, commonly accepted Web-based solutions remain elusive. This paper details characteristics of an approach that aims at decreasing synchronization latency while maintaining desired levels of accuracy. The approach builds on pushing change notifications and pulling changed resources and it is explored with an experiment based on a DBpedia Live instance.
Synchronization approaches are based on a pull method, a push method, or a hybrid of both. The issue of which approach to use under which circumstances has been the subject of several research endeavors. For example, @cite_3 theoretically compare push and pull approaches for disseminating dynamic web data with an emphasis on a client's temporal coherency requirement. They introduce a method in which a scheduled pull is the default mechanism, but the server also has the capability to push changes if it foresees that a polling client would otherwise miss them. Another concept they discuss makes push the default and lets the server dynamically allocate push or pull channels to clients, depending on available resources. @cite_14 explored the boundary conditions under which it is optimal to push or pull feeds, depending on the frequency of reads and writes. Their work suggests that setting up a resource-aware synchronization implementation requires local decisions on a creator/consumer basis. In this sense, they show that the push method is preferable if the user's consumption frequency is greater than the event creation frequency, and pull otherwise.
{ "cite_N": [ "@cite_14", "@cite_3" ], "mid": [ "2117766443", "2169576055" ], "abstract": [ "Near real-time event streams are becoming a key feature of many popular web applications. Many web sites allow users to create a personalized feed by selecting one or more event streams they wish to follow. Examples include Twitter and Facebook, which allow a user to follow other users' activity, and iGoogle and My Yahoo, which allow users to follow selected RSS streams. How can we efficiently construct a web page showing the latest events from a user's feed? Constructing such a feed must be fast so the page loads quickly, yet reflects recent updates to the underlying event streams. The wide fanout of popular streams (those with many followers) and high skew (fanout and update rates vary widely) make it difficult to scale such applications. We associate feeds with consumers and event streams with producers. We demonstrate that the best performance results from selectively materializing each consumer's feed: events from high-rate producers are retrieved at query time, while events from lower-rate producers are materialized in advance. A formal analysis of the problem shows the surprising result that we can minimize global cost by making local decisions about each producer/consumer pair, based on the ratio between a given producer's update rate (how often an event is added to the stream) and a given consumer's view rate (how often the feed is viewed). Our experimental results, using Yahoo!'s web-scale database PNUTS, show that this hybrid strategy results in the lowest system load (and hence improves scalability) under a variety of workloads.", "An important issue in the dissemination of time-varying Web data such as sports scores and stock prices is the maintenance of temporal coherency. In the case of servers adhering to the HTTP protocol, clients need to frequently pull the data based on the dynamics of the data and a user's coherency requirements. 
In contrast, servers that possess push capability maintain state information pertaining to clients and push only those changes that are of interest to a user. These two canonical techniques have complementary properties with respect to the level of temporal coherency maintained, communication overheads, state space overheads, and loss of coherency due to (server) failures. In this paper, we show how to combine push and pull-based techniques to achieve the best features of both approaches. Our combined technique tailors the dissemination of data from servers to clients based on 1) the capabilities and load at servers and proxies and 2) clients' coherency requirements. Our experimental results demonstrate that such adaptive data dissemination is essential to meet diverse temporal coherency requirements, to be resilient to failures, and for the efficient and scalable utilization of server and network resources." ] }
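The per-pair boundary condition described above (push when the consumer's view rate exceeds the producer's update rate, pull otherwise) can be sketched as a purely local decision. The function name and the rate units below are illustrative assumptions, not an API from the cited papers.

```python
# Per producer/consumer choice between push and pull, following the rule
# described above: push if the feed is viewed more often than it is updated,
# pull otherwise. Names and rate units are illustrative assumptions.

def delivery_mode(update_rate: float, view_rate: float) -> str:
    """Choose 'push' when views outpace updates, else 'pull'.

    update_rate: events added to the stream per unit time (producer side)
    view_rate:   times the feed is viewed per unit time (consumer side)
    """
    return "push" if view_rate > update_rate else "pull"

# A chatty producer with a casual reader is cheaper to pull at view time;
# a quiet producer with an avid reader is cheaper to push eagerly.
print(delivery_mode(update_rate=100.0, view_rate=2.0))   # pull
print(delivery_mode(update_rate=0.1, view_rate=5.0))     # push
```

Because the decision depends only on one producer/consumer pair's two rates, a synchronization framework can mix both modes across its subscriptions without any global coordination, which is the scalability argument made above.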
1402.3305
196838101
Web applications frequently leverage resources made available by remote web servers. As resources are created, updated, deleted, or moved, these applications face challenges to remain in lockstep with the server's change dynamics. Several approaches exist to help meet this challenge for use cases where "good enough" synchronization is acceptable. But when strict resource coverage or low synchronization latency is required, commonly accepted Web-based solutions remain elusive. This paper details characteristics of an approach that aims at decreasing synchronization latency while maintaining desired levels of accuracy. The approach builds on pushing change notifications and pulling changed resources and it is explored with an experiment based on a DBpedia Live instance.
The Event Notification Protocol @cite_6 specifies requirements for change notifications in relation to WebDAV implementations. DSNotify, introduced by Popitsch and Haslhofer @cite_2, is a change detection and notification framework for Linked Datasets. @cite_20 introduced Silk, a linking framework based on the Web of Data Link Maintenance Protocol (http://www4.wiwiss.fu-berlin.de/bizer/silk/wodlmp). Ping the Semantic Web (http://pingthesemanticweb.com) offers change notification as a web service. All three systems are geared towards Linked Datasets and are not designed as generic resource synchronization frameworks. They further rely on aggregated baseline data being available in a central service, raising scalability concerns in light of an ever-expanding Linked Data cloud. Pingback and its extension Semantic Pingback @cite_17 both provide a lightweight notification approach. However, these approaches are based on subscriptions to individual resources, making them problematic for large resource collections. The HTTP-based Simple Update Protocol (SUP) (http://code.google.com/p/simpleupdateprotocol) and the UDP-based Simple Lightweight Announcement Protocol (SLAP) (http://joecascio.net/joecblog/2009/05/18/announcing-slap) provide conceptual mechanisms to notify about change events. However, both suffer from a lack of acceptance and reference implementations.
{ "cite_N": [ "@cite_20", "@cite_17", "@cite_6", "@cite_2" ], "mid": [ "1491268609", "1587260920", "", "1977095721" ], "abstract": [ "The Web of Data is built upon two simple ideas: Employ the RDF data model to publish structured data on the Web and to create explicit data links between entities within different data sources. This paper presents the Silk --- Linking Framework, a toolkit for discovering and maintaining data links between Web data sources. Silk consists of three components: 1. A link discovery engine, which computes links between data sources based on a declarative specification of the conditions that entities must fulfill in order to be interlinked; 2. A tool for evaluating the generated data links in order to fine-tune the linking specification; 3. A protocol for maintaining data links between continuously changing data sources. The protocol allows data sources to exchange both linksets as well as detailed change information and enables continuous link recomputation. The interplay of all the components is demonstrated within a life science use case.", "In this paper we tackle some pressing obstacles of the emerging Linked Data Web, namely the quality, timeliness and coherence of data, which are prerequisites in order to provide direct end user benefits. We present an approach for complementing the Linked Data Web with a social dimension by extending the well-known Pingback mechanism, which is a technological cornerstone of the blogosphere, towards a Semantic Pingback. It is based on the advertising of an RPC service for propagating typed RDF links between Data Web resources. Semantic Pingback is downwards compatible with conventional Pingback implementations, thus allowing to connect and interlink resources on the Social Web with resources on the Data Web. 
We demonstrate its usefulness by showcasing use cases of the Semantic Pingback implementations in the semantic wiki OntoWiki and the Linked Data interface for database-backed Web applications Triplify.", "", "The Web of Data has emerged as a way of exposing structured linked data on the Web. It builds on the central building blocks of the Web (URIs, HTTP) and benefits from its simplicity and wide-spread adoption. It does, however, also inherit the unresolved issues such as the broken link problem. Broken links constitute a major challenge for actors consuming Linked Data as they require them to deal with reduced accessibility of data. We believe that the broken link problem is a major threat to the whole Web of Data idea and that both Linked Data consumers and providers will require solutions that deal with this problem. Since no general solutions for fixing such links in the Web of Data have emerged, we make three contributions into this direction: first, we provide a concise definition of the broken link problem and a comprehensive analysis of existing approaches. Second, we present DSNotify, a generic framework able to assist human and machine actors in fixing broken links. It uses heuristic feature comparison and employs a time-interval-based blocking technique for the underlying instance matching problem. Third, we derived benchmark datasets from knowledge bases such as DBpedia and evaluated the effectiveness of our approach with respect to the broken link problem. Our results show the feasibility of a time-interval-based blocking approach for systems that aim at detecting and fixing broken links in the Web of Data." ] }
1402.3332
1862105587
In contrast to today's IP-based host-oriented Internet architecture, Information-Centric Networking (ICN) emphasizes content by making it directly addressable and routable. Named Data Networking (NDN) architecture is an instance of ICN that is being developed as a candidate next-generation Internet architecture. By opportunistically caching content within the network, NDN appears to be well-suited for large-scale content distribution and for meeting the needs of increasingly mobile and bandwidth-hungry applications that dominate today's Internet. One key feature of NDN is the requirement for each content object to be digitally signed by its producer. Thus, NDN should be, in principle, immune to distributing fake (aka "poisoned") content. However, in practice, this poses two challenges for detecting fake content in NDN routers: (1) overhead due to signature verification and certificate chain traversal, and (2) lack of trust context, i.e., determining which public keys are trusted to verify which content. Because of these issues, NDN does not force routers to verify content signatures, which makes the architecture susceptible to content poisoning attacks. This paper explores root causes of, and some cures for, content poisoning attacks in NDN. In the process, it becomes apparent that meaningful mitigation of content poisoning is contingent upon a network-layer trust management architecture, elements of which we construct, while carefully justifying specific design choices. This work represents the initial effort towards comprehensive trust management for NDN.
Some prior research efforts discussed naming in content-oriented networks and its relationship to security. Notably, @cite_31 proposes establishing bindings between three ICN entities: (1) the real-world identity of the producer of each content object, (2) its name, and (3) the public key used to verify the object signature. Only two of the three possible bindings (real-world identity--name, name--key, and real-world identity--key) are required, while the third can be transitively inherited. However, it is unclear how these bindings can be practically applied in the specific NDN setting.
{ "cite_N": [ "@cite_31" ], "mid": [ "2155527034" ], "abstract": [ "There have been several recent proposals for content-oriented network architectures whose underlying mechanisms are surprisingly similar in spirit, but which differ in many details. In this paper we step back from the mechanistic details and focus only on the area where these approaches have a fundamental difference: naming. In particular, some designs adopt hierarchical, human-readable names, whereas others use self-certifying names. When discussing a network architecture, three of the most important requirements are security, scalability, and flexibility. In this paper we examine the two different naming approaches in terms of these three basic goals." ] }
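The transitive inheritance of the third binding can be sketched in a few lines. The dict-based encoding, the function name, and the sample values below are hypothetical illustrations of the idea, not a mechanism from @cite_31.

```python
# Sketch of the "two bindings imply the third" idea from the passage above:
# given any two of the identity-name, name-key, and identity-key bindings,
# the missing one can be inherited transitively. The dict encoding and all
# names/values are illustrative assumptions.

def complete_bindings(identity_name=None, name_key=None, identity_key=None):
    """Fill in the one missing binding from the two that are given."""
    if identity_name is None:
        # identity -> name via the key both bindings share
        identity_name = {i: next(n for n, k2 in name_key.items() if k2 == k)
                         for i, k in identity_key.items()}
    elif name_key is None:
        # name -> key via the identity both bindings share
        name_key = {identity_name[i]: k for i, k in identity_key.items()}
    elif identity_key is None:
        # identity -> key via the name both bindings share
        identity_key = {i: name_key[n] for i, n in identity_name.items()}
    return identity_name, name_key, identity_key

# Given identity-name and name-key, the identity-key binding follows.
id_name = {"Alice Corp": "/alice/blog"}
name_key = {"/alice/blog": "pk-1234"}
_, _, id_key = complete_bindings(identity_name=id_name, name_key=name_key)
print(id_key)   # {'Alice Corp': 'pk-1234'}
```

This is why only two of the three bindings need to be established explicitly: the composition of any two determines the third, so a trust architecture can choose whichever pair is cheapest to certify.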
1402.3332
1862105587
In contrast to today's IP-based host-oriented Internet architecture, Information-Centric Networking (ICN) emphasizes content by making it directly addressable and routable. Named Data Networking (NDN) architecture is an instance of ICN that is being developed as a candidate next-generation Internet architecture. By opportunistically caching content within the network, NDN appears to be well-suited for large-scale content distribution and for meeting the needs of increasingly mobile and bandwidth-hungry applications that dominate today's Internet. One key feature of NDN is the requirement for each content object to be digitally signed by its producer. Thus, NDN should be, in principle, immune to distributing fake (aka "poisoned") content. However, in practice, this poses two challenges for detecting fake content in NDN routers: (1) overhead due to signature verification and certificate chain traversal, and (2) lack of trust context, i.e., determining which public keys are trusted to verify which content. Because of these issues, NDN does not force routers to verify content signatures, which makes the architecture susceptible to content poisoning attacks. This paper explores root causes of, and some cures for, content poisoning attacks in NDN. In the process, it becomes apparent that meaningful mitigation of content poisoning is contingent upon a network-layer trust management architecture, elements of which we construct, while carefully justifying specific design choices. This work represents the initial effort towards comprehensive trust management for NDN.
Prior work on Denial of Service (DoS) attacks on NDN includes @cite_33 and @cite_27. Both addressed a specific DoS attack type -- Interest Flooding -- based on inundating routers with spurious interest messages. Content poisoning was identified in @cite_19, which also sketched out some tentative countermeasures. Subsequently, @cite_14 proposed the first concrete (albeit only probabilistic) countermeasure, based on analyzing exclusion patterns for cached content.
{ "cite_N": [ "@cite_19", "@cite_27", "@cite_14", "@cite_33" ], "mid": [ "2337767373", "1488072645", "2327226400", "2068860357" ], "abstract": [ "With the growing realization that current Internet protocols are reaching the limits of their senescence, several ongoing research efforts aim to design potential next-generation Internet architectures. Although they vary in maturity and scope, in order to avoid past pitfalls, these efforts seek to treat security and privacy as fundamental requirements. Resilience to Denial-of-Service (DoS) attacks that plague today’s Internet is a major issue for any new architecture and deserves full attention. In this paper, we focus on DoS in Named Data Networking (NDN) – a specific candidate for next-generation Internet architecture designs. By naming data instead of its locations, NDN transforms data into a first-class entity and makes itself an attractive and viable approach to meet the needs for many current and emerging applications. It also incorporates some basic security features that mitigate classes of attacks that are commonly seen today. However, NDN’s resilience to DoS attacks has not been analyzed to-date. This paper represents a first step towards assessment and possible mitigation of DoS in NDN. After identifying and analyzing several new types of attacks, it investigates their variations, effects and counter-measures. This paper also sheds some light on the debate about relative virtues of self-certifying, as opposed to human-readable, names in the context of content-centric networking.", "Distributed Denial of Service (DDoS) attacks are an ongoing problem in today's Internet, where packets from a large number of compromised hosts thwart the paths to the victim site and/or overload the victim machines. 
In a newly proposed future Internet architecture, Named Data Networking (NDN), end users request desired data by sending Interest packets, and the network delivers Data packets upon request only, effectively eliminating many existing DDoS attacks. However, an NDN network can be subject to a new type of DDoS attack, namely Interest packet flooding. In this paper we investigate effective solutions to mitigate Interest flooding. We show that NDN's inherent properties of storing per-packet state on each router and maintaining flow balance (i.e., one Interest packet retrieves at most one Data packet) provide the basis for effective DDoS mitigation algorithms. Our evaluation through simulations shows that the solution can quickly and effectively respond to and mitigate Interest flooding.", "Named-Data Networking (NDN) is a candidate next-generation Internet architecture designed to address some limitations of the current IP-based Internet. NDN uses the pull model for content distribution, whereby content is first explicitly requested before being delivered. Efficiency is obtained via router-based aggregation of closely spaced requests for popular content and content caching in routers. Although it reduces latency and increases bandwidth utilization, router caching makes the network susceptible to new cache-centric attacks, such as content poisoning. In this paper, we propose a ranking algorithm for cached content that allows routers to distinguish good and (likely) bad content. This ranking is based on statistics collected from consumers' actions following delivery of content objects. Experimental results support our assertion that the proposed ranking algorithm can effectively mitigate content poisoning attacks.", "Content-Centric Networking (CCN) is an emerging networking paradigm being considered as a possible replacement for the current IP-based host-centric Internet infrastructure. CCN focuses on content distribution, which is arguably not well served by IP. 
Named-Data Networking (NDN) is an example of CCN. NDN is also an active research project under the NSF Future Internet Architectures (FIA) program. FIA emphasizes security and privacy from the outset and by design. To be a viable Internet architecture, NDN must be resilient against current and emerging threats. This paper focuses on distributed denial-of-service (DDoS) attacks; in particular we address interest flooding, an attack that exploits key architectural features of NDN. We show that an adversary with limited resources can implement such an attack, having a significant impact on network performance. We then introduce Poseidon: a framework for detecting and mitigating interest flooding attacks. Finally, we report on results of extensive simulations assessing the proposed countermeasure." ] }
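A toy sketch in the spirit of the exclusion-based ranking of @cite_14: a router counts how often a delivered cached object is subsequently excluded by consumers and prefers better-ranked copies when satisfying an interest. The counters, the ranking formula, and the serving policy below are illustrative assumptions, not the paper's actual algorithm.

```python
# Toy sketch of ranking cached content by observed consumer exclusions,
# in the spirit of the probabilistic countermeasure described above.
# All data structures and the ranking formula are illustrative assumptions.

from collections import defaultdict

class CacheRanker:
    def __init__(self):
        self.served = defaultdict(int)     # times each object was served
        self.excluded = defaultdict(int)   # times consumers excluded it afterwards

    def record_served(self, obj):
        self.served[obj] += 1

    def record_exclusion(self, obj):
        self.excluded[obj] += 1

    def rank(self, obj):
        """Fraction of deliveries NOT followed by an exclusion (1.0 = clean)."""
        s = self.served[obj]
        return 1.0 if s == 0 else 1.0 - self.excluded[obj] / s

    def best(self, candidates):
        """Among cached objects matching an interest, serve the best-ranked one."""
        return max(candidates, key=self.rank)

r = CacheRanker()
for _ in range(10):
    r.record_served("good"); r.record_served("poisoned")
for _ in range(9):
    r.record_exclusion("poisoned")       # consumers keep rejecting the fake copy
print(r.best(["good", "poisoned"]))      # good
```

The scheme is only probabilistic, as noted above: it needs enough consumer feedback to separate the two copies, and an adversary who also issues exclusions can skew the statistics, which is why the paper frames it as mitigation rather than prevention.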
1402.3332
1862105587
In contrast to today's IP-based host-oriented Internet architecture, Information-Centric Networking (ICN) emphasizes content by making it directly addressable and routable. Named Data Networking (NDN) architecture is an instance of ICN that is being developed as a candidate next-generation Internet architecture. By opportunistically caching content within the network, NDN appears to be well-suited for large-scale content distribution and for meeting the needs of increasingly mobile and bandwidth-hungry applications that dominate today's Internet. One key feature of NDN is the requirement for each content object to be digitally signed by its producer. Thus, NDN should be, in principle, immune to distributing fake (aka "poisoned") content. However, in practice, this poses two challenges for detecting fake content in NDN routers: (1) overhead due to signature verification and certificate chain traversal, and (2) lack of trust context, i.e., determining which public keys are trusted to verify which content. Because of these issues, NDN does not force routers to verify content signatures, which makes the architecture susceptible to content poisoning attacks. This paper explores root causes of, and some cures for, content poisoning attacks in NDN. In the process, it becomes apparent that meaningful mitigation of content poisoning is contingent upon a network-layer trust management architecture, elements of which we construct, while carefully justifying specific design choices. This work represents the initial effort towards comprehensive trust management for NDN.
Trust and trust management systems are well studied in the literature, especially in distributed environments such as MANETs, ad hoc networks, and wireless sensor networks (WSNs). @cite_25 surveys the state of the art in trust management systems for MANETs. It emphasizes the need to combine the notion of "social trust" with that of "quality-of-service (QoS) trust". A similar survey can be found in @cite_2. @cite_17 presents an extensive review of trust management systems in WSNs. Based on unique features of WSNs, best practices for trust management systems are derived and state-of-the-art countermeasures are evaluated against them. @cite_28 discusses security challenges in designing WSNs. It distinguishes between the definitions of trust and security, and shows that cryptography is not always the solution for trust management. Instead, techniques from other domains should be included in defining and formalizing trust.
{ "cite_N": [ "@cite_28", "@cite_17", "@cite_25", "@cite_2" ], "mid": [ "2112235623", "2050067204", "2122516858", "1986481844" ], "abstract": [ "The range of applications of wireless sensor networks is so wide that it tends to invade our everyday life. In the future, a sensor network will survey our health, our home, the roads we follow, the office or the industry we work in or even the aircraft we use, in an attempt to enhance our safety. However, the wireless sensor networks themselves are prone to security attacks. The list of security attacks, although already very long, continues to grow, impeding the expansion of these networks. The trust management schemes consist of a powerful tool for the detection of unexpected node behaviours (either faulty or malicious). Once misbehaving nodes are detected, their neighbours can use this information to avoid cooperating with them, either for data forwarding, data aggregation or any other cooperative function. A variety of trust models which follow different directions regarding the distribution of measurement functionality, the monitored behaviours and the way measurements are used to calculate/define the node’s trustworthiness has been presented in the literature. In this paper, we survey trust models in an attempt to explore the interplay among the implementation requirements, the resource consumption and the achieved security. Our goal is to draw guidelines for the design of deployable trust model designs with respect to the available node and network capabilities and application peculiarities. Copyright © 2010 John Wiley & Sons, Ltd.", "Wireless sensor networks (WSNs) have been proven a useful technology for perceiving information about the physical world and as a consequence have been used in many applications such as measurement of temperature, radiation, flow of liquids, etc. 
The nature of this kind of technology, together with its vulnerability to attacks, means that the security tools required for it must be considered in a special way. The decision making in a WSN is essential for carrying out certain tasks as it aids sensors in establishing collaborations. In order to assist this process, trust management systems could play a relevant role. In this paper, we list the best practices that we consider essential for developing a good trust management system for WSN and make an analysis of the state of the art related to these practices.", "Managing trust in a distributed Mobile Ad Hoc Network (MANET) is challenging when collaboration or cooperation is critical to achieving mission and system goals such as reliability, availability, scalability, and reconfigurability. In defining and managing trust in a military MANET, we must consider the interactions between the composite cognitive, social, information and communication networks, and take into account the severe resource constraints (e.g., computing power, energy, bandwidth, time), and dynamics (e.g., topology changes, node mobility, node failure, propagation channel conditions). We seek to combine the notions of \"social trust\" derived from social networks with \"quality-of-service (QoS) trust\" derived from information and communication networks to obtain a composite trust metric. We discuss the concepts and properties of trust and derive some unique characteristics of trust in MANETs, drawing upon social notions of trust. We provide a survey of trust management schemes developed for MANETs and discuss generally accepted classifications, potential attacks, performance metrics, and trust metrics in MANETs. Finally, we discuss future research areas on trust management in MANETs based on the concept of social and cognitive networks.", "A mobile ad hoc network is a wireless communication network which does not rely on a pre-existing infrastructure or any centralized management. 
Securing the exchanges in such network is compulsory to guarantee a widespread development of services for this kind of networks. The deployment of any security policy requires the definition of a trust model that defines who trusts who and how. There is a host of research efforts in trust models framework to securing mobile ad hoc networks. The majority of well-known approaches is based on public-key certificates, and gave birth to miscellaneous trust models ranging from centralized models to web-of-trust and distributed certificate authorities. In this paper, we survey and classify the existing trust models that are based on public-key certificates proposed for mobile ad hoc networks, and then we discuss and compare them with respect to some relevant criteria. Also, we have developed analysis and comparison among trust models using stochastic Petri nets in order to measure the performance of each one with what relates to the certification service availability." ] }
1402.3332
1862105587
In contrast to today's IP-based host-oriented Internet architecture, Information-Centric Networking (ICN) emphasizes content by making it directly addressable and routable. Named Data Networking (NDN) architecture is an instance of ICN that is being developed as a candidate next-generation Internet architecture. By opportunistically caching content within the network, NDN appears to be well-suited for large-scale content distribution and for meeting the needs of increasingly mobile and bandwidth-hungry applications that dominate today's Internet. One key feature of NDN is the requirement for each content object to be digitally signed by its producer. Thus, NDN should be, in principle, immune to distributing fake (aka "poisoned") content. However, in practice, this poses two challenges for detecting fake content in NDN routers: (1) overhead due to signature verification and certificate chain traversal, and (2) lack of trust context, i.e., determining which public keys are trusted to verify which content. Because of these issues, NDN does not force routers to verify content signatures, which makes the architecture susceptible to content poisoning attacks. This paper explores root causes of, and some cures for, content poisoning attacks in NDN. In the process, it becomes apparent that meaningful mitigation of content poisoning is contingent upon a network-layer trust management architecture, elements of which we construct, while carefully justifying specific design choices. This work represents the initial effort towards comprehensive trust management for NDN.
Since a single trust metric might not suffice to express the trustworthiness of nodes, a multi-dimensional trust management framework is suggested in @cite_9 . Three metrics are used: (1) node collaboration in performing tasks, such as packet forwarding, (2) node behavior, e.g., flagging nodes that flood the network, and (3) the correctness of node-disseminated information, e.g., routing updates.
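To make the multi-dimensional idea concrete, here is a minimal illustrative sketch (not the cited paper's actual scheme) of combining the three dimensions into one composite score; the weights and the simple weighted average are assumptions for illustration only.

```python
# Illustrative sketch: composite trust from three dimensions.
# The weights and the weighted-average rule are assumptions, not the
# cited framework's actual combination function.

def composite_trust(collaboration, behavior, reference,
                    weights=(0.4, 0.3, 0.3)):
    """Each dimension is a score in [0, 1]; returns a weighted average."""
    dims = (collaboration, behavior, reference)
    if not all(0.0 <= d <= 1.0 for d in dims):
        raise ValueError("trust scores must lie in [0, 1]")
    return sum(w * d for w, d in zip(weights, dims))

# A node that forwards packets reliably (0.9), floods occasionally (0.5),
# and sends correct routing updates (0.8):
score = composite_trust(0.9, 0.5, 0.8)
```

Keeping the dimensions separate until the final combination lets a neighbor weight, say, routing correctness more heavily than forwarding when choosing a next hop.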
{ "cite_N": [ "@cite_9" ], "mid": [ "1974123199" ], "abstract": [ "Nodes in Mobile Ad hoc Networks (MANETs) are required to relay data packets to enable communication between other nodes that are not in radio range with each other. However, whether for selfish or malicious reasons, a node may fail to cooperate during the network operations or even attempt to disturb them, both of which have been recognized as misbehaviors. Various trust management schemes have been studied to assess the behaviors of nodes so as to detect and mitigate node misbehaviors inMANETs. Most of existing schemes model a node's trustworthiness along a single dimension, combining all of the available evidence to calculate a single, scalar trust metric. A single measure, however, may not be expressive enough to adequately describe a node's trustworthiness in many scenarios. In this paper, we describe a multi-dimensional framework to evaluate the trustworthiness of MANET node from multiple perspectives. Our scheme evaluates trustworthiness from three perspectives: collaboration trust, behavioral trust, and reference trust. Different types of observations are used to independently derive values for these three trust dimensions. We present simulation results that illustrate the effectiveness of the proposed scheme in several scenarios." ] }
1402.3332
1862105587
In contrast to today's IP-based host-oriented Internet architecture, Information-Centric Networking (ICN) emphasizes content by making it directly addressable and routable. Named Data Networking (NDN) architecture is an instance of ICN that is being developed as a candidate next-generation Internet architecture. By opportunistically caching content within the network, NDN appears to be well-suited for large-scale content distribution and for meeting the needs of increasingly mobile and bandwidth-hungry applications that dominate today's Internet. One key feature of NDN is the requirement for each content object to be digitally signed by its producer. Thus, NDN should be, in principle, immune to distributing fake (aka "poisoned") content. However, in practice, this poses two challenges for detecting fake content in NDN routers: (1) overhead due to signature verification and certificate chain traversal, and (2) lack of trust context, i.e., determining which public keys are trusted to verify which content. Because of these issues, NDN does not force routers to verify content signatures, which makes the architecture susceptible to content poisoning attacks. This paper explores root causes of, and some cures for, content poisoning attacks in NDN. In the process, it becomes apparent that meaningful mitigation of content poisoning is contingent upon a network-layer trust management architecture, elements of which we construct, while carefully justifying specific design choices. This work represents the initial effort towards comprehensive trust management for NDN.
@cite_36 proposes a framework for calculating a network entity's reputation score based on feedback from previous interactions. In this framework, each service can apply its own reputation scoring function. The framework also supports caching of trust evaluations to reduce network overhead, and provides an API for reporting feedback and calculating reputation scores.
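The combination of pluggable scoring functions and cached evaluations can be sketched as follows; the class and method names are hypothetical and are not the cited system's actual API.

```python
# Hypothetical sketch (names and API are assumptions): feedback is stored
# per entity, each service plugs in its own scoring function, and computed
# reputations are cached until new feedback invalidates them.

class ReputationService:
    def __init__(self):
        self._feedback = {}   # entity -> list of feedback values in [0, 1]
        self._cache = {}      # (entity, scorer name) -> cached score

    def report(self, entity, value):
        self._feedback.setdefault(entity, []).append(value)
        # new feedback invalidates cached scores for this entity only
        self._cache = {k: v for k, v in self._cache.items() if k[0] != entity}

    def reputation(self, entity, scorer):
        key = (entity, scorer.__name__)
        if key not in self._cache:
            self._cache[key] = scorer(self._feedback.get(entity, []))
        return self._cache[key]

def mean_score(feedback):
    """One possible scoring function: average feedback, neutral prior 0.5."""
    return sum(feedback) / len(feedback) if feedback else 0.5
```

Two services querying the same feedback store can pass different `scorer` functions, which is the flexibility the framework emphasizes.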
{ "cite_N": [ "@cite_36" ], "mid": [ "2105040129" ], "abstract": [ "Many reputation management systems have been developed under the assumption that each entity in the system will use a variant of the same scoring function. Much of the previous work in reputation management has focused on providing robustness and improving performance for a given reputation scheme. In this paper, we present a reputation-based trust management framework that supports the synthesis of trust-related feedback from many different entities while also providing each entity with the flexibility to apply different scoring functions over the same feedback data for customized trust evaluations. We also propose a novel scheme to cache trust values based on recent client activity. To evaluate our approach, we implemented our trust management service and tested it on a realistic application scenario in both LAN and WAN distributed environments. Our results indicate that our trust management service can effectively support multiple scoring functions with low overhead and high availability." ] }
1402.3332
1862105587
In contrast to today's IP-based host-oriented Internet architecture, Information-Centric Networking (ICN) emphasizes content by making it directly addressable and routable. Named Data Networking (NDN) architecture is an instance of ICN that is being developed as a candidate next-generation Internet architecture. By opportunistically caching content within the network, NDN appears to be well-suited for large-scale content distribution and for meeting the needs of increasingly mobile and bandwidth-hungry applications that dominate today's Internet. One key feature of NDN is the requirement for each content object to be digitally signed by its producer. Thus, NDN should be, in principle, immune to distributing fake (aka "poisoned") content. However, in practice, this poses two challenges for detecting fake content in NDN routers: (1) overhead due to signature verification and certificate chain traversal, and (2) lack of trust context, i.e., determining which public keys are trusted to verify which content. Because of these issues, NDN does not force routers to verify content signatures, which makes the architecture susceptible to content poisoning attacks. This paper explores root causes of, and some cures for, content poisoning attacks in NDN. In the process, it becomes apparent that meaningful mitigation of content poisoning is contingent upon a network-layer trust management architecture, elements of which we construct, while carefully justifying specific design choices. This work represents the initial effort towards comprehensive trust management for NDN.
PolicyMaker @cite_26 is a tool that provides privacy and authenticity for network services. It offers a flexible, unified language for expressing policies and trust relationships, and includes a local (per-site or per-network) engine for carrying out all trust operations, such as granting access to services.
{ "cite_N": [ "@cite_26" ], "mid": [ "2170496240" ], "abstract": [ "We identify the trust management problem as a distinct and important component of security in network services. Aspects of the trust management problem include formulating security policies and security credentials, determining whether particular sets of credentials satisfy the relevant policies, and deferring trust to third parties. Existing systems that support security in networked applications, including X.509 and PGP, address only narrow subsets of the overall trust management problem and often do so in a manner that is appropriate to only one application. This paper presents a comprehensive approach to trust management, based on a simple language for specifying trusted actions and trust relationships. It also describes a prototype implementation of a new trust management system, called PolicyMaker, that will facilitate the development of security features in a wide range of network services." ] }
1402.3782
2952815453
We are given a set of @math jobs that have to be executed on a set of @math speed-scalable machines that can vary their speeds dynamically using the energy model introduced in [, FOCS'95]. Every job @math is characterized by its release date @math , its deadline @math , its processing volume @math if @math is executed on machine @math and its weight @math . We are also given a budget of energy @math and our objective is to maximize the weighted throughput, i.e. the total weight of jobs that are completed between their respective release dates and deadlines. We propose a polynomial-time approximation algorithm where the preemption of the jobs is allowed but not their migration. Our algorithm uses a primal-dual approach on a linearized version of a convex program with linear constraints. Furthermore, we present two optimal algorithms for the non-preemptive case where the number of machines is bounded by a fixed constant. More specifically, we consider: (a) the case of identical processing volumes, i.e. @math for every @math and @math , for which we present a polynomial-time algorithm for the unweighted version, which becomes a pseudopolynomial-time algorithm for the weighted throughput version, and (b) the case of agreeable instances, i.e. for which @math if and only if @math , for which we present a pseudopolynomial-time algorithm. Both algorithms are based on a discretization of the problem and the use of dynamic programming.
The multiple-machine case where both the preemption and the migration of jobs are allowed can be solved in polynomial time @cite_24 , @cite_15 and @cite_22 . @cite_18 considered the multiple-machine problem where the preemption of jobs is allowed but not their migration. They showed that the problem is polynomial-time solvable for agreeable instances when the jobs have the same processing volumes. They also showed that it becomes strongly NP-hard for general instances, even for jobs with equal processing volumes, and for this case they proposed an @math -approximation algorithm. For jobs with arbitrary processing volumes, they showed that the problem is NP-hard even for instances with common release dates and common deadlines; a @math -approximation algorithm was proposed for instances with common release dates or common deadlines, and an @math -approximation algorithm for instances with agreeable deadlines. @cite_21 proposed a @math -approximation algorithm for general instances, where @math is the @math -th Bell number. Recently, the approximation ratio for agreeable instances was improved to @math in @cite_10 . For the non-preemptive multiple-machine energy-minimization problem, the only known result is a non-constant-factor approximation algorithm presented in @cite_10 .
{ "cite_N": [ "@cite_18", "@cite_22", "@cite_21", "@cite_24", "@cite_15", "@cite_10" ], "mid": [ "", "", "2126119137", "2049060423", "1984630032", "2041438374" ], "abstract": [ "", "", "This paper investigates the problem of scheduling jobs on multiple speed-scaled processors without migration, i.e., we have constant α > 1 such that running a processor at speed s results in energy consumption s#945; per time unit. We consider the general case where each job has a monotonously increasing cost function that penalizes delay. This includes the so far considered cases of deadlines and flow time. For any type of delay cost functions, we obtain the following results: Any β-approximation algorithm for a single processor yields a randomized βBα-approximation algorithm for multiple processors, where Bα is the αth Bell number, that is, the number of partitions of a set of size α. Analogously, we show that any β-competitive online algorithm for a single processor yields a βBα-competitive online algorithm for multiple processors. Finally, we show that any β-approximation algorithm for multiple processors with migration yields a deterministic βBα-approximation algorithm for multiple processors without migration. These facts improve several approximation ratios and lead to new results. For instance, we obtain the first constant factor online and offline approximation algorithm for multiple processors without migration for arbitrary release times, deadlines, and job sizes. All algorithms are based on the surprising fact that we can remove migration with a blowup of Bα in expectation.", "We investigate a very basic problem in dynamic speed scaling where a sequence of jobs, each specified by an arrival time, a deadline and a processing volume, has to be processed so as to minimize energy consumption. Previous work has focused mostly on the setting where a single variable-speed processor is available. 
In this paper we study multi-processor environments with m parallel variable-speed processors assuming that job migration is allowed, i.e. whenever a job is preempted it may be moved to a different processor. We first study the offline problem and show that optimal schedules can be computed efficiently in polynomial time. In contrast to a previously known strategy, our algorithm does not resort to linear programming. We develop a fully combinatorial algorithm that relies on repeated maximum flow computations. The approach might be useful to solve other problems in dynamic speed scaling. For the online problem, we extend two algorithms Optimal Available and Average Rate proposed by [16] for the single processor setting. We prove that Optimal Available is αα-competitive, as in the single processor case. Here α>1 is the exponent of the power consumption function. While it is straightforward to extend Optimal Available to parallel processing environments, the competitive analysis becomes considerably more involved. For Average Rate we show a competitiveness of (3 )α 2 + 2α.", "In this paper we investigate dynamic speed scaling, a technique to reduce energy consumption in variable-speed microprocessors. While prior research has focused mostly on single processor environments, in this paper we investigate multiprocessor settings. We study the basic problem of scheduling a set of jobs, each specified by a release date, a deadline and a processing volume, on variable-speed processors so as to minimize the total energy consumption.", "We consider the following offline variant of the speed scaling problem introduced by We are given a set of jobs and we have a variable-speed processor to process them. The higher the processor speed, the higher the energy consumption. Each job is associated with its own release time, deadline, and processing volume. The objective is to find a feasible schedule that minimizes the energy consumption. 
In contrast to , no preemption of jobs is allowed. Unlike the preemptive version that is known to be in P, the non-preemptive version of speed scaling is strongly NP-hard. In this work, we present a constant factor approximation algorithm for it. The main technical idea is to transform the problem into the unrelated machine scheduling problem with @math -norm objective." ] }
1402.3511
2952276042
Sequence prediction and classification are ubiquitous and challenging problems in machine learning that can require identifying complex dependencies between temporally distant inputs. Recurrent Neural Networks (RNNs) have the ability, in theory, to cope with these temporal dependencies by virtue of the short-term memory implemented by their recurrent (feedback) connections. However, in practice they are difficult to train successfully when the long-term memory is required. This paper introduces a simple, yet powerful modification to the standard RNN architecture, the Clockwork RNN (CW-RNN), in which the hidden layer is partitioned into separate modules, each processing inputs at its own temporal granularity, making computations only at its prescribed clock rate. Rather than making the standard RNN models more complex, CW-RNN reduces the number of RNN parameters, improves the performance significantly in the tasks tested, and speeds up the network evaluation. The network is demonstrated in preliminary experiments involving two tasks: audio signal generation and TIMIT spoken word classification, where it outperforms both RNN and LSTM networks.
One model that is similar in spirit to our approach is the NARX RNN (Non-linear Auto-Regressive model with eXogenous inputs) @cite_15 . Instead of simplifying the network, it introduces additional sets of recurrent connections with time lags of @math , @math , ..., @math time steps. These additional connections help to bridge long time lags, but they introduce many extra parameters that make NARX RNN training more difficult and make the network run @math times slower.
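The delayed-connection idea can be sketched in a single recurrent step; the dimensions, the tanh nonlinearity, and the function name are illustrative assumptions, not the cited model's exact formulation.

```python
import numpy as np

# Minimal sketch of the NARX idea: the hidden state at time t receives
# recurrent input not only from t-1 but from each of the last k steps,
# with one weight matrix per time lag (the source of the extra parameters).

def narx_step(x_t, past_states, W_in, W_rec_list):
    """past_states[d] is the hidden state d+1 steps ago;
    W_rec_list[d] is the recurrent weight matrix for that lag."""
    pre = W_in @ x_t
    for W_d, h_d in zip(W_rec_list, past_states):
        pre = pre + W_d @ h_d          # delayed recurrent contribution
    return np.tanh(pre)
```

Each additional lag adds a full hidden-to-hidden weight matrix, which is why both the parameter count and the per-step cost grow with the maximum lag.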
{ "cite_N": [ "@cite_15" ], "mid": [ "2032676284" ], "abstract": [ "Abstract A Recurrent Neural Network (RNN) is a powerful connectionist model that can be applied to many challenging sequential problems, including problems that naturally arise in language and speech. However, RNNs are extremely hard to train on problems that have long-term dependencies, where it is necessary to remember events for many timesteps before using them to make a prediction. In this paper we consider the problem of training RNNs to predict sequences that exhibit significant long-term dependencies, focusing on a serial recall task where the RNN needs to remember a sequence of characters for a large number of steps before reconstructing it. We introduce the Temporal-Kernel Recurrent Neural Network (TKRNN), which is a variant of the RNN that can cope with long-term dependencies much more easily than a standard RNN, and show that the TKRNN develops short-term memory that successfully solves the serial recall task by representing the input string with a stable state of its hidden units." ] }
1402.3511
2952276042
Sequence prediction and classification are ubiquitous and challenging problems in machine learning that can require identifying complex dependencies between temporally distant inputs. Recurrent Neural Networks (RNNs) have the ability, in theory, to cope with these temporal dependencies by virtue of the short-term memory implemented by their recurrent (feedback) connections. However, in practice they are difficult to train successfully when the long-term memory is required. This paper introduces a simple, yet powerful modification to the standard RNN architecture, the Clockwork RNN (CW-RNN), in which the hidden layer is partitioned into separate modules, each processing inputs at its own temporal granularity, making computations only at its prescribed clock rate. Rather than making the standard RNN models more complex, CW-RNN reduces the number of RNN parameters, improves the performance significantly in the tasks tested, and speeds up the network evaluation. The network is demonstrated in preliminary experiments involving two tasks: audio signal generation and TIMIT spoken word classification, where it outperforms both RNN and LSTM networks.
Long Short-Term Memory (LSTM; Hochreiter:97lstm ) uses a specialized architecture that allows information to be stored indefinitely in a linear unit called the constant error carousel (CEC). The cell containing the CEC has a set of multiplicative units (gates), connected to other cells, that regulate when new information enters the CEC (input gate), when the activation of the CEC is output to the rest of the network (output gate), and when the activation decays or is "forgotten" (forget gate). These networks have recently been very successful in speech and handwriting recognition @cite_12 @cite_10 @cite_7 .
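The gating just described can be written out as a single step of a textbook LSTM cell; the stacked weight layout is an assumption for compactness.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One LSTM step in the standard textbook formulation (weight layout is an
# assumption). The cell state c is the constant error carousel: with the
# forget gate near 1 and the input gate near 0 it is copied unchanged.

def lstm_step(x, h_prev, c_prev, W, U, b):
    """W, U, b hold the stacked parameters of the i, f, o gates and the
    candidate g, each of hidden size n (4n rows in total)."""
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[0 * n:1 * n])        # input gate: admit new information
    f = sigmoid(z[1 * n:2 * n])        # forget gate: decay the CEC
    o = sigmoid(z[2 * n:3 * n])        # output gate: expose the CEC
    g = np.tanh(z[3 * n:4 * n])        # candidate cell update
    c = f * c_prev + i * g             # constant error carousel update
    h = o * np.tanh(c)
    return h, c
```

Because `c` is updated additively rather than through a squashing recurrence, gradients can flow through it over many steps, which is what mitigates the vanishing-gradient problem.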
{ "cite_N": [ "@cite_10", "@cite_7", "@cite_12" ], "mid": [ "2122585011", "1499864241", "78022920" ], "abstract": [ "Recognizing lines of unconstrained handwritten text is a challenging task. The difficulty of segmenting cursive or overlapping characters, combined with the need to exploit surrounding context, has led to low recognition rates for even the best current recognizers. Most recent progress in the field has been made either through improved preprocessing or through advances in language modeling. Relatively little work has been done on the basic recognition algorithms. Indeed, most systems rely on the same hidden Markov models that have been used for decades in speech and handwriting recognition, despite their well-known shortcomings. This paper proposes an alternative approach based on a novel type of recurrent neural network, specifically designed for sequence labeling tasks where the data is hard to segment and contains long-range bidirectional interdependencies. In experiments on two large unconstrained handwriting databases, our approach achieves word recognition accuracies of 79.7 percent on online data and 74.1 percent on offline data, significantly outperforming a state-of-the-art HMM-based system. In addition, we demonstrate the network's robustness to lexicon size, measure the individual influence of its hidden layers, and analyze its use of context. Last, we provide an in-depth discussion of the differences between the network and HMMs, suggesting reasons for the network's superior performance.", "Long Short-Term Memory (LSTM) is a recurrent neural network (RNN) architecture that has been designed to address the vanishing and exploding gradient problems of conventional RNNs. Unlike feedforward neural networks, RNNs have cyclic connections making them powerful for modeling sequences. 
They have been successfully used for sequence labeling and sequence prediction tasks, such as handwriting recognition, language modeling, phonetic labeling of acoustic frames. However, in contrast to the deep neural networks, the use of RNNs in speech recognition has been limited to phone recognition in small scale tasks. In this paper, we present novel LSTM based RNN architectures which make more effective use of model parameters to train acoustic models for large vocabulary speech recognition. We train and compare LSTM, RNN and DNN models at various numbers of parameters and configurations. We show that LSTM models converge quickly and give state of the art speech recognition performance for relatively small sized models.", "A system that could be quickly retrained on different corpora would be of great benefit to speech recognition. Recurrent Neural Networks (RNNs) are able to transfer knowledge by simply storing and then retraining their weights. In this report, we partition the TIDIGITS database into utterances spoken by men, women, boys and girls, and successively retrain a Long Short Term Memory (LSTM) RNN on them. We find that the network rapidly adapts to new subsets of the data, and achieves greater accuracy than when trained on them from scratch. This would be useful for applications requiring either cross corpus adaptation or continually expanding datasets." ] }
1402.3511
2952276042
Sequence prediction and classification are ubiquitous and challenging problems in machine learning that can require identifying complex dependencies between temporally distant inputs. Recurrent Neural Networks (RNNs) have the ability, in theory, to cope with these temporal dependencies by virtue of the short-term memory implemented by their recurrent (feedback) connections. However, in practice they are difficult to train successfully when the long-term memory is required. This paper introduces a simple, yet powerful modification to the standard RNN architecture, the Clockwork RNN (CW-RNN), in which the hidden layer is partitioned into separate modules, each processing inputs at its own temporal granularity, making computations only at its prescribed clock rate. Rather than making the standard RNN models more complex, CW-RNN reduces the number of RNN parameters, improves the performance significantly in the tasks tested, and speeds up the network evaluation. The network is demonstrated in preliminary experiments involving two tasks: audio signal generation and TIMIT spoken word classification, where it outperforms both RNN and LSTM networks.
Stacking LSTMs into several layers enables hierarchical sequence processing. Such a hierarchy, equipped with Connectionist Temporal Classification (CTC; Graves:06icml ), performs simultaneous segmentation and recognition of sequences. Its deep variant currently holds the state-of-the-art result for phoneme recognition on the TIMIT database @cite_4 .
{ "cite_N": [ "@cite_4" ], "mid": [ "2950689855" ], "abstract": [ "Recurrent neural networks (RNNs) are a powerful model for sequential data. End-to-end training methods such as Connectionist Temporal Classification make it possible to train RNNs for sequence labelling problems where the input-output alignment is unknown. The combination of these methods with the Long Short-term Memory RNN architecture has proved particularly fruitful, delivering state-of-the-art results in cursive handwriting recognition. However RNN performance in speech recognition has so far been disappointing, with better results returned by deep feedforward networks. This paper investigates , which combine the multiple levels of representation that have proved so effective in deep networks with the flexible use of long range context that empowers RNNs. When trained end-to-end with suitable regularisation, we find that deep Long Short-term Memory RNNs achieve a test set error of 17.7 on the TIMIT phoneme recognition benchmark, which to our knowledge is the best recorded score." ] }
1402.3511
2952276042
Sequence prediction and classification are ubiquitous and challenging problems in machine learning that can require identifying complex dependencies between temporally distant inputs. Recurrent Neural Networks (RNNs) have the ability, in theory, to cope with these temporal dependencies by virtue of the short-term memory implemented by their recurrent (feedback) connections. However, in practice they are difficult to train successfully when the long-term memory is required. This paper introduces a simple, yet powerful modification to the standard RNN architecture, the Clockwork RNN (CW-RNN), in which the hidden layer is partitioned into separate modules, each processing inputs at its own temporal granularity, making computations only at its prescribed clock rate. Rather than making the standard RNN models more complex, CW-RNN reduces the number of RNN parameters, improves the performance significantly in the tasks tested, and speeds up the network evaluation. The network is demonstrated in preliminary experiments involving two tasks: audio signal generation and TIMIT spoken word classification, where it outperforms both RNN and LSTM networks.
The Temporal Transition Hierarchy (TTH; Ring:93 ) incrementally adds high-order neurons in order to build a memory that disambiguates the input at the current time step. This approach can, in principle, bridge time intervals of any length, but at the cost of a proportionally growing network size. The model was recently improved by the addition of recurrent connections @cite_16 , which prevent the network from bloating by reusing the high-level nodes.
{ "cite_N": [ "@cite_16" ], "mid": [ "2183208319" ], "abstract": [ "Continual learning is the unending process of learning new things on top of what has already been learned (Ring 1994). Temporal Transition Hierarchies (TTHs) were developed to allow prediction of Markov-k sequences in a way that was consistent with the needs of a continual-learning agent (Ring 1993). However, the algorithm could not learn arbitrary temporal contingencies. This paper describes Recurrent Transition Hierarchies (RTH), a learning method that combines several properties desirable for agents that must learn as they go. In particular, it learns online and incrementally, autonomously discovering new features as learning progresses. It requires no reset or episodes. It has a simple learning rule with update complexity linear in the number of parameters." ] }
1402.3511
2952276042
Sequence prediction and classification are ubiquitous and challenging problems in machine learning that can require identifying complex dependencies between temporally distant inputs. Recurrent Neural Networks (RNNs) have the ability, in theory, to cope with these temporal dependencies by virtue of the short-term memory implemented by their recurrent (feedback) connections. However, in practice they are difficult to train successfully when the long-term memory is required. This paper introduces a simple, yet powerful modification to the standard RNN architecture, the Clockwork RNN (CW-RNN), in which the hidden layer is partitioned into separate modules, each processing inputs at its own temporal granularity, making computations only at its prescribed clock rate. Rather than making the standard RNN models more complex, CW-RNN reduces the number of RNN parameters, improves the performance significantly in the tasks tested, and speeds up the network evaluation. The network is demonstrated in preliminary experiments involving two tasks: audio signal generation and TIMIT spoken word classification, where it outperforms both RNN and LSTM networks.
One of the earliest attempts to enable RNNs to handle long-term dependencies is the Reduced Description Network @cite_8 @cite_5 . It uses leaky neurons whose activations change only slightly in response to their inputs. This technique was later picked up again by Echo State Networks (ESN; jaeger:techreport2002 ).
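A leaky neuron's update can be sketched in one line; the leak rate `alpha` and the tanh nonlinearity are illustrative assumptions.

```python
import numpy as np

# Sketch of a leaky-neuron update: the new state is a convex mixture of the
# previous activation and the freshly computed input, so the state changes
# only slightly per step. A small alpha yields a slow unit that retains
# global, long-range context; alpha = 1 recovers a standard RNN step.

def leaky_step(h_prev, x, W_in, W_rec, alpha=0.1):
    return (1.0 - alpha) * h_prev + alpha * np.tanh(W_in @ x + W_rec @ h_prev)
```

Mixing units with different `alpha` values is what lets some hidden units operate on slow time scales while others track fast local structure.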
{ "cite_N": [ "@cite_5", "@cite_8" ], "mid": [ "2067516917", "2128499899" ], "abstract": [ "Abstract In algorithmic music composition, a simple technique involves selecting notes sequentially according to a transition table that specifies the probability of the next note as a function of the previous context. An extension of this transition-table approach is described, using a recurrent autopredictive connectionist network called CONCERT. CONCERT is trained on a set of pieces with the aim of extracting stylistic regularities. CONCERT can then be used to compose new pieces. A central ingredient of CONCERT is the incorporation of psychologically grounded representations of pitch, duration and harmonic structure. CONCERT was tested on sets of examples artificially generated according to simple rules and was shown to learn the underlying structure, even where other approaches failed. In larger experiments, CONCERT was trained on sets of J. S. Bach pieces and traditional European folk melodies and was then allowed to compose novel melodies. Although the compositions are occasionally pleasant, and are...", "Learning structure in temporally-extended sequences is a difficult computational problem because only a fraction of the relevant information is available at any instant. Although variants of back propagation can in principle be used to find structure in sequences, in practice they are not sufficiently powerful to discover arbitrary contingencies, especially those spanning long temporal intervals or involving high order statistics. For example, in designing a connectionist network for music composition, we have encountered the problem that the net is able to learn musical structure that occurs locally in time--e.g., relations among notes within a musical phrase--but not structure that occurs over longer time periods--e.g., relations among phrases. 
To address this problem, we require a means of constructing a reduced description of the sequence that makes global aspects more explicit or more readily detectable. I propose to achieve this using hidden units that operate with different time constants. Simulation experiments indicate that slower time-scale hidden units are able to pick up global structure, structure that simply can not be learned by standard back propagation." ] }
1402.3511
2952276042
Sequence prediction and classification are ubiquitous and challenging problems in machine learning that can require identifying complex dependencies between temporally distant inputs. Recurrent Neural Networks (RNNs) have the ability, in theory, to cope with these temporal dependencies by virtue of the short-term memory implemented by their recurrent (feedback) connections. However, in practice they are difficult to train successfully when the long-term memory is required. This paper introduces a simple, yet powerful modification to the standard RNN architecture, the Clockwork RNN (CW-RNN), in which the hidden layer is partitioned into separate modules, each processing inputs at its own temporal granularity, making computations only at its prescribed clock rate. Rather than making the standard RNN models more complex, CW-RNN reduces the number of RNN parameters, improves the performance significantly in the tasks tested, and speeds up the network evaluation. The network is demonstrated in preliminary experiments involving two tasks: audio signal generation and TIMIT spoken word classification, where it outperforms both RNN and LSTM networks.
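The clockwork update described in this abstract can be sketched as follows. This is a minimal, assumed reading of the scheme: units are tagged with clock periods, only units whose period divides the current timestep are updated, and a mask lets a unit receive recurrent input only from units with an equal or longer period; module sizes and weights here are illustrative.

```python
import numpy as np

def cw_rnn_step(t, h, x, W_in, W_rec, periods):
    """One timestep of a simplified Clockwork-RNN hidden layer.

    periods[i] is the clock period of hidden unit i (units sharing a
    period form a module).  At step t only units whose period divides t
    compute a new activation; the rest keep their previous state.
    """
    periods = np.asarray(periods)
    # recurrent connections only from slower-or-equal units to faster ones
    mask = (periods[None, :] >= periods[:, None]).astype(float)
    active = (t % periods) == 0
    proposal = np.tanh(W_in @ x + (W_rec * mask) @ h)
    return np.where(active, proposal, h)

# illustrative usage: three modules with exponentially spaced periods
periods = [1, 1, 2, 2, 4, 4]
rng = np.random.default_rng(0)
W_in = rng.standard_normal((6, 3)) * 0.1
W_rec = rng.standard_normal((6, 6)) * 0.1
h = cw_rnn_step(1, np.zeros(6), np.ones(3), W_in, W_rec, periods)
```

At t = 1 only the period-1 module fires, so the slower modules' states are untouched; this gating is also why the CW-RNN is cheaper to evaluate than a fully updated RNN of the same size.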
Evolino @cite_17 @cite_11 feeds the input to an RNN (which can be e.g. LSTM to cope with long time lags) and then transforms the RNN outputs to the target sequences via an optimal linear mapping that is computed analytically by pseudo-inverse. The RNN is trained by an evolutionary algorithm and therefore does not suffer from the vanishing-gradient problem. Evolino outperformed LSTM on a set of synthetic problems and was used to perform complex robotic manipulation @cite_0 .
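The analytic readout at Evolino's core is ordinary least squares via the Moore-Penrose pseudo-inverse. In this sketch the hidden-state matrix H is a random stand-in for actual RNN activations, and the targets are constructed to be linearly reachable; only the shapes and the pinv step reflect the cited method.

```python
import numpy as np

# Hypothetical shapes: T timesteps of hidden activations H (T x n)
# and target outputs Y (T x k).  Evolino computes the output weights
# analytically, so only the recurrent weights need to be evolved.
rng = np.random.default_rng(1)
T, n, k = 50, 8, 2
H = rng.standard_normal((T, n))   # stand-in for RNN hidden states
W_true = rng.standard_normal((n, k))
Y = H @ W_true                    # targets the readout should fit

W_out = np.linalg.pinv(H) @ Y     # optimal linear mapping (least squares)
```

Because the mapping is closed-form, fitness evaluation during evolution reduces to one pseudo-inverse per candidate network, with no gradient propagation through time.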
{ "cite_N": [ "@cite_0", "@cite_11", "@cite_17" ], "mid": [ "1964502429", "118333330", "2147107577" ], "abstract": [ "Tying suture knots is a time-consuming task performed frequently during Minimally Invasive Surgery (MIS). Automating this task could greatly reduce total surgery time for patients. Current solutions to this problem replay manually programmed trajectories, but a more general and robust approach is to use supervised machine learning to smooth surgeon-given training trajectories and generalize from them. Since knottying generally requires a controller with internal memory to distinguish between identical inputs that require different actions at different points along a trajectory, it would be impossible to teach the system using traditional feedforward neural nets or support vector machines. Instead we exploit more powerful, recurrent neural networks (RNNs) with adaptive internal states. Results obtained using LSTM RNNs trained by the recent Evolino algorithm show that this approach can significantly increase the efficiency of suture knot tying in MIS over preprogrammed control.", "An oxygen absorbent comprising a met al powder and a met al halide coated thereon is disclosed.", "In recent years, gradient-based LSTM recurrent neural networks (RNNs) solved many previously RNN-unlearnable tasks. Sometimes, however, gradient information is of little use for training RNNs, due to numerous local minima. For such cases, we present a novel method: EVOlution of systems with LINear Outputs (Evolino). Evolino evolves weights to the nonlinear, hidden nodes of RNNs while computing optimal linear mappings from hidden state to output, using methods such as pseudo-inverse-based linear regression. If we instead use quadratic programming to maximize the margin, we obtain the first evolutionary recurrent support vector machines. 
We show that Evolino-based LSTM can solve tasks that Echo State nets (Jaeger, 2004a) cannot and achieves higher accuracy in certain continuous function generation tasks than conventional gradient descent RNNs, including gradient-based LSTM." ] }
1402.3511
2952276042
Sequence prediction and classification are ubiquitous and challenging problems in machine learning that can require identifying complex dependencies between temporally distant inputs. Recurrent Neural Networks (RNNs) have the ability, in theory, to cope with these temporal dependencies by virtue of the short-term memory implemented by their recurrent (feedback) connections. However, in practice they are difficult to train successfully when the long-term memory is required. This paper introduces a simple, yet powerful modification to the standard RNN architecture, the Clockwork RNN (CW-RNN), in which the hidden layer is partitioned into separate modules, each processing inputs at its own temporal granularity, making computations only at its prescribed clock rate. Rather than making the standard RNN models more complex, CW-RNN reduces the number of RNN parameters, improves the performance significantly in the tasks tested, and speeds up the network evaluation. The network is demonstrated in preliminary experiments involving two tasks: audio signal generation and TIMIT spoken word classification, where it outperforms both RNN and LSTM networks.
A modern theory of why RNNs fail to learn long-term dependencies is that simple gradient descent fails to optimize them correctly. One attempt to mitigate this problem is Hessian-Free (HF) optimization @cite_3 , an adapted second-order training method that has been demonstrated to work well with RNNs. It allows RNNs to solve some long-time-lag problems that were impossible with stochastic gradient descent, and the performance of HF-trained RNNs on rather synthetic long-term memory benchmarks approaches that of LSTM, though the number of optimization steps in HF-RNN is usually greater. HF optimization is orthogonal to the choice of network architecture, so both LSTM and CW-RNN can still benefit from it.
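The core of HF optimization, solving a Newton-like system with conjugate gradient using only Hessian-vector products, can be sketched on a toy quadratic objective. This is a deliberately stripped-down illustration: the actual method of the cited work uses the Gauss-Newton matrix, structural damping, and mini-batches, all omitted here; the finite-difference Hessian-vector product is one common way to avoid forming the Hessian.

```python
import numpy as np

def hv(grad, w, v, eps=1e-6):
    # Hessian-vector product by forward finite differences of the gradient
    return (grad(w + eps * v) - grad(w)) / eps

def cg_solve(matvec, rhs, iters=25, tol=1e-8):
    # conjugate gradient for matvec(x) = rhs, matvec symmetric positive definite
    x = np.zeros_like(rhs)
    r = rhs - matvec(x)
    p = r.copy()
    for _ in range(iters):
        Ap = matvec(p)
        alpha = (r @ r) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x

# toy quadratic objective f(w) = 0.5 w^T A w - b^T w, so grad = A w - b
rng = np.random.default_rng(2)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)       # symmetric positive definite
b = rng.standard_normal(5)
grad = lambda w: A @ w - b

w = np.zeros(5)
d = cg_solve(lambda v: hv(grad, w, v), -grad(w))  # Newton direction
w = w + d
```

On a quadratic, one outer HF step is exact Newton, so the update lands on the minimizer; on a real RNN loss the system is re-solved at each outer iteration.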
{ "cite_N": [ "@cite_3" ], "mid": [ "1408639475" ], "abstract": [ "In this work we resolve the long-outstanding problem of how to effectively train recurrent neural networks (RNNs) on complex and difficult sequence modeling problems which may contain long-term data dependencies. Utilizing recent advances in the Hessian-free optimization approach (Martens, 2010), together with a novel damping scheme, we successfully train RNNs on two sets of challenging problems. First, a collection of pathological synthetic datasets which are known to be impossible for standard optimization approaches (due to their extremely long-term dependencies), and second, on three natural and highly complex real-world sequence datasets where we find that our method significantly outperforms the previous state-of-the-art method for training neural sequence models: the Long Short-term Memory approach of Hochreiter and Schmidhuber (1997). Additionally, we offer a new interpretation of the generalized Gauss-Newton matrix of Schraudolph (2002) which is used within the HF approach of Martens." ] }
1402.3511
2952276042
Sequence prediction and classification are ubiquitous and challenging problems in machine learning that can require identifying complex dependencies between temporally distant inputs. Recurrent Neural Networks (RNNs) have the ability, in theory, to cope with these temporal dependencies by virtue of the short-term memory implemented by their recurrent (feedback) connections. However, in practice they are difficult to train successfully when the long-term memory is required. This paper introduces a simple, yet powerful modification to the standard RNN architecture, the Clockwork RNN (CW-RNN), in which the hidden layer is partitioned into separate modules, each processing inputs at its own temporal granularity, making computations only at its prescribed clock rate. Rather than making the standard RNN models more complex, CW-RNN reduces the number of RNN parameters, improves the performance significantly in the tasks tested, and speeds up the network evaluation. The network is demonstrated in preliminary experiments involving two tasks: audio signal generation and TIMIT spoken word classification, where it outperforms both RNN and LSTM networks.
Training RNNs with Kalman filters @cite_18 has shown advantages in bridging long time lags as well, although this approach is computationally infeasible for larger networks.
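The flavor of EKF weight training can be shown on the simplest possible observation model. Here the model is linear in the weights (so the EKF reduces to recursive least squares); for an RNN, the vector x would instead be the Jacobian of the network output with respect to the weights. The O(n^2) covariance update per step is the cost that makes the approach infeasible for large networks.

```python
import numpy as np

def ekf_update(w, P, x, y, R=1e-6):
    """One EKF update of weight estimate w with covariance P,
    given a scalar observation y of the model h(w) = x @ w."""
    H = x                         # observation Jacobian (row vector)
    S = H @ P @ H + R             # innovation variance
    K = P @ H / S                 # Kalman gain
    w = w + K * (y - x @ w)       # correct weights by the innovation
    P = P - np.outer(K, H @ P)    # covariance update: (I - K H) P
    return w, P

# illustrative run on noiseless data from hidden true weights
rng = np.random.default_rng(3)
n = 4
w_true = rng.standard_normal(n)
w, P = np.zeros(n), np.eye(n) * 10.0
for _ in range(100):
    x = rng.standard_normal(n)
    w, P = ekf_update(w, P, x, x @ w_true)
```

With noiseless observations the estimate converges to the true weights after a handful of linearly independent inputs; the same machinery, applied to RNN Jacobians, is what gives the EKF its fast time-step convergence in the cited work.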
{ "cite_N": [ "@cite_18" ], "mid": [ "2169791674" ], "abstract": [ "The author describes some relationships between the extended Kalman filter (EKF) as applied to recurrent net learning and some simpler techniques that are more widely used. In particular, making certain simplifications to the EKF gives rise to an algorithm essentially identical to the real-time recurrent learning (RTRL) algorithm. Since the EKF involves adjusting unit activity in the network, it also provides a principled generalization of the teacher forcing technique. Preliminary simulation experiments on simple finite-state Boolean tasks indicated that the EKF can provide substantial speed-up in number of time steps required for training on such problems when compared with simpler online gradient algorithms. The computational requirements of the EKF are steep, but scale with network size at the same rate as RTRL. >" ] }
1402.3757
2951191789
This paper investigates the relation between three different notions of privacy: identifiability, differential privacy and mutual-information privacy. Under a unified privacy-distortion framework, where the distortion is defined to be the Hamming distance of the input and output databases, we establish some fundamental connections between these three privacy notions. Given a distortion level @math , define @math to be the smallest (best) identifiability level, and @math to be the smallest differential privacy level. We characterize @math and @math , and prove that @math for @math in some range, where @math is a constant depending on the distribution of the original database @math , and diminishes to zero when the distribution of @math is uniform. Furthermore, we show that identifiability and mutual-information privacy are consistent in the sense that given distortion level @math , the mechanism that optimizes the mutual-information privacy also minimizes the identifiability level.
Differential privacy, as an analytical foundation for privacy-preserving data analysis, was developed by a line of work (see, e.g., @cite_16 @cite_27 @cite_1 ). @cite_16 proposed the Laplace mechanism, which adds Laplace noise to each query result with noise amplitude proportional to the global sensitivity of the query function. @cite_4 later generalized the mechanism using the concept of local sensitivity. The notion of @math -differential privacy @cite_1 has also been proposed as a relaxation of @math -differential privacy.
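The Laplace mechanism itself is a one-liner: perturb the true answer with Laplace noise of scale sensitivity/epsilon. The database and query below are illustrative; a counting query has global sensitivity 1 because adding or removing one row changes the count by at most 1.

```python
import numpy as np

def laplace_mechanism(true_answer, sensitivity, epsilon, rng):
    """epsilon-differentially private answer via the Laplace mechanism:
    noise scale is the query's global sensitivity divided by epsilon."""
    return true_answer + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# illustrative counting query over a toy 0/1 database
rng = np.random.default_rng(4)
db = np.array([1, 0, 1, 1, 0, 1])
noisy_count = laplace_mechanism(db.sum(), sensitivity=1.0, epsilon=0.5, rng=rng)
```

Smaller epsilon means stronger privacy and larger noise; the released value is unbiased, so averaging many independent releases of the same query would recover the true count, which is why repeated querying erodes the guarantee.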
{ "cite_N": [ "@cite_27", "@cite_16", "@cite_1", "@cite_4" ], "mid": [ "2951713802", "2951011752", "2610910029", "2101771965" ], "abstract": [ "This is a paper about private data analysis, in which a trusted curator holding a confidential database responds to real vector-valued queries. A common approach to ensuring privacy for the database elements is to add appropriately generated random noise to the answers, releasing only these noisy responses. In this paper, we investigate various lower bounds on the noise required to maintain different kind of privacy guarantees.", "We present an approach to differentially private computation in which one does not scale up the magnitude of noise for challenging queries, but rather scales down the contributions of challenging records. While scaling down all records uniformly is equivalent to scaling up the noise magnitude, we show that scaling records non-uniformly can result in substantially higher accuracy by bypassing the worst-case requirements of differential privacy for the noise magnitudes. This paper details the data analysis platform wPINQ, which generalizes the Privacy Integrated Query (PINQ) to weighted datasets. Using a few simple operators (including a non-uniformly scaling Join operator) wPINQ can reproduce (and improve) several recent results on graph analysis and introduce new generalizations (e.g., counting triangles with given degrees). We also show how to integrate probabilistic inference techniques to synthesize datasets respecting more complicated (and less easily interpreted) measurements.", "In this work we provide efficient distributed protocols for generating shares of random noise, secure against malicious participants. The purpose of the noise generation is to create a distributed implementation of the privacy-preserving statistical databases described in recent papers [14,4,13]. 
In these databases, privacy is obtained by perturbing the true answer to a database query by the addition of a small amount of Gaussian or exponentially distributed random noise. The computational power of even a simple form of these databases, when the query is just of the form Σ i f(d i ), that is, the sum over all rows i in the database of a function f applied to the data in row i, has been demonstrated in [4]. A distributed implementation eliminates the need for a trusted database administrator. The results for noise generation are of independent interest. The generation of Gaussian noise introduces a technique for distributing shares of many unbiased coins with fewer executions of verifiable secret sharing than would be needed using previous approaches (reduced by a factor of n). The generation of exponentially distributed noise uses two shallow circuits: one for generating many arbitrarily but identically biased coins at an amortized cost of two unbiased random bits apiece, independent of the bias, and the other to combine bits of appropriate biases to obtain an exponential distribution.", "We introduce a new, generic framework for private data analysis.The goal of private data analysis is to release aggregate information about a data set while protecting the privacy of the individuals whose information the data set contains.Our framework allows one to release functions f of the data withinstance-based additive noise. That is, the noise magnitude is determined not only by the function we want to release, but also bythe database itself. One of the challenges is to ensure that the noise magnitude does not leak information about the database. To address that, we calibrate the noise magnitude to the smoothsensitivity of f on the database x --- a measure of variabilityof f in the neighborhood of the instance x. 
The new frameworkgreatly expands the applicability of output perturbation, a technique for protecting individuals' privacy by adding a smallamount of random noise to the released statistics. To our knowledge, this is the first formal analysis of the effect of instance-basednoise in the context of data privacy. Our framework raises many interesting algorithmic questions. Namely,to apply the framework one must compute or approximate the smoothsensitivity of f on x. We show how to do this efficiently for several different functions, including the median and the cost ofthe minimum spanning tree. We also give a generic procedure based on sampling that allows one to release f(x) accurately on manydatabases x. This procedure is applicable even when no efficient algorithm for approximating smooth sensitivity of f is known orwhen f is given as a black box. We illustrate the procedure by applying it to k-SED (k-means) clustering and learning mixtures of Gaussians." ] }
1402.3757
2951191789
This paper investigates the relation between three different notions of privacy: identifiability, differential privacy and mutual-information privacy. Under a unified privacy-distortion framework, where the distortion is defined to be the Hamming distance of the input and output databases, we establish some fundamental connections between these three privacy notions. Given a distortion level @math , define @math to be the smallest (best) identifiability level, and @math to be the smallest differential privacy level. We characterize @math and @math , and prove that @math for @math in some range, where @math is a constant depending on the distribution of the original database @math , and diminishes to zero when the distribution of @math is uniform. Furthermore, we show that identifiability and mutual-information privacy are consistent in the sense that given distortion level @math , the mechanism that optimizes the mutual-information privacy also minimizes the identifiability level.
The existing research on differential privacy can be largely classified into two categories: the interactive model, where randomness is added to the result of a query; and the non-interactive model, where randomness is added to the database before it is queried. Under the interactive model, a significant body of work has been devoted to the privacy--usefulness tradeoff, and differentially private mechanisms with accuracy guarantees on each query result have been developed (see, e.g., @cite_5 @cite_28 @cite_14 @cite_21 ). Since the interactive model allows only a limited number of queries to be answered before privacy is breached, researchers have also studied the non-interactive model, where synthetic databases or contingency tables with differential-privacy guarantees are generated. Mechanisms with distortion guarantees for a set of queries to be answered using the synthetic database have been developed (see, e.g., @cite_15 @cite_10 @cite_0 @cite_23 @cite_3 ).
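The limited-query nature of the interactive model can be made concrete with a toy curator that splits a total budget epsilon over at most k counting queries by basic sequential composition, then refuses further queries. The class, its budget-splitting rule, and the refusal behavior are illustrative assumptions, not a mechanism from the cited papers.

```python
import numpy as np

class InteractiveLaplaceCurator:
    """Toy interactive-model curator: answers up to k counting queries,
    spending epsilon_total / k of the privacy budget on each (basic
    sequential composition), and refuses queries once the budget is gone."""

    def __init__(self, db, epsilon_total, k, seed=0):
        self.db = list(db)
        self.eps_per_query = epsilon_total / k
        self.remaining = k
        self.rng = np.random.default_rng(seed)

    def count(self, predicate):
        if self.remaining == 0:
            raise RuntimeError("privacy budget exhausted")
        self.remaining -= 1
        true = sum(1 for row in self.db if predicate(row))
        # counting queries have sensitivity 1
        return true + self.rng.laplace(scale=1.0 / self.eps_per_query)
```

Because each answer's noise scale grows with k, answering many queries accurately under a fixed budget is impossible in this scheme, which is the motivation the text gives for the non-interactive model.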
{ "cite_N": [ "@cite_14", "@cite_28", "@cite_21", "@cite_3", "@cite_0", "@cite_23", "@cite_5", "@cite_15", "@cite_10" ], "mid": [ "1985310469", "2951691640", "2950207620", "2950884216", "2050164782", "1587575659", "2951448804", "2169570643", "" ], "abstract": [ "We consider statistical data analysis in the interactive setting. In this setting a trusted curator maintains a database of sensitive information about individual participants, and releases privacy-preserving answers to queries as they arrive. Our primary contribution is a new differentially private multiplicative weights mechanism for answering a large number of interactive counting (or linear) queries that arrive online and may be adaptively chosen. This is the first mechanism with worst-case accuracy guarantees that can answer large numbers of interactive queries and is efficient (in terms of the runtime's dependence on the data universe size). The error is asymptotically in its dependence on the number of participants, and depends only logarithmically on the number of queries being answered. The running time is nearly linear in the size of the data universe. As a further contribution, when we relax the utility requirement and require accuracy only for databases drawn from a rich class of databases, we obtain exponential improvements in running time. Even in this relaxed setting we continue to guarantee privacy for any input database. Only the utility requirement is relaxed. Specifically, we show that when the input database is drawn from a smooth distribution — a distribution that does not place too much weight on any single data item — accuracy remains as above, and the running time becomes poly-logarithmic in the data universe size. 
The main technical contributions are the application of multiplicative weights techniques to the differential privacy setting, a new privacy analysis for the interactive setting, and a technique for reducing data dimensionality for databases drawn from smooth distributions.", "We define a new interactive differentially private mechanism -- the median mechanism -- for answering arbitrary predicate queries that arrive online. Relative to fixed accuracy and privacy constraints, this mechanism can answer exponentially more queries than the previously best known interactive privacy mechanism (the Laplace mechanism, which independently perturbs each query result). Our guarantee is almost the best possible, even for non-interactive privacy mechanisms. Conceptually, the median mechanism is the first privacy mechanism capable of identifying and exploiting correlations among queries in an interactive setting. We also give an efficient implementation of the median mechanism, with running time polynomial in the number of queries, the database size, and the domain size. This efficient implementation guarantees privacy for all input databases, and accurate query results for almost all input databases. The dependence of the privacy on the number of queries in this mechanism improves over that of the best previously known efficient mechanism by a super-polynomial factor, even in the non-interactive setting.", "A range counting problem is specified by a set @math of size @math of points in @math , an integer weight @math associated to each point @math , and a range space @math . Given a query range @math , the target output is @math . Range counting for different range spaces is a central problem in Computational Geometry. We study @math -differentially private algorithms for range counting. Our main results are for the range space given by hyperplanes, that is, the halfspace counting problem. 
We present an @math -differentially private algorithm for halfspace counting in @math dimensions which achieves @math average squared error. This contrasts with the @math lower bound established by the classical result of Dinur and Nissim [PODS 2003] for arbitrary subset counting queries. We also show a matching lower bound on average squared error for any @math -differentially private algorithm for halfspace counting. Both bounds are obtained using discrepancy theory. For the lower bound, we use a modified discrepancy measure and bound approximation of @math -differentially private algorithms for range counting queries in terms of this discrepancy. We also relate the modified discrepancy measure to classical combinatorial discrepancy, which allows us to exploit known discrepancy lower bounds. This approach also yields a lower bound of @math for @math -differentially private orthogonal range counting in @math dimensions, the first known superconstant lower bound for this problem. For the upper bound, we use an approach inspired by partial coloring methods for proving discrepancy upper bounds, and obtain @math -differentially private algorithms for range counting with polynomially bounded shatter function range spaces.", "We present new theoretical results on differentially private data release useful with respect to any target class of counting queries, coupled with experimental results on a variety of real world data sets. Specifically, we study a simple combination of the multiplicative weights approach of [Hardt and Rothblum, 2010] with the exponential mechanism of [McSherry and Talwar, 2007]. The multiplicative weights framework allows us to maintain and improve a distribution approximating a given data set with respect to a set of counting queries. We use the exponential mechanism to select those queries most incorrectly tracked by the current distribution. 
Combing the two, we quickly approach a distribution that agrees with the data set on the given set of queries up to small error. The resulting algorithm and its analysis is simple, but nevertheless improves upon previous work in terms of both error and running time. We also empirically demonstrate the practicality of our approach on several data sets commonly used in the statistical community for contingency table release.", "Marginal (contingency) tables are the method of choice for government agencies releasing statistical summaries of categorical data. In this paper, we derive lower bounds on how much distortion (noise) is necessary in these tables to ensure the privacy of sensitive data. We extend a line of recent work on impossibility results for private data analysis [9, 12, 13, 15] to a natural and important class of functionalities. Consider a database consisting of n rows (one per individual), each row comprising d binary attributes. For any subset of T attributes of size |T|=k, the marginal table for T has 2k entries; each entry counts how many times in the database a particular setting of these attributes occurs. We provide lower bounds for releasing all d k k-attribute marginal tables under several different notions of privacy. (1) We give efficient polynomial time attacks which allow an adversary to reconstruct sensitive information given insufficiently perturbed marginal table releases. In particular, for a constant k, we obtain a tight bound of Ω(min √n, √dk-1) on the average distortion per entry for any mechanism that releases all k-attribute marginals while providing \"attribute\" privacy (a weak notion implied by most privacy definitions). (2) Our reconstruction attacks require a new lower bound on the least singular value of a random matrix with correlated rows. Let M(k) be a matrix with d k rows formed by taking all possible k-way entry-wise products of an underlying set of d random vectors from 0,1 n. 
For constant k, we show that the least singular value of M(k) is Ω(√dk) with high probability (the same asymptotic bound as for independent rows). (3) We obtain stronger lower bounds for marginal tables satisfying differential privacy. We give a lower bound of Ω(min √n, √ dk), which is tight for n Ω (dk). We extend our analysis to obtain stronger results for mechanisms that add instance-independent noise and weaker results when k is super-constant.", "Assuming the existence of one-way functions, we show that there is no polynomial-time, differentially private algorithm A that takes a database D ∈ ( 0, 1 d)n and outputs a \"synthetic database\" D all of whose two-way marginals are approximately equal to those of D. (A two-way marginal is the fraction of database rows x ∈ 0, 1 d with a given pair of values in a given pair of columns). This answers a question of (PODS '07), who gave an algorithm running in time poly(n, 2d). Our proof combines a construction of hard-to-sanitize databases based on digital signatures (by , STOC '09) with encodings based on probabilistically checkable proofs. We also present both negative and positive results for generating \"relaxed\" synthetic data, where the fraction of rows in D satisfying a predicate c are estimated by applying c to each row of D and aggregating the results in some way.", "A mechanism for releasing information about a statistical database with sensitive data must resolve a trade-off between utility and privacy. Privacy can be rigorously quantified using the framework of differential privacy , which requires that a mechanism's output distribution is nearly the same whether or not a given database row is included or excluded. The goal of this paper is strong and general utility guarantees, subject to differential privacy. 
We pursue mechanisms that guarantee near-optimal utility to every potential user, independent of its side information (modeled as a prior distribution over query results) and preferences (modeled via a loss function). Our main result is: for each fixed count query and differential privacy level, there is a geometric mechanism @math -- a discrete variant of the simple and well-studied Laplace mechanism -- that is simultaneously expected loss-minimizing for every possible user, subject to the differential privacy constraint. This is an extremely strong utility guarantee: every potential user @math , no matter what its side information and preferences, derives as much utility from @math as from interacting with a differentially private mechanism @math that is optimally tailored to @math .", "We demonstrate that, ignoring computational constraints, it is possible to release privacy-preserving databases that are useful for all queries over a discretized domain from any given concept class with polynomial VC-dimension. We show a new lower bound for releasing databases that are useful for halfspace queries over a continuous domain. Despite this, we give a privacy-preserving polynomial time algorithm that releases information useful for all halfspace queries, for a slightly relaxed definition of usefulness. Inspired by learning theory, we introduce a new notion of data privacy, which we call distributional privacy, and show that it is strictly stronger than the prevailing privacy notion, differential privacy.", "" ] }
1402.3757
2951191789
This paper investigates the relation between three different notions of privacy: identifiability, differential privacy and mutual-information privacy. Under a unified privacy-distortion framework, where the distortion is defined to be the Hamming distance of the input and output databases, we establish some fundamental connections between these three privacy notions. Given a distortion level @math , define @math to be the smallest (best) identifiability level, and @math to be the smallest differential privacy level. We characterize @math and @math , and prove that @math for @math in some range, where @math is a constant depending on the distribution of the original database @math , and diminishes to zero when the distribution of @math is uniform. Furthermore, we show that identifiability and mutual-information privacy are consistent in the sense that given distortion level @math , the mechanism that optimizes the mutual-information privacy also minimizes the identifiability level.
Arising from legal definitions, identifiability has also been considered as a notion of privacy. Lee and Clifton @cite_8 proposed differential identifiability, and @cite_13 proposed membership privacy. Mutual information as a measure of privacy leakage has been widely used in the literature (see, e.g., @cite_25 @cite_22 @cite_2 @cite_11 @cite_7 @cite_9 @cite_24 @cite_26 ), mostly in the context of quantitative information flow and anonymity systems.
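Mutual-information leakage has a direct computation when the joint distribution of secret input X and observed output Y is given as a table: I(X;Y) = sum p(x,y) log2 (p(x,y) / (p(x)p(y))). The helper below is a generic sketch of that formula, not a construction from any of the cited papers.

```python
import numpy as np

def mutual_information(p_xy):
    """Mutual information I(X;Y) in bits from a joint distribution table,
    used as a leakage measure: how much the output Y reveals about X."""
    p_xy = np.asarray(p_xy, dtype=float)
    p_x = p_xy.sum(axis=1, keepdims=True)   # marginal of X
    p_y = p_xy.sum(axis=0, keepdims=True)   # marginal of Y
    mask = p_xy > 0                          # 0 * log 0 = 0 convention
    return float((p_xy[mask] * np.log2(p_xy[mask] / (p_x @ p_y)[mask])).sum())
```

A perfectly leaky binary channel (Y = X) gives 1 bit; an output independent of the secret gives 0 bits, the ideal for an anonymity system.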
{ "cite_N": [ "@cite_26", "@cite_22", "@cite_7", "@cite_8", "@cite_9", "@cite_24", "@cite_2", "@cite_13", "@cite_25", "@cite_11" ], "mid": [ "", "2097151854", "1630019606", "", "2088517895", "2021238795", "1545192341", "", "", "2166701378" ], "abstract": [ "", "There is growing interest in quantitative theories of information flow in a variety of contexts, such as secure information flow, anonymity protocols, and side-channel analysis. Such theories offer an attractive way to relax the standard noninterference properties, letting us tolerate \"small\" leaks that are necessary in practice. The emerging consensus is that quantitative information flow should be founded on the concepts of Shannon entropy and mutual information . But a useful theory of quantitative information flow must provide appropriate security guarantees: if the theory says that an attack leaks x bits of secret information, then x should be useful in calculating bounds on the resulting threat. In this paper, we focus on the threat that an attack will allow the secret to be guessed correctly in one try. With respect to this threat model, we argue that the consensus definitions actually fail to give good security guarantees--the problem is that a random variable can have arbitrarily large Shannon entropy even if it is highly vulnerable to being guessed. We then explore an alternative foundation based on a concept of vulnerability (closely related to Bayes risk ) and which measures uncertainty using Renyi's min-entropy , rather than Shannon entropy.", "We propose a framework in which anonymity protocols are interpreted as particular kinds of channels, and the degree of anonymity provided by the protocol as the converse of the channel's capacity. We also investigate how the adversary can test the system to try to infer the user's identity, and we study how his probability of success depends on the characteristics of the channel. 
We then illustrate how various notions of anonymity can be expressed in this framework, and show the relation with some definitions of probabilistic anonymity in literature.", "", "We propose a general statistical inference framework to capture the privacy threat incurred by a user that releases data to a passive but curious adversary, given utility constraints. We show that applying this general framework to the setting where the adversary uses the self-information cost function naturally leads to a non-asymptotic information-theoretic approach for characterizing the best achievable privacy subject to utility constraints. Based on these results we introduce two privacy metrics, namely average information leakage and maximum information leakage. We prove that under both metrics the resulting design problem of finding the optimal mapping from the user's data to a privacy-preserving output can be cast as a modified rate-distortion problem which, in turn, can be formulated as a convex program. Finally, we compare our framework with differential privacy.", "We focus on the privacy-accuracy tradeoff encountered by a user who wishes to release some data to an analyst, that is correlated with his private data, in the hope of receiving some utility. We rely on a general statistical inference framework, under which data is distorted before its release, according to a probabilistic privacy mechanism designed under utility constraints. Using recent results on maximal correlation and hyper-contractivity of Markov processes, we first propose novel techniques to design utility-aware privacy mechanisms against inference attacks, when only partial statistical knowledge of the prior distribution linking private data and data to be released is available. We then propose optimal privacy mechanisms in the class of additive noise mechanisms, for both continuous and discrete released data, whose design requires only knowledge of second-order moments of the data to be released. 
We then turn our attention to multi-agent systems, where multiple data releases occur, and use tensorization results of maximal correlation to analyze how privacy guarantees compose after collusion or composition. Finally, we show the relationship between different existing privacy metrics, in particular divergence privacy, and differential privacy.", "Information theory provides a range of useful methods to analyse probability distributions and these techniques have been successfully applied to measure information flow and the loss of anonymity in secure systems. However, previous work has tended to assume that the exact probabilities of every action are known, or that the system is non-deterministic. In this paper, we show that measures of information leakage based on mutual information and capacity can be calculated, automatically, from trial runs of a system alone. We find a confidence interval for this estimate based on the number of possible inputs, observations and samples. We have developed a tool to automatically perform this analysis and we demonstrate our method by analysing a Mixminon anonymous remailer node.", "", "", "Measures for anonymity in systems must be on one hand simple and concise, and on the other hand reflect the realities of real systems. Such systems are heterogeneous, as are the ways they are used, the deployed anonymity measures, and finally the possible attack methods. Implementation quality and topologies of the anonymity measures must be considered as well. We therefore propose a new measure for the anonymity degree, which takes into account possible heterogeneity. We model the effectiveness of single mixes or of mix networks in terms of information leakage and measure it in terms of covert channel capacity. The relationship between the anonymity degree and information leakage is described, and an example is shown" ] }
1402.2941
1520835385
Palmprints are emerging as a new entity in multi-modal biometrics for human identification and verification. Multispectral palmprint images captured in the visible and infrared spectrum not only contain the wrinkles and ridge structure of a palm, but also the underlying pattern of veins, making them a highly discriminating biometric identifier. In this paper, we propose a feature encoding scheme for robust and highly accurate representation and matching of multispectral palmprints. To facilitate compact storage of the feature, we design a binary hash table structure that allows for efficient matching in large databases. Comprehensive experiments for both identification and verification scenarios are performed on two public datasets -- one captured with a contact-based sensor (PolyU dataset), and the other with a contact-free sensor (CASIA dataset). Recognition results in various experimental setups show that the proposed method consistently outperforms existing state-of-the-art methods. Error rates achieved by our method (0.003 on PolyU and 0.2 on CASIA) are the lowest reported in the literature on both datasets and clearly indicate the viability of palmprint as a reliable and promising biometric. All source code is publicly available.
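The binary hash-table matching mentioned in the abstract can be sketched in a few lines. Everything concrete below is an assumption for illustration (256-bit random codes, a 16-bit prefix as the bucket key, Hamming distance for the final comparison); the paper's actual feature length and table design may differ.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(1)

# Hypothetical gallery: one 256-bit binary palm code per identity.
gallery = rng.integers(0, 2, size=(100, 256), dtype=np.uint8)

def pack_bits(bits):
    """Pack a 0/1 vector into bytes so it can be used as a dict key."""
    return np.packbits(bits).tobytes()

# Bucket gallery codes by their first 16 bits: a probe is only compared
# against entries that share this prefix, which keeps matching sub-linear.
table = defaultdict(list)
for idx, code in enumerate(gallery):
    table[pack_bits(code[:16])].append(idx)

def hamming(a, b):
    return int(np.count_nonzero(a != b))

def lookup(probe):
    """Return the gallery index with the smallest Hamming distance among
    entries in the probe's prefix bucket (None if the bucket is empty)."""
    candidates = table.get(pack_bits(probe[:16]), [])
    if not candidates:
        return None
    return min(candidates, key=lambda i: hamming(gallery[i], probe))
```

A probe whose code differs from a stored one only outside the prefix still hashes to the same bucket and is matched there; a real system would typically probe several buckets or use multiple tables to tolerate prefix-bit errors.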
In the past decade, biometrics such as the iris @cite_34 , face @cite_15 @cite_28 and fingerprint @cite_5 have been investigated using multispectral images for improved accuracy. Recently, there has been increased interest in multispectral palmprint recognition @cite_40 @cite_14 @cite_22 @cite_8 @cite_43 @cite_36 @cite_29 @cite_42 @cite_18 @cite_25 @cite_37 @cite_3 @cite_20 @cite_32 . Palmprint recognition approaches can be categorized into line-like feature detectors, subspace learning methods and texture-based coding techniques @cite_24 . These three categories are not mutually exclusive and combinations of them are also possible. Line-detection-based approaches commonly extract palm lines using edge detectors. @cite_44 proposed a palmprint verification technique based on principal lines, in which the principal palm lines were extracted using a modified finite Radon transform and represented as a binary edge map. However, recognition based solely on palm lines proved insufficient due to their sparse nature and the possibility that different individuals have highly similar palm lines @cite_29 . Although line detection can extract palm lines effectively, it may not be equally useful for extracting palm veins, whose low contrast and broad structure make edge-based detection unreliable.
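To make the line-detection idea concrete, here is a minimal sketch in the spirit of oriented line filtering. This is not the modified finite Radon transform of @cite_44; the 7x7 kernels, the synthetic image and the 50% relative threshold are all assumptions for illustration.

```python
import numpy as np

def oriented_line_kernels(size=7):
    """Zero-mean kernels that respond to a bright line at 0, 45, 90, 135 degrees."""
    horiz = np.zeros((size, size)); horiz[size // 2, :] = 1.0
    diag = np.eye(size)
    kernels = [horiz, horiz.T, diag, np.fliplr(diag)]
    return [k - k.mean() for k in kernels]

def conv2_valid(img, k):
    """Plain 'valid' 2-D correlation (explicit loops are fine at sketch scale)."""
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def palm_line_map(img, kernels=None, rel_thresh=0.5):
    """Binary line map: max oriented response, thresholded relative to its peak.
    Palm lines are dark, so the image is inverted before filtering."""
    kernels = kernels or oriented_line_kernels()
    inv = img.max() - img
    resp = np.max(np.stack([conv2_valid(inv, k) for k in kernels]), axis=0)
    return resp > rel_thresh * resp.max()

# Synthetic palm-like patch: bright background, one dark horizontal "principal line".
img = np.ones((32, 32)); img[16, :] = 0.0
line_map = palm_line_map(img)
```

The sparse binary map this produces is exactly why line-only matching struggles: most of the palm contributes nothing to the representation.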
{ "cite_N": [ "@cite_22", "@cite_36", "@cite_29", "@cite_42", "@cite_3", "@cite_44", "@cite_43", "@cite_5", "@cite_15", "@cite_20", "@cite_18", "@cite_8", "@cite_37", "@cite_28", "@cite_32", "@cite_40", "@cite_34", "@cite_25", "@cite_14", "@cite_24" ], "mid": [ "2223075740", "2542237920", "2018340704", "2038568209", "2131801630", "2049306082", "2057093622", "2088554076", "2170946361", "2126562928", "2122684296", "2200980643", "2058600278", "2030270830", "2114110317", "1577382069", "2142358292", "2144025519", "2136783821", "1974900975" ], "abstract": [ "Palmprint is a reliable and accurate trait in the biometric family. To improve the accuracy of the existing palmprint systems, multispectral imaging is used for palmprint recognition. Although image level fusion and matching score level fusion were widely used for multispectral palmprint recognition, feature level fusion was paid less attention. This paper proposes a novel method for multispectral palmprint recognition by the feature level fusion technique. Experimental results on a large public multispectral database show that the proposed method could get better results than image level fusion, and it could get comparable results with matching score level fusion with less cost of storage.", "Multispectral imaging has been effectively used in the fields of computer vision and human verification to analyze information captured from several bands of the electromagnetic spectrum recently. In this paper, a novel Gabor-based palm print identification algorithm on multispectral images is proposed. Various features are extracted from convolving the Gabor kernels with each image sample. Logistic regression method is used to fuse obtained ranks of each spectral band authentication. The proposed algorithm is evaluated on PolyU database which contains palm print images from 400 individuals from each band. 
The obtained results show robustness of our algorithm in comparison with other works that presented in the literature.", "As a unique and reliable biometric characteristic, palmprint verification has achieved a great success. However, palmprint alone may not be able to meet the increasing demand of highly accurate and robust biometric systems. Recently, palmvein, which refers to the palm feature under near-infrared spectrum, has been attracting much research interest. Since palmprint and palmvein can be captured simultaneously by using specially designed devices, the joint use of palmprint and palmvein features can effectively increase the accuracy, robustness and anti-spoof capability of palm based biometric techniques. This paper presents an online personal verification system by fusing palmprint and palmvein inforA fast palmprint and palmvein recognition systemA fast palmprint and palmvein recognition system quality can vary much, a dynamic fusion scheme which is adaptive to image quality is developed. To increase the anti-spoof capability of the system, a liveness detection method based on the image property is proposed. A comprehensive database of palmprint-palmvein images was established to verify the proposed system, and the experimental results demonstrated that since palmprint and palmvein contain complementary information, much higher accuracy could be achieved by fusing them than using only one of them. In addition, the whole verification procedure can be completed in 1.2s, which implies that the system can work in real time.", "This paper presents an intra-modal fusion environment to integrate multiple raw palm images at low level. Fusion of palmprint instances is performed by wavelet transform and decomposition. To capture the palm characteristics, fused image is convolved with Gabor wavelet transform. The Gabor wavelet feature representation reflects very high dimensional space. 
To reduce the high dimensionality, ant colony optimization algorithm is applied to select relevant, distinctive and reduced feature set from Gabor responses. Finally, the reduced set of features is trained with support vector machines and accomplished user recognition tasks. For evaluation, CASIA multispectral palmprint database is used. The experimental results reveal that the system is found to be robust and encouraging while variations of classifiers are used. Also a comparative study is presented of the proposed system with a well-known method.", "Palmprint is widely used in personal identification for an accurate and robust recognition. To improve the existing palmprint systems, the proposed system, which is the first on-line multispectral palmprint recognition system ever designed before, uses multispectral capture device to sense images under different illumination, including red, green, blue and infrared. We adopt competitive coding scheme as matching algorithm, which performs well in on-line palmprint recognition. Wavelet-based image fusion method is used as data-level fusion strategy in our scheme. Fused verifications show better effort on motion blurred source images than single channel. Experimental results of fusion images are also useful references for future work on multispectral palmprint recognition.", "In this paper, we propose a novel palmprint verification approach based on principal lines. In feature extraction stage, the modified finite Radon transform is proposed, which can extract principal lines effectively and efficiently even in the case that the palmprint images contain many long and strong wrinkles. In matching stage, a matching algorithm based on pixel-to-area comparison is devised to calculate the similarity between two palmprints, which has shown good robustness for slight rotations and translations of palmprints. 
The experimental results for the verification on Hong Kong Polytechnic University Palmprint Database show that the discriminability of principal lines is also strong.", "This paper presents a novel technique to identify palmprints of individuals for various purposes including security, access control, forensic applications, identification, etc. Palmprints, known to be more robust as biometrics are being increasingly used in these areas. In this paper the identification of the palmprint of an individual has been done using a transform domain technique where a new transform using the Kronecker product of the existing transforms (DCT and Walsh) is developed and applied to multi-spectral palmprint images. Energy compaction technique in transform domain is applied to reduce the size of feature vector. The properties of both DCT and Walsh transforms are incorporated in the new transform which gives better results than when both the transforms are used individually. The GAR values have been computed for different values of energy considered. The maximum value of GAR obtained is 98.53 for an energy threshold of 99.99 on palmprints under blue illumination. The FAR is found to be 4 .", "We describe the design and development of a prototype whole-hand imaging system. The sensor is based on multispectral technology that is able to provide hand shape, fingerprints and palmprint modalities of a user's hand by a single user interaction with the sensor. A clear advantage of our system over other unimodal sensors for these modalities include: (i) faster acquisition time, (ii) better quality images, and (iii) ability to provide spoof detection. Initial results on a medium-size database show good recognition performance based on individual modalities as well as after fusing multiple fingers and fusing finger and palm. 
The prototype is being refined in order to improve performance even further.", "This correspondence paper studies face recognition by using hyperspectral imagery in the visible light bands. The spectral measurements over the visible spectrum have different discriminatory information for the task of face identification, and it is found that the absorption bands related to hemoglobin are more discriminative than the other bands. Therefore, feature band selection based on the physical absorption characteristics of face skin is performed, and two feature band subsets are selected. Then, three methods are proposed for hyperspectral face recognition, including whole band (2D)2PCA, single band (2D)2PCA with decision level fusion, and band subset fusion-based (2D)2PCA. A simple yet efficient decision level fusion strategy is also proposed for the latter two methods. To testify the proposed techniques, a hyperspectral face database was established which contains 25 subjects and has 33 bands over the visible light spectrum (0.4-0.72 μm). The experimental results demonstrated that hyperspectral face recognition with the selected feature bands outperforms that by using a single band, using the whole bands, or, interestingly, using the conventional RGB color bands.", "In this paper, we propose to improve the verification performance of a contract-free palmprint recognition system by means of feature- level image registration and pixel-level fusion of multi-spectral palm images. Our method involves image acquisition via a dedicated device under contact-free and multi-spectral environment, preprocessing to locate region of interest (ROI) from each individual hand images, feature-level registration to align ROIs from different spectral images in one sequence and fusion to combine images from multiple spectra. The advantages of the proposed method include better hygiene and higher verification performance. 
Given a database composed of images from 330 hands, two out of four state of the art fusion strategies offer significant performance gain and the best equal error rate (EER) is 0.5 .", "Palm print is a unique and reliable biometric characteristic with high usability. Many palm print recognition algorithms and systems have been successfully developed in the past decades. Most of the previous works use the white light sources for illumination. Recently, it has been attracting much research attention on developing new biometric systems with both high accuracy and high anti-spoof capability. Multispectral palm print imaging and recognition can be a potential solution to such systems because it can acquire more discriminative information for personal identity recognition. One crucial step in developing such systems is how to determine the minimal number of spectral bands and select the most representative bands to build the multispectral imaging system. This paper presents preliminary studies on feature band selection by analyzing hyper spectral palm print data (420nm 1100nm). Our experiments showed that 2 spectral bands at 700nm and 960nm could provide most discriminate information of palm print. This finding could be used as the guidance for designing multispectral palm print systems in the future.", "Personal identification problem has been a major field of research in recent years. Biometrics-based technologies that exploit fingerprints, iris, face, voice and palmprints, have been in the center of attention to solve this problem. Palmprints can be used instead of fingerprints that have been of the earliest of these biometrics technologies. A palm is covered with the same skin as the fingertips but has a larger surface, giving us more information than the fingertips. The major features of the palm are palm-lines, including principal lines, wrinkles and ridges. Using these lines is one of the most popular approaches towards solving the palmprint recognition problem. 
Another robust feature is the wavelet energy of palms. In this paper we used a hybrid feature which combines both of these features. Moreover, multispectral analysis is applied to improve the performance of the system. At the end, minimum distance classifier is used to match test images with one of the training samples. The proposed algorithm has been tested on a well-known multispectral palmprint dataset and achieved an average accuracy of 98.8 .", "Abstract-Palmprint has been widely used in personal recognition. To improve the performance of the existing palmprint recognition system, multispectral palmprint recognition system has been proposed and designed. This paper presents a method of representing the multispectral palmprint images by quaternion and extracting features using the quaternion principal components analysis (QPCA) to achieve better performance in recognition. A data acquisition device is employed to capture the palmprint images under Red, Green, Blue and near-infrared (NIR) illuminations in less than 1s. QPCA is used to extract features of multispectral palmprint images. The dissimilarity between two palmprint images is measured by the Euclidean distance. The experiment shows that a higher recognition rate can be achieved when we use QPCA. Given 3000 testing samples from 500 palms, the best GAR is 98.13 .", "Hyperspectral cameras provide useful discriminants for human face recognition that cannot be obtained by other imaging methods. We examine the utility of using near-infrared hyperspectral images for the recognition of faces over a database of 200 subjects. The hyperspectral images were collected using a CCD camera equipped with a liquid crystal tunable filter to provide 31 bands over the near-infrared (0.7 spl mu m-1.0 spl mu m). Spectral measurements over the near-infrared allow the sensing of subsurface tissue structure which is significantly different from person to person, but relatively stable over time. 
The local spectral properties of human tissue are nearly invariant to face orientation and expression which allows hyperspectral discriminants to be used for recognition over a large range of poses and expressions. We describe a face recognition algorithm that exploits spectral measurements for multiple facial tissue types. We demonstrate experimentally that this algorithm can be used to recognize faces over time in the presence of changes in facial pose and expression.", "Hand biometrics, including fingerprint, palmprint, hand geometry and hand vein pattern, have obtained extensive attention in recent years. Physiologically, skin is a complex multi-layered tissue consisting of various types of components. Optical research suggests that different components appear when the skin is illuminated with light sources of different wavelengths. This motivates us to extend the capability of camera by integrating information from multispectral palm images to a composite representation that conveys richer and denser pattern for recognition. Besides, usability and security of the whole system might be boosted at the same time. In this paper, comparative study of several pixel level multispectral palm image fusion approaches is conducted and several well-established criteria are utilized as objective fusion quality evaluation measure. Among others, Curvelet transform is found to perform best in preserving discriminative patterns from multispectral palm images.", "Ensuring the security of individuals is becoming an increasingly important problem in a variety of applications. Biometrics technology that relies on the physical and or behavior human characteristics is capable of providing the necessary security over the standard forms of identification. Palmprint recognition is a relatively new one. Almost all the current palmprint-recognition systems are mainly based on image captured under visible light. 
However, multispectral and hyperspectral imaging have been recently used to improve the performance of palmprint identification. In this paper, the MultiSpectral Palmprint (MSP) and HyperSpectral Palmprint (HSP) are integrated in order to construct an efficient multimodal biometric system. The observation vector is based on Principal Components Analysis (PCA). Subsequently, HiddenMarkov Model (HMM) is used for modeling this vector. The proposed scheme is tested and evaluated using 350 users. Our experimental results show the effectiveness and reliability of the proposed system, which brings high identification accuracy rate.", "This paper explores the possibility of using multispectral iris information to enhance the recognition performance of an iris biometric system. Commercial iris recognition systems typically sense the iridal reflection pertaining to the near-infrared (IR) range of the electromagnetic spectrum. This work examines the iris information represented in the visible and IR portion of the spectrum. It is hypothesized that, based on the color of the eye, different components of the iris are highlighted at multiple wavelengths. To this end, an acquisition procedure for obtaining co-registered multispectral iris images associated with the IR, Red, Green and Blue wavelengths of the electromagnetic spectrum, is first discussed. The components of the iris that are revealed in multiple spectral channels wavelengths based on the color of the eye are studied. An adaptive histogram equalization scheme is invoked to enhance the iris structure. The performance of iris recognition across multiple wavelengths is next evaluated. Experiments indicate the potential of using multispectral information to enhance the performance of iris recognition systems.", "Palmprint is a unique and reliable biometric characteristic with high usability. 
With the increasing demand of highly accurate and robust palmprint authentication system, multispectral imaging has been employed to acquire more discriminative information and increase the antispoof capability of palmprint. This paper presents an online multispectral palmprint system that could meet the requirement of real-time application. A data acquisition device is designed to capture the palmprint images under Blue, Green, Red, and near-infrared (NIR) illuminations in less than 1 s. A large multispectral palmprint database is then established to investigate the recognition performance of each spectral band. Our experimental results show that the red channel achieves the best result, whereas the Blue and Green channels have comparable performance but are slightly inferior to the NIR channel. After analyzing the extracted features from different bands, we propose a score level fusion scheme to integrate the multispectral information. The palmprint verification experiments demonstrated the superiority of multispectral fusion to each single spectrum, which results in both higher verification accuracy and antispoofing capability.", "This paper presents an approach for the personal authentication using rank-level fusion of multispectral palmprints, instead of using multiple biometric modalities and multiple matchers. The rank level fusion involving the non linear combination of hyperbolic tangent functions gives the best recognition rate for the Rank 1 obtained from two types of features, viz., sigmoid and fuzzy. The results of using rank level fusion on the publicly available multispectral palmprint database show the significant improvement in the recognition rate as compared to the individual spectral bands. Recognition rate of 99.4 from sigmoid features and that of 99.2 from fuzzy features based on Rank 1 is the outcome of the hyperbolic tangent nonlinearity.", "Palmprint recognition has been investigated over 10 years. 
During this period, many different problems related to palmprint recognition have been addressed. This paper provides an overview of current palmprint research, describing in particular capture devices, preprocessing, verification algorithms, palmprint-related fusion, algorithms especially designed for real-time palmprint identification in large databases and measures for protecting palmprint systems and users' privacy. Finally, some suggestion is offered." ] }
1402.2941
1520835385
Palmprints are emerging as a new entity in multi-modal biometrics for human identification and verification. Multispectral palmprint images captured in the visible and infrared spectrum not only contain the wrinkles and ridge structure of a palm, but also the underlying pattern of veins, making them a highly discriminating biometric identifier. In this paper, we propose a feature encoding scheme for robust and highly accurate representation and matching of multispectral palmprints. To facilitate compact storage of the feature, we design a binary hash table structure that allows for efficient matching in large databases. Comprehensive experiments for both identification and verification scenarios are performed on two public datasets -- one captured with a contact-based sensor (PolyU dataset), and the other with a contact-free sensor (CASIA dataset). Recognition results in various experimental setups show that the proposed method consistently outperforms existing state-of-the-art methods. Error rates achieved by our method (0.003 on PolyU and 0.2 on CASIA) are the lowest reported in the literature on both datasets and clearly indicate the viability of palmprint as a reliable and promising biometric. All source code is publicly available.
A subspace projection captures the global characteristics of a palm by projecting onto the most varying (in the case of PCA) or the most discriminative (in the case of LDA) dimensions. Subspace projection methods include eigenpalm @cite_33 , which globally projects palm images onto a PCA space, and fisherpalm wu2003fisherpalms , which projects onto an LDA space. However, finer local details are not well preserved or modeled by such subspace projections. @cite_17 fused palmprint and palmvein images and proposed the Laplacianpalm representation. Unlike the eigenpalm @cite_33 or the fisherpalm wu2003fisherpalms , the Laplacianpalm representation attempts to preserve local characteristics as well while projecting onto a subspace. @cite_37 represented multispectral palmprint images as quaternions and applied quaternion PCA to extract features. A nearest-neighbor classifier was used for recognition on the quaternion vectors. The quaternion model did not prove useful for representing multispectral palm images and showed low recognition accuracy compared to state-of-the-art techniques. The main reason is that subspaces learned from misaligned palms are unlikely to generate an accurate representation of each identity.
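A minimal eigenpalm-style sketch of the subspace idea follows. Random vectors stand in for flattened palm images, and the dimensions and 10-component cut-off are arbitrary choices for illustration, not values from @cite_33.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 256))      # 40 "palm images", each flattened to 256-d

mean = X.mean(axis=0)
Xc = X - mean                       # centre the data, as PCA requires

# SVD-based PCA: rows of Vt are the principal directions ("eigenpalms"),
# ordered by decreasing variance captured.
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 10
eigenpalms = Vt[:k]                 # (k, 256) orthonormal basis

features = Xc @ eigenpalms.T        # each palm becomes a compact k-d vector

def nearest(feat, gallery_feats):
    """Euclidean nearest-neighbour matching in the projected space."""
    return int(np.argmin(np.linalg.norm(gallery_feats - feat, axis=1)))
```

An LDA-based projection (fisherpalm) follows the same pattern with a discriminative objective; in both cases the k-dimensional feature discards fine local structure, which is the weakness the paragraph above describes.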
{ "cite_N": [ "@cite_37", "@cite_33", "@cite_17" ], "mid": [ "2058600278", "2130283969", "2137233321" ], "abstract": [ "Abstract-Palmprint has been widely used in personal recognition. To improve the performance of the existing palmprint recognition system, multispectral palmprint recognition system has been proposed and designed. This paper presents a method of representing the multispectral palmprint images by quaternion and extracting features using the quaternion principal components analysis (QPCA) to achieve better performance in recognition. A data acquisition device is employed to capture the palmprint images under Red, Green, Blue and near-infrared (NIR) illuminations in less than 1s. QPCA is used to extract features of multispectral palmprint images. The dissimilarity between two palmprint images is measured by the Euclidean distance. The experiment shows that a higher recognition rate can be achieved when we use QPCA. Given 3000 testing samples from 500 palms, the best GAR is 98.13 .", "In this paper, we propose a palmprint recognition method based on eigenspace technology. By means of the Karhunen-Loeve transform, the original palmprint images are transformed into a small set of feature space, called \"eigenpalms\", which are the eigenvectors of the training set and can represent the principle components of the palmprints quite well. Then, the eigenpalm features are extracted by projecting a new palmprint image into the subspace spanned by the \"eigenpalms\", and applied to palmprint recognition with a Euclidean distance classifier. Experimental results illustrate the effectiveness of our method in terms of the recognition rate.", "Unimodal analysis of palmprint and palm vein has been investigated for person recognition. However, they are not robust to noise and spoof attacks. In this paper, we present a multimodal personal identification system using palmprint and palm vein images with fusion applied at the image level. 
The palmprint and palm vein images are fused by a novel integrated line-preserving and contrast-enhancing fusion method. Based on our proposed fusion rule, the modified multiscale edges of palmprint and palm vein images are combined as well as the image contrast and the interaction points (IPs) of the palmprints and vein lines are enhanced. The IPs are novel features obtained in our fused images. A novel palm representation, called \"Laplacianpalm\" feature, is extracted from the fused images by Locality Preserving Projections (LPP). We compare the recognition performance using the unimodal and the proposed fused images. We also compared the proposed \"Laplacianpalm \" approach with the Fisherpalm and Eigenpalm on a large dataset. Experimental results show that the proposed multimodal approach provides a better representation and achieves lower error rates in palm recognition." ] }
1402.2941
1520835385
Palmprints are emerging as a new entity in multi-modal biometrics for human identification and verification. Multispectral palmprint images captured in the visible and infrared spectrum not only contain the wrinkles and ridge structure of a palm, but also the underlying pattern of veins, making them a highly discriminating biometric identifier. In this paper, we propose a feature encoding scheme for robust and highly accurate representation and matching of multispectral palmprints. To facilitate compact storage of the feature, we design a binary hash table structure that allows for efficient matching in large databases. Comprehensive experiments for both identification and verification scenarios are performed on two public datasets -- one captured with a contact-based sensor (PolyU dataset), and the other with a contact-free sensor (CASIA dataset). Recognition results in various experimental setups show that the proposed method consistently outperforms existing state-of-the-art methods. Error rates achieved by our method (0.003 on PolyU and 0.2 on CASIA) are the lowest reported in the literature on both datasets and clearly indicate the viability of palmprint as a reliable and promising biometric. All source code is publicly available.
Fusion of spectral bands has been demonstrated at the feature level. @cite_22 used feature-level band fusion for multispectral palmprints. Specifically, a modification of CompCode was combined with the original CompCode, and features from the least correlated pair of bands were fused. The results indicated an improvement over image-level fusion and were comparable to match-score-level fusion. Zhou and Kumar @cite_11 encoded palm vein features by enhancing the vascular patterns and using Hessian phase information. They showed that a combination of various feature representations can achieve improved performance on palmvein images. @cite_14 investigated fuzzy and sigmoid features for multispectral palmprints and a rank-level fusion of scores using various strategies. It was observed that a nonlinear fusion function at the rank level was effective for improving recognition performance. @cite_36 used Gabor kernels for feature extraction from multispectral palmprints and a rank-level fusion scheme to combine the outputs of individual band comparisons. One drawback of rank-level fusion is that it assigns fixed weights to the rank outputs of the spectral bands, which results in suboptimal performance.
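The fixed-weight limitation is easy to see in a small sketch of rank-level fusion. The equal default weights and the toy scores below are assumptions for illustration; @cite_14 and @cite_36 use more elaborate combination rules (hyperbolic-tangent nonlinearities, logistic regression).

```python
import numpy as np

def scores_to_ranks(scores):
    """Convert similarity scores to ranks: 0 = best match."""
    order = np.argsort(-scores)
    ranks = np.empty_like(order)
    ranks[order] = np.arange(scores.size)
    return ranks

def rank_level_fusion(scores_per_band, weights=None):
    """Fuse per-band similarity scores by weighted rank sum.
    scores_per_band: (n_bands, n_gallery) array for a single probe.
    The weights are fixed per band -- exactly the drawback noted above."""
    n_bands = scores_per_band.shape[0]
    if weights is None:
        weights = np.full(n_bands, 1.0 / n_bands)
    fused = sum(w * scores_to_ranks(s) for w, s in zip(weights, scores_per_band))
    return int(np.argmin(fused))    # identity with the best combined rank
```

Even when no single band ranks an identity first everywhere, the fused rank can still pick the identity that is consistently near the top; but because the per-band weights never adapt to image quality, a degraded band drags the decision down.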
{ "cite_N": [ "@cite_36", "@cite_14", "@cite_22", "@cite_11" ], "mid": [ "2542237920", "2136783821", "2223075740", "2021442396" ], "abstract": [ "Multispectral imaging has been effectively used in the fields of computer vision and human verification to analyze information captured from several bands of the electromagnetic spectrum recently. In this paper, a novel Gabor-based palm print identification algorithm on multispectral images is proposed. Various features are extracted from convolving the Gabor kernels with each image sample. Logistic regression method is used to fuse obtained ranks of each spectral band authentication. The proposed algorithm is evaluated on PolyU database which contains palm print images from 400 individuals from each band. The obtained results show robustness of our algorithm in comparison with other works that presented in the literature.", "This paper presents an approach for the personal authentication using rank-level fusion of multispectral palmprints, instead of using multiple biometric modalities and multiple matchers. The rank level fusion involving the non linear combination of hyperbolic tangent functions gives the best recognition rate for the Rank 1 obtained from two types of features, viz., sigmoid and fuzzy. The results of using rank level fusion on the publicly available multispectral palmprint database show the significant improvement in the recognition rate as compared to the individual spectral bands. Recognition rate of 99.4 from sigmoid features and that of 99.2 from fuzzy features based on Rank 1 is the outcome of the hyperbolic tangent nonlinearity.", "Palmprint is a reliable and accurate trait in the biometric family. To improve the accuracy of the existing palmprint systems, multispectral imaging is used for palmprint recognition. Although image level fusion and matching score level fusion were widely used for multispectral palmprint recognition, feature level fusion was paid less attention. This paper proposes a novel method for multispectral palmprint recognition by the feature level fusion technique. Experimental results on a large public multispectral database show that the proposed method could get better results than image level fusion, and it could get comparable results with matching score level fusion with less cost of storage.", "This paper investigates some promising approaches for the automated personal identification using contactless palmvein imaging. We firstly present two new palmvein representations, using Hessian phase information from the enhanced vascular patterns in the normalized images and secondly from the orientation encoding of palmvein line-like patterns using localized Radon transform. The comparison and combination of these two palmvein feature representations, along with others in the palmvein literature, is presented for the contactless palmvein identification. We also evaluate the performance from various palmvein representations when the numbers of training samples are varied from minimum. Our experimental results suggest that the proposed representation using localized Radon transform achieves better or similar performance than other alternatives while offering significant computational advantage for online applications. The proposed approach is rigorously evaluated on the CASIA database (100 subjects) and achieves the best equal error rate of 0.28 . Finally, we propose a score level combination strategy to combine the multiple palmvein representations. We achieve consistent improvement in the performance, both from the authentication and recognition experiments, which illustrates the robustness of the proposed schemes." ] }
1402.2941
1520835385
Palmprints are emerging as a new entity in multi-modal biometrics for human identification and verification. Multispectral palmprint images captured in the visible and infrared spectrum not only contain the wrinkles and ridge structure of a palm, but also the underlying pattern of veins; making them a highly discriminating biometric identifier. In this paper, we propose a feature encoding scheme for robust and highly accurate representation and matching of multispectral palmprints. To facilitate compact storage of the feature, we design a binary hash table structure that allows for efficient matching in large databases. Comprehensive experiments for both identification and verification scenarios are performed on two public datasets -- one captured with a contact-based sensor (PolyU dataset), and the other with a contact-free sensor (CASIA dataset). Recognition results in various experimental setups show that the proposed method consistently outperforms existing state-of-the-art methods. Error rates achieved by our method (0.003 on PolyU and 0.2 on CASIA) are the lowest reported in literature on both dataset and clearly indicate the viability of palmprint as a reliable and promising biometric. All source codes are publicly available.
@cite_25 compared palmprint matching using individual bands and reported that the red band performed better than the near-infrared, blue, and green bands. A score-level fusion of these bands achieved superior performance compared to any single band. Another joint palm line and palm vein approach for multispectral palmprint recognition was proposed by @cite_29 . They designed separate feature extraction methodologies for palm lines and palm veins and later used score-level fusion to compute the final match. The approach yielded promising results, albeit at the cost of increased complexity. A comparison of different fusion strategies indicates that score-level fusion of multispectral bands is the most promising and effective, compared to data-, feature-, or rank-level fusion.
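As a rough illustration of the score-level fusion these works favor, the following sketch combines per-band match scores with a weighted sum. It is a minimal, assumption-laden example (the band names and score values are made up), not the fusion rule of @cite_25 or @cite_29 .

```python
def score_level_fusion(band_scores, weights=None):
    """Combine per-band match scores (assumed already normalized to
    [0, 1]) with a weighted sum; equal weights if none are given."""
    bands = list(band_scores)
    if weights is None:
        weights = {b: 1.0 / len(bands) for b in bands}
    return sum(weights[b] * band_scores[b] for b in bands)

# Per-band similarity scores for one probe/gallery pair (illustrative values).
per_band = {"blue": 0.55, "green": 0.60, "red": 0.82, "nir": 0.78}
fused = score_level_fusion(per_band)
print(round(fused, 4))  # -> 0.6875, the average of the four band scores
```

Unlike rank-level fusion, the fused quantity here is a continuous score, so band weights can in principle be tuned, or made adaptive to per-query image quality as in @cite_29 .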
{ "cite_N": [ "@cite_29", "@cite_25" ], "mid": [ "2018340704", "2144025519" ], "abstract": [ "As a unique and reliable biometric characteristic, palmprint verification has achieved a great success. However, palmprint alone may not be able to meet the increasing demand of highly accurate and robust biometric systems. Recently, palmvein, which refers to the palm feature under near-infrared spectrum, has been attracting much research interest. Since palmprint and palmvein can be captured simultaneously by using specially designed devices, the joint use of palmprint and palmvein features can effectively increase the accuracy, robustness and anti-spoof capability of palm based biometric techniques. This paper presents an online personal verification system by fusing palmprint and palmvein information. Since image quality can vary much, a dynamic fusion scheme which is adaptive to image quality is developed. To increase the anti-spoof capability of the system, a liveness detection method based on the image property is proposed. A comprehensive database of palmprint-palmvein images was established to verify the proposed system, and the experimental results demonstrated that since palmprint and palmvein contain complementary information, much higher accuracy could be achieved by fusing them than using only one of them. In addition, the whole verification procedure can be completed in 1.2s, which implies that the system can work in real time.", "Palmprint is a unique and reliable biometric characteristic with high usability. With the increasing demand of highly accurate and robust palmprint authentication system, multispectral imaging has been employed to acquire more discriminative information and increase the antispoof capability of palmprint. This paper presents an online multispectral palmprint system that could meet the requirement of real-time application. A data acquisition device is designed to capture the palmprint images under Blue, Green, Red, and near-infrared (NIR) illuminations in less than 1 s. A large multispectral palmprint database is then established to investigate the recognition performance of each spectral band. Our experimental results show that the red channel achieves the best result, whereas the Blue and Green channels have comparable performance but are slightly inferior to the NIR channel. After analyzing the extracted features from different bands, we propose a score level fusion scheme to integrate the multispectral information. The palmprint verification experiments demonstrated the superiority of multispectral fusion to each single spectrum, which results in both higher verification accuracy and antispoofing capability." ] }
1402.2941
1520835385
Palmprints are emerging as a new entity in multi-modal biometrics for human identification and verification. Multispectral palmprint images captured in the visible and infrared spectrum not only contain the wrinkles and ridge structure of a palm, but also the underlying pattern of veins; making them a highly discriminating biometric identifier. In this paper, we propose a feature encoding scheme for robust and highly accurate representation and matching of multispectral palmprints. To facilitate compact storage of the feature, we design a binary hash table structure that allows for efficient matching in large databases. Comprehensive experiments for both identification and verification scenarios are performed on two public datasets -- one captured with a contact-based sensor (PolyU dataset), and the other with a contact-free sensor (CASIA dataset). Recognition results in various experimental setups show that the proposed method consistently outperforms existing state-of-the-art methods. Error rates achieved by our method (0.003 on PolyU and 0.2 on CASIA) are the lowest reported in literature on both dataset and clearly indicate the viability of palmprint as a reliable and promising biometric. All source codes are publicly available.
It is worth mentioning that a simple extension of existing palmprint representations to multispectral palmprints may not fully preserve the features that appear in different bands. For example, a single representation may not be able to extract equally useful features from both lines and veins @cite_29 . Comparative studies show that local orientation features are the recommended choice for palmprint feature extraction @cite_21 .
{ "cite_N": [ "@cite_29", "@cite_21" ], "mid": [ "2018340704", "2034114524" ], "abstract": [ "As a unique and reliable biometric characteristic, palmprint verification has achieved a great success. However, palmprint alone may not be able to meet the increasing demand of highly accurate and robust biometric systems. Recently, palmvein, which refers to the palm feature under near-infrared spectrum, has been attracting much research interest. Since palmprint and palmvein can be captured simultaneously by using specially designed devices, the joint use of palmprint and palmvein features can effectively increase the accuracy, robustness and anti-spoof capability of palm based biometric techniques. This paper presents an online personal verification system by fusing palmprint and palmvein information. Since image quality can vary much, a dynamic fusion scheme which is adaptive to image quality is developed. To increase the anti-spoof capability of the system, a liveness detection method based on the image property is proposed. A comprehensive database of palmprint-palmvein images was established to verify the proposed system, and the experimental results demonstrated that since palmprint and palmvein contain complementary information, much higher accuracy could be achieved by fusing them than using only one of them. In addition, the whole verification procedure can be completed in 1.2s, which implies that the system can work in real time.", "Palmprint images contain rich unique features for reliable human identification, which makes it a very competitive topic in biometric research. A great many different low resolution palmprint recognition algorithms have been developed, which can be roughly grouped into three categories: holistic-based, feature-based, and hybrid methods. The purpose of this article is to provide an updated survey of palmprint recognition methods, and present a comparative study to evaluate the performance of the state-of-the-art palmprint recognition methods. Using the Hong Kong Polytechnic University (HKPU) palmprint database (version 2), we compare the recognition performance of a number of holistic-based (Fisherpalms and DCT+LDA) and local feature-based (competitive code, ordinal code, robust line orientation code, derivative of Gaussian code, and wide line detector) methods, and then investigate the error correlation and score-level fusion performance of different algorithms. After discussing the achievements and limitations of current palmprint recognition algorithms, we conclude with providing several potential research directions for the future." ] }
1402.2941
1520835385
Palmprints are emerging as a new entity in multi-modal biometrics for human identification and verification. Multispectral palmprint images captured in the visible and infrared spectrum not only contain the wrinkles and ridge structure of a palm, but also the underlying pattern of veins; making them a highly discriminating biometric identifier. In this paper, we propose a feature encoding scheme for robust and highly accurate representation and matching of multispectral palmprints. To facilitate compact storage of the feature, we design a binary hash table structure that allows for efficient matching in large databases. Comprehensive experiments for both identification and verification scenarios are performed on two public datasets -- one captured with a contact-based sensor (PolyU dataset), and the other with a contact-free sensor (CASIA dataset). Recognition results in various experimental setups show that the proposed method consistently outperforms existing state-of-the-art methods. Error rates achieved by our method (0.003 on PolyU and 0.2 on CASIA) are the lowest reported in literature on both dataset and clearly indicate the viability of palmprint as a reliable and promising biometric. All source codes are publicly available.
In this paper, we propose a novel orientation and binary hash table based encoding for robust and efficient multispectral palmprint recognition. The representation is derived from the coefficients of the Nonsubsampled Contourlet Transform (NSCT), which has the advantage of robust directional frequency localization. Unlike existing orientation codes, which apply a directional filter bank directly to a palm image, we propose a two-stage filtering approach to extract only the robust directional features. We develop a unified methodology for the extraction of multispectral (line and vein) features. The feature is binarized and encoded into an efficient hash table structure that only requires indexing and summation operations for simultaneous one-to-many matching with an embedded score-level fusion of multiple bands. This paper is an extension of our earlier work @cite_7 . Here we give more detailed descriptions and introduce two variants of the proposed matching technique to improve robustness to inter-band misalignments. We perform a more thorough analysis of the pyramidal directional filter pair combination and the effect of varying other parameters. We also implement three existing state-of-the-art orientation features @cite_26 @cite_9 @cite_31 and compare their performances to the proposed feature in various experimental settings, including a palmprint identification scenario.
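The hash table matching idea, reduced to indexing and summation, can be sketched as follows. This is a simplified illustration of the general mechanism (per-pixel orientation codes indexed into a table, with a running sum over gallery entries), not the actual implementation; the array shapes, the orientation count, and the random codes are assumptions.

```python
import numpy as np

def build_hash_table(gallery_codes, n_orient):
    """gallery_codes: (G, P) int array of per-pixel orientation indices.
    Returns a (P, n_orient, G) binary table: table[p, o, g] = 1 iff
    gallery sample g has orientation o at pixel p."""
    G, P = gallery_codes.shape
    table = np.zeros((P, n_orient, G), dtype=np.uint8)
    for g in range(G):
        table[np.arange(P), gallery_codes[g], g] = 1
    return table

def match_probe(table, probe_codes):
    """One-to-many matching using only indexing and summation:
    select each pixel's table row by the probe's code, then sum over pixels."""
    P = probe_codes.shape[0]
    hits = table[np.arange(P), probe_codes]   # (P, G) lookups
    return hits.sum(axis=0) / P               # similarity per gallery sample

rng = np.random.default_rng(0)
gallery = rng.integers(0, 6, size=(5, 100))   # 5 samples, 100 pixels, 6 orientations
table = build_hash_table(gallery, n_orient=6)
scores = match_probe(table, gallery[2])       # probe = gallery sample 2
print(int(np.argmax(scores)), float(scores[2]))  # exact match scores 1.0
```

A score-level fusion of bands would simply add another summation over per-band tables, which is why the structure supports an embedded multi-band fusion.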
{ "cite_N": [ "@cite_9", "@cite_26", "@cite_31", "@cite_7" ], "mid": [ "2160776381", "2013491563", "2140902337", "2154628171" ], "abstract": [ "Palmprint-based personal identification, as a new member in the biometrics family, has become an active research topic in recent years. Although great progress has been made, how to represent palmprint for effective classification is still an open problem. In this paper, we present a novel palmprint representation - ordinal measure, which unifies several major existing palmprint algorithms into a general framework. In this framework, a novel palmprint representation method, namely orthogonal line ordinal features, is proposed. The basic idea of this method is to qualitatively compare two elongated, line-like image regions, which are orthogonal in orientation and generate one bit feature code. A palmprint pattern is represented by thousands of ordinal feature codes. In contrast to the state-of-the-art algorithm reported in the literature, our method achieves higher accuracy, with the equal error rate reduced by 42 for a difficult set, while the complexity of feature extraction is halved.", "There is increasing interest in the development of reliable, rapid and non-intrusive security control systems. Among the many approaches, biometrics such as palmprints provide highly effective automatic mechanisms for use in personal identification. This paper presents a new method for extracting features from palmprints using the competitive coding scheme and angular matching. The competitive coding scheme uses multiple 2-D Gabor filters to extract orientation information from palm lines. This information is then stored in a feature vector called the competitive code. The angular matching with an effective implementation is then defined for comparing the proposed codes, which can make over 9,000 comparisons within 1s. In our testing database of 7,752 palmprint samples from 386 palms, we can achieve a high genuine acceptance rate of 98.4 and a low false acceptance rate of 3 x 10^-6. The execution time for the whole process of verification, including preprocessing, feature extraction and final matching, is 1s.", "This paper presents a novel approach of palmprint texture analysis based on the derivative of gaussian filter. In this approach, the palmprint image is respectively preprocessed along horizontal and vertical direction using derivative of gaussian (DoG) Filters. And then the palmprint is encoded according to the sign of the value of each pixel of the filtered images. This code is called DoGCode of the palmprint. The size of DoGCode is 256 bytes. The similarity of two DoGCode is measured using their Hamming distance. This approach is tested on the PolyU Palmprint Database, which containing 7605 samples from 392 palms, and the EER is 0.19 , which is comparable with the existing palmprint recognition methods.", "We propose ‘Contour Code’, a novel representation and binary hash table encoding for multispectral palmprint recognition. We first present a reliable technique for the extraction of a region of interest (ROI) from palm images acquired with non-contact sensors. The Contour Code representation is then derived from the Nonsubsampled Contourlet Transform. A uniscale pyramidal filter is convolved with the ROI followed by the application of a directional filter bank. The dominant directional subband establishes the orientation at each pixel and the index corresponding to this subband is encoded in the Contour Code representation. Unlike existing representations which extract orientation features directly from the palm images, the Contour Code uses a two stage filtering to extract robust orientation features. The Contour Code is binarized into an efficient hash table structure that only requires indexing and summation operations for simultaneous one-to-many matching with an embedded score level fusion of multiple bands. We quantitatively evaluate the accuracy of the ROI extraction by comparison with a manually produced ground truth. Multispectral palmprint verification results on the PolyU and CASIA databases show that the Contour Code achieves an EER reduction upto 50 , compared to state-of-the-art methods." ] }
1402.2871
2951128282
We describe a probabilistic framework for synthesizing control policies for general multi-robot systems, given environment and sensor models and a cost function. Decentralized, partially observable Markov decision processes (Dec-POMDPs) are a general model of decision processes where a team of agents must cooperate to optimize some objective (specified by a shared reward or cost function) in the presence of uncertainty, but where communication limitations mean that the agents cannot share their state, so execution must proceed in a decentralized fashion. While Dec-POMDPs are typically intractable to solve for real-world problems, recent research on the use of macro-actions in Dec-POMDPs has significantly increased the size of problem that can be practically solved as a Dec-POMDP. We describe this general model, and show how, in contrast to most existing methods that are specialized to a particular problem class, it can synthesize control policies that use whatever opportunities for coordination are present in the problem, while balancing off uncertainty in outcomes, sensor information, and information about other agents. We use three variations on a warehouse task to show that a single planner of this type can generate cooperative behavior using task allocation, direct communication, and signaling, as appropriate.
There are several frameworks that have been developed for multi-robot decision making in complex domains. For instance, behavioral methods have been studied for performing task allocation over time in loosely-coupled @cite_26 or tightly-coupled @cite_4 tasks. These methods are heuristic in nature and make strong assumptions about the types of tasks that will be completed.
{ "cite_N": [ "@cite_26", "@cite_4" ], "mid": [ "2137714578", "2163518623" ], "abstract": [ "ALLIANCE is a software architecture that facilitates the fault tolerant cooperative control of teams of heterogeneous mobile robots performing missions composed of loosely coupled subtasks that may have ordering dependencies. ALLIANCE allows teams of robots, each of which possesses a variety of high-level functions that it can perform during a mission, to individually select appropriate actions throughout the mission based on the requirements of the mission, the activities of other robots, the current environmental conditions, and the robot's own internal states. ALLIANCE is a fully distributed, behaviour-based architecture that incorporates the use of mathematically-modeled motivations (such as impatience and acquiescence) within each robot to achieve adaptive action selection. Since cooperative robotic teams usually work in dynamic and unpredictable environments, this software architecture allows the robot team members to respond robustly, reliably, flexibly, and coherently to unexpected environmental changes and modifications in the robot team that may occur due to mechanical failure, the learning of new skills, or the addition or removal of robots from the team by human intervention. The feasibility of this architecture is demonstrated in an implementation on a team of mobile robots performing a laboratory version of hazardous waste cleanup.", "Move Value Estimation for Robot Teams (MVERT) is a robot action selection algorithm for teams performing multiple competing tasks. The goal of MVERT is to select actions for robot team members to maximize the team's joint utility toward overall mission progress in a computationally efficient manner. MVERT is fully distributed, with each robot using information about other teammates to select its action with the greatest value. MVERT selects actions for a robot team to perform multi-task exploration and dynamic target observation. Successful action selection is demonstrated in simulation for exploration and in simulation and on robots for dynamic target observation." ] }
1402.2871
2951128282
We describe a probabilistic framework for synthesizing control policies for general multi-robot systems, given environment and sensor models and a cost function. Decentralized, partially observable Markov decision processes (Dec-POMDPs) are a general model of decision processes where a team of agents must cooperate to optimize some objective (specified by a shared reward or cost function) in the presence of uncertainty, but where communication limitations mean that the agents cannot share their state, so execution must proceed in a decentralized fashion. While Dec-POMDPs are typically intractable to solve for real-world problems, recent research on the use of macro-actions in Dec-POMDPs has significantly increased the size of problem that can be practically solved as a Dec-POMDP. We describe this general model, and show how, in contrast to most existing methods that are specialized to a particular problem class, it can synthesize control policies that use whatever opportunities for coordination are present in the problem, while balancing off uncertainty in outcomes, sensor information, and information about other agents. We use three variations on a warehouse task to show that a single planner of this type can generate cooperative behavior using task allocation, direct communication, and signaling, as appropriate.
Market-based approaches use traded value to establish an optimization framework for task allocation @cite_37 @cite_19 . These approaches have been used to solve real multi-robot problems @cite_15 , but are largely aimed at tightly-coupled tasks, where the robots can communicate through a bidding mechanism.
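A market-based allocation can be sketched as a sealed-bid auction in which robots bid their costs and tasks are awarded greedily. This is a generic toy, not the mechanism of any cited system; the robot and task names are hypothetical.

```python
def auction_allocate(costs):
    """costs[robot][task] = that robot's bid (cost) for the task.
    Greedy sequential single-item auction: repeatedly award the
    (robot, task) pair with the lowest bid, one task per robot."""
    assignment = {}
    free_robots = set(costs)
    free_tasks = {t for bids in costs.values() for t in bids}
    while free_robots and free_tasks:
        robot, task = min(
            ((r, t) for r in free_robots for t in free_tasks),
            key=lambda rt: costs[rt[0]][rt[1]],
        )
        assignment[robot] = task
        free_robots.remove(robot)
        free_tasks.remove(task)
    return assignment

bids = {
    "r1": {"deliver": 3, "scout": 9},
    "r2": {"deliver": 4, "scout": 2},
}
print(auction_allocate(bids))  # -> {'r2': 'scout', 'r1': 'deliver'}
```

The greedy award rule stands in for the negotiation protocols of real market frameworks; it communicates only bids, which is why such methods suit settings where robots can exchange messages but not full state.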
{ "cite_N": [ "@cite_19", "@cite_37", "@cite_15" ], "mid": [ "2157488300", "2138104929", "2143704222" ], "abstract": [ "Despite more than a decade of experimental work in multi-robot systems, important theoretical aspects of multi-robot coordination mechanisms have, to date, been largely untreated. To address this issue, we focus on the problem of multi-robot task allocation (MRTA). Most work on MRTA has been ad hoc and empirical, with many coordination architectures having been proposed and validated in a proof-of-concept fashion, but infrequently analyzed. With the goal of bringing objective grounding to this important area of research, we present a formal study of MRTA problems. A domain-independent taxonomy of MRTA problems is given, and it is shown how many such problems can be viewed as instances of other, well-studied, optimization problems. We demonstrate how relevant theory from operations research and combinatorial optimization can be used for analysis and greater understanding of existing approaches to task allocation, and show how the same theory can be used in the synthesis of new approaches.", "This paper presents a comparative study between three multirobot coordination schemes that span the spectrum of coordination approaches; a fully centralized approach that can produce optimal solutions, a fully distributed behavioral approach with minimal planned interaction between robots, and a market approach which sits in the middle of the spectrum. Several dimensions for comparison are proposed based on characteristics identified as important to multirobot application domains. Furthermore, simulation results are presented for comparisons along two of the suggested dimension: Number of robots in the team and Heterogeneity of the team. Results spanning different team sizes indicate that the market method compares favorably to the optimal solutions generated by the centralized approach in terms of cost, and compares favorably to the behavioral method in terms of computation time. All three methods are able to improve global cost by accounting for the heterogeneity of the robot team.", "In this paper we address tasks for multirobot teams that require solving a distributed multi-agent planning problem in which the actions of robots are tightly coupled. The uncertainty inherent in these tasks also necessitates persistent tight coordination between teammates throughout execution. Existing approaches to coordination cannot adequately meet the technical demands of such tasks. In response, we have developed a market-based framework, Hoplites, that consists of two novel coordination mechanisms. Passive coordination quickly produces locally-developed solutions while active coordination produces complex team solutions via negotiation between teammates. Robots use the market to efficiently vet candidate solutions and to choose the coordination mechanism that best matches the current demands of the task. In experiments, Hoplites significantly outperforms even its nearest competitors, particularly in the most complex instances of a domain. We also present implementation results on a team of mobile robots." ] }
1402.2871
2951128282
We describe a probabilistic framework for synthesizing control policies for general multi-robot systems, given environment and sensor models and a cost function. Decentralized, partially observable Markov decision processes (Dec-POMDPs) are a general model of decision processes where a team of agents must cooperate to optimize some objective (specified by a shared reward or cost function) in the presence of uncertainty, but where communication limitations mean that the agents cannot share their state, so execution must proceed in a decentralized fashion. While Dec-POMDPs are typically intractable to solve for real-world problems, recent research on the use of macro-actions in Dec-POMDPs has significantly increased the size of problem that can be practically solved as a Dec-POMDP. We describe this general model, and show how, in contrast to most existing methods that are specialized to a particular problem class, it can synthesize control policies that use whatever opportunities for coordination are present in the problem, while balancing off uncertainty in outcomes, sensor information, and information about other agents. We use three variations on a warehouse task to show that a single planner of this type can generate cooperative behavior using task allocation, direct communication, and signaling, as appropriate.
Emery-Montemerlo @cite_33 introduced a (cooperative) game-theoretic formalization of multi-robot systems which resulted in solving a Dec-POMDP. An approximate forward search algorithm was used to generate solutions, but because a (relatively) low-level Dec-POMDP was used, scalability was limited. Also, @cite_24 introduced an MDP-based model where a set of robots with controllers that can execute for varying amounts of time must cooperate to solve a problem. However, decision-making in their system is centralized.
{ "cite_N": [ "@cite_24", "@cite_33" ], "mid": [ "2295044527", "2141157184" ], "abstract": [ "Markov Decision Processes (MDPs) provide an extensive theoretical background for problems of decision-making under uncertainty. In order to maintain computational tractability, however, real-world problems are typically discretized in states and actions as well as in time. Assuming synchronous state transitions and actions at fixed rates may result in models which are not strictly Markovian, or where agents are forced to idle between actions, losing their ability to react to sudden changes in the environment. In this work, we explore the application of Generalized Semi-Markov Decision Processes (GSMDPs) to a realistic multi-robot scenario. A case study will be presented in the domain of cooperative robotics, where real-time reactivity must be preserved, and synchronous discrete-time approaches are therefore sub-optimal. This case study is tested on a team of real robots, and also in realistic simulation. By allowing asynchronous events to be modeled over continuous time, the GSMDP approach is shown to provide greater solution quality than its discrete-time counterparts, while still being approximately solvable by existing methods.", "In the real world, noisy sensors and limited communication make it difficult for robot teams to coordinate in tightly coupled tasks. Team members cannot simply apply single-robot solution techniques for partially observable problems in parallel because they do not take into account the recursive effect that reasoning about the beliefs of others has on policy generation. Instead, we must turn to a game theoretic approach to model the problem correctly. Partially observable stochastic games (POSGs) provide a solution model for decentralized robot teams, however, this model quickly becomes intractable. In previous work we presented an algorithm for lookahead search in POSGs. Here we present an extension which reduces computation during lookahead by clustering similar observation histories together. We show that by clustering histories which have similar profiles of predicted reward, we can greatly reduce the computation time required to solve a POSG while maintaining a good approximation to the optimal policy. We demonstrate the power of the clustering algorithm in a real-time robot controller as well as for a simple benchmark problem." ] }
1402.2773
2073930901
A modified Gradient Descent Bit Flipping (GDBF) algorithm is proposed for decoding Low Density Parity Check (LDPC) codes on the binary-input additive white Gaussian noise channel. The new algorithm, called Noisy GDBF (NGDBF), introduces a random perturbation into each symbol metric at each iteration. The noise perturbation allows the algorithm to escape from undesirable local maxima, resulting in improved performance. A combination of heuristic improvements to the algorithm are proposed and evaluated. When the proposed heuristics are applied, NGDBF performs better than any previously reported GDBF variant, and comes within 0.5 dB of the belief propagation algorithm for several tested codes. Unlike other previous GDBF algorithms that provide an escape from local maxima, the proposed algorithm uses only local, fully parallelizable operations and does not require computing a global objective function or a sort over symbol metrics, making it highly efficient in comparison. The proposed NGDBF algorithm requires channel state information which must be obtained from a signal to noise ratio (SNR) estimator. Architectural details are presented for implementing the NGDBF algorithm. Complexity analysis and optimizations are also discussed.
The original bit-flipping algorithm (BFA) was introduced by Gallager in his seminal work on LDPC codes @cite_8. Gallager's BFA is a hard-decision algorithm for decoding on the binary symmetric channel (BSC), in which only hard channel bits are available to the decoder. To correct errors, the BFA computes a sum over the adjacent parity-check equations for each bit in the code. If, for any bit, the number of adjacent parity violations exceeds a specified threshold, then the bit is flipped. This process is repeated until all parity checks are satisfied, or until a maximum iteration limit is reached. The BFA has very low complexity since it only requires, in each iteration, a summation over binary parity-check values for each symbol; however, the BFA provides weak decoding performance. The authors of @cite_9 considered a probabilistic BFA (PBFA), which adds randomness to the bit-flip decision, resulting in improved performance. In PBFA, when a bit's parity-check sum crosses the flip threshold, it is flipped with probability @math. The parameter @math is optimized empirically and is adapted toward 1 during successive iterations.
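The flipping rule just described can be sketched in a few lines. This is an illustrative reconstruction, not code from @cite_8 or @cite_9; `H`, `threshold`, and `flip_prob` are assumed parameters, with `flip_prob < 1` mimicking the PBFA variant.

```python
import random

def bit_flip_decode(H, y, threshold, max_iters=100, flip_prob=1.0):
    """Hard-decision bit flipping on the BSC (sketch of Gallager's BFA).

    H: parity-check matrix as a list of rows of 0/1.
    y: received hard-decision bits (0/1).
    threshold: flip a bit when its count of unsatisfied checks exceeds this.
    flip_prob < 1 gives the probabilistic variant (PBFA): a bit over the
    threshold is flipped only with probability flip_prob.
    """
    x = list(y)
    n, m = len(x), len(H)
    for _ in range(max_iters):
        # binary syndrome of each parity check
        syndrome = [sum(H[i][j] & x[j] for j in range(n)) % 2 for i in range(m)]
        if not any(syndrome):
            return x  # all parity checks satisfied
        # number of unsatisfied checks adjacent to each bit
        unsat = [sum(syndrome[i] for i in range(m) if H[i][j]) for j in range(n)]
        for j in range(n):
            if unsat[j] > threshold and random.random() < flip_prob:
                x[j] ^= 1
    return x
```

With the (7,4) Hamming code and a single error on the bit checked by all three rows, a threshold of 2 isolates and corrects exactly that bit.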
{ "cite_N": [ "@cite_9", "@cite_8" ], "mid": [ "2106664865", "2128765501" ], "abstract": [ "In this correspondence, a new method for improving hard-decision bit-flipping decoding of low-density parity-check (LDPC) codes is presented. Bits with a number of unsatisfied check sums larger than a predetermined threshold are flipped with a probability p ≤ 1 which is independent of the code considered. The probability p is incremented during decoding according to some rule. With a proper choice of the initial p, the proposed improved bit-flipping (BF) algorithm achieves gain not only in performance, but also in average decoding time for signal-to-noise ratio (SNR) values of interest with respect to p = 1.", "A low-density parity-check code is a code specified by a parity-check matrix with the following properties: each column contains a small fixed number j ≥ 3 of 1's and each row contains a small fixed number k > j of 1's. The typical minimum distance of these codes increases linearly with block length for a fixed rate and fixed j. When used with maximum likelihood decoding on a sufficiently quiet binary-input symmetric channel, the typical probability of decoding error decreases exponentially with block length for a fixed rate and fixed j. A simple but nonoptimum decoding scheme operating directly from the channel a posteriori probabilities is described. Both the equipment complexity and the data-handling capacity in bits per second of this decoder increase approximately linearly with block length. For j > 3 and a sufficiently low rate, the probability of error using this decoder on a binary symmetric channel is shown to decrease at least exponentially with a root of the block length. Some experimental results show that the actual probability of decoding error is much smaller than this theoretical bound." ] }
1402.2773
2073930901
A modified Gradient Descent Bit Flipping (GDBF) algorithm is proposed for decoding Low Density Parity Check (LDPC) codes on the binary-input additive white Gaussian noise channel. The new algorithm, called Noisy GDBF (NGDBF), introduces a random perturbation into each symbol metric at each iteration. The noise perturbation allows the algorithm to escape from undesirable local maxima, resulting in improved performance. A combination of heuristic improvements to the algorithm are proposed and evaluated. When the proposed heuristics are applied, NGDBF performs better than any previously reported GDBF variant, and comes within 0.5 dB of the belief propagation algorithm for several tested codes. Unlike other previous GDBF algorithms that provide an escape from local maxima, the proposed algorithm uses only local, fully parallelizable operations and does not require computing a global objective function or a sort over symbol metrics, making it highly efficient in comparison. The proposed NGDBF algorithm requires channel state information which must be obtained from a signal to noise ratio (SNR) estimator. Architectural details are presented for implementing the NGDBF algorithm. Complexity analysis and optimizations are also discussed.
Recently, the authors of @cite_10 introduced a Parallel WBF (PWBF) algorithm, which reduces the drawbacks associated with single-bit flipping in the other WBF varieties. In the PWBF algorithm, the maximum (or minimum) @math metric is found within the subset of symbols associated with each parity-check. They also developed a theory relating PWBF to the BP and MS algorithms, and showed that PWBF has performance comparable to IMWBF @cite_6. In the PWBF algorithm, it is still necessary to find the maximum @math from a set of values, which costs delay, but the set size is significantly reduced compared to the other WBF methods, and it is independent of codeword length. In spite of these improvements, PWBF retains the complex arithmetic associated with IMWBF.
{ "cite_N": [ "@cite_10", "@cite_6" ], "mid": [ "2165439096", "2123025473" ], "abstract": [ "A parallel weighted bit-flipping (PWBF) decoding algorithm for low-density parity-check (LDPC) codes is proposed. Compared to the best known serial weighted bit-flipping decoding, the PWBF decoding converges significantly faster but with little performance penalty. For decoding of finite-geometry LDPC codes, we demonstrate through examples that the proposed PWBF decoding converges in about 5 iterations with performance very close to that of the standard belief-propagation decoding.", "A natural relationship between weighted bit-flipping (WBF) decoding and belief-propagation-like (BP-like) decoding is explored. This understanding can help us develop WBF algorithms from BP-like algorithms. For min-sum decoding, one can find that its WBF algorithm is the algorithm proposed by For BP decoding, we propose a new WBF algorithm and show its performance advantage. The proposed WBF algorithms are parallelized to achieve rapid convergence. Two efficient simulation-based procedures are proposed for the optimization of the associated thresholds." ] }
1402.2773
2073930901
A modified Gradient Descent Bit Flipping (GDBF) algorithm is proposed for decoding Low Density Parity Check (LDPC) codes on the binary-input additive white Gaussian noise channel. The new algorithm, called Noisy GDBF (NGDBF), introduces a random perturbation into each symbol metric at each iteration. The noise perturbation allows the algorithm to escape from undesirable local maxima, resulting in improved performance. A combination of heuristic improvements to the algorithm are proposed and evaluated. When the proposed heuristics are applied, NGDBF performs better than any previously reported GDBF variant, and comes within 0.5 dB of the belief propagation algorithm for several tested codes. Unlike other previous GDBF algorithms that provide an escape from local maxima, the proposed algorithm uses only local, fully parallelizable operations and does not require computing a global objective function or a sort over symbol metrics, making it highly efficient in comparison. The proposed NGDBF algorithm requires channel state information which must be obtained from a signal to noise ratio (SNR) estimator. Architectural details are presented for implementing the NGDBF algorithm. Complexity analysis and optimizations are also discussed.
To reduce the arithmetic complexity of bit-flipping algorithms, the authors of @cite_5 devised the GDBF algorithm as a gradient-descent optimization model for the ML decoding problem. Based on this model, they obtained single-bit and multi-bit flipping algorithms that require mainly binary operations, similar to the original BFA. The GDBF methods require summation of binary parity-check values, which is less complex than the WBF algorithms that require summation over independently weighted syndrome values. The single-bit version of the GDBF algorithm (S-GDBF) requires a global search to discover the least reliable bit at each iteration. The multi-bit GDBF algorithm (M-GDBF) uses local threshold operations instead of a global search, hence achieving a faster initial convergence. In practice, the M-GDBF algorithm did not always provide stable convergence to the final solution. To improve convergence, the authors of @cite_5 adopted a mode-switching strategy in which M-GDBF decoding is always followed by a phase of S-GDBF decoding, leveraging high speed in the first phase and accurate convergence in the second.
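One GDBF iteration can be sketched as follows, assuming the usual bipolar formulation (decisions x_k ∈ {−1, +1}, soft channel values y_k) and the standard inversion metric, i.e. the channel term x_k·y_k plus the bipolar syndromes of the adjacent checks. The function name and signature are illustrative, not taken from @cite_5.

```python
def gdbf_step(H, x, y, theta=None):
    """One GDBF iteration over bipolar values (sketch).

    x: current bipolar decisions (+1/-1); y: soft channel values.
    theta=None: flip only the single bit with the smallest inversion
    metric (S-GDBF); otherwise flip every bit whose metric falls below
    theta (M-GDBF).
    """
    n, m = len(x), len(H)
    # bipolar syndrome of each check: product of its participating bits
    s = []
    for i in range(m):
        p = 1
        for j in range(n):
            if H[i][j]:
                p *= x[j]
        s.append(p)
    # inversion metric per bit: channel term plus adjacent check syndromes
    delta = [x[k] * y[k] + sum(s[i] for i in range(m) if H[i][k])
             for k in range(n)]
    if theta is None:
        k = min(range(n), key=lambda k: delta[k])  # global search (S-GDBF)
        x[k] = -x[k]
    else:
        for k in range(n):
            if delta[k] < theta:  # local threshold test (M-GDBF)
                x[k] = -x[k]
    return x, s
```

With a single unreliable bit, the S-GDBF step identifies it as the unique minimizer of the inversion metric and flips it.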
{ "cite_N": [ "@cite_5" ], "mid": [ "2150266863" ], "abstract": [ "A novel class of bit-flipping (BF) algorithm for decoding low-density parity-check (LDPC) codes is presented. The proposed algorithms, which are referred to as gradient descent bit flipping (GDBF) algorithms, can be regarded as simplified gradient descent algorithms. The proposed algorithms exhibit better decoding performance than known BF algorithms, such as the weighted BF algorithm or the modified weighted BF algorithm for several LDPC codes." ] }
1402.2773
2073930901
A modified Gradient Descent Bit Flipping (GDBF) algorithm is proposed for decoding Low Density Parity Check (LDPC) codes on the binary-input additive white Gaussian noise channel. The new algorithm, called Noisy GDBF (NGDBF), introduces a random perturbation into each symbol metric at each iteration. The noise perturbation allows the algorithm to escape from undesirable local maxima, resulting in improved performance. A combination of heuristic improvements to the algorithm are proposed and evaluated. When the proposed heuristics are applied, NGDBF performs better than any previously reported GDBF variant, and comes within 0.5 dB of the belief propagation algorithm for several tested codes. Unlike other previous GDBF algorithms that provide an escape from local maxima, the proposed algorithm uses only local, fully parallelizable operations and does not require computing a global objective function or a sort over symbol metrics, making it highly efficient in comparison. The proposed NGDBF algorithm requires channel state information which must be obtained from a signal to noise ratio (SNR) estimator. Architectural details are presented for implementing the NGDBF algorithm. Complexity analysis and optimizations are also discussed.
Several researchers proposed alternative GDBF algorithms in order to obtain fully parallel bit-flipping and improved performance. The authors of @cite_23 proposed an Adaptive Threshold GDBF (AT-GDBF) algorithm that achieves good performance without the use of mode-switching, allowing for fully-parallel operation. The same authors also introduced an early-stopping condition (ES-AT-GDBF) that significantly reduces the average decoding iterations at lower Signal to Noise Ratio (SNR). The authors of @cite_25 proposed a more complex Reliability-Ratio Weighted GDBF algorithm (RRWGDBF) that uses a weighted summation over syndrome components with an adaptive threshold to obtain reduced latency. The RRWGDBF method has the drawback of increased arithmetic complexity because it performs a summation of weighted syndrome components, similar to previous WBF algorithms. The authors of @cite_11 proposed an improved multi-bit GDBF algorithm (IGDBF) that performs very close to the H-GDBF algorithm, but requires a global sort operation to determine which bits to flip.
{ "cite_N": [ "@cite_25", "@cite_23", "@cite_11" ], "mid": [ "2061867767", "2082688301", "1493432783" ], "abstract": [ "For LDPC decoding, a class of weighted bit-flipping algorithms is much simpler than a belief propagation algorithm. This work proposes a modified Gradient Descent Bit-Flipping algorithm based on Reliability Ratio with an adaptive threshold to address trade-off between performance and latency. From numerical results, the proposed algorithm achieves lower latency without an expense of performance. It yields average iteration reduction of 15–27 over SNR range from 2.5 dB to 4.5 dB. In addition, it provides better decoding performance gains, i.e. 0.05–0.25 dB over low-to-medium SNR range between 1.5 dB and 4 dB comparing to previous schemes.", "Wireless sensor networks are used in many diverse application scenarios that require the network designer to trade off different factors. Two such factors of importance in many wireless sensor networks are communication reliability and battery life. This paper describes an efficient, low complexity, high throughput channel decoder suited to decoding low-density parity-check (LDPC) codes. LDPC codes have demonstrated excellent error-correcting ability such that a number of recent wireless standards have opted for their inclusion. Hardware realisation of practical LDPC decoders is a challenging area especially when power efficient solutions are needed. Implementation details are given for an LDPC decoding algorithm, termed adaptive threshold bit flipping (ATBF), designed for low complexity and low power operation. The ATBF decoder was implemented in 90 nm CMOS at 0.9 V using a standard cell design flow and was shown to operate at 250 MHz achieving a throughput of 252 Gb/s/iteration.
The decoder area was 0.72 mm² with a power consumption of 33.14 mW and a very small energy per decoded bit figure of 1.3 pJ.", "Gradient Descent Bit Flipping (GDBF) decoding has one of the best decoding performances of the bit flip type decoding algorithms for low density parity check code. Multi-bit type GDBF decoding, however, achieves good error performance only with suitable thresholds. In this paper, we propose multi-bit type GDBF decoding without using thresholds. The proposed method estimates the number of bits to flip from the received soft sequence using the characteristic of the probability density distribution of the additive white Gaussian noise channel. The error performance of the proposed method is almost equal to that of the conventional method despite not using thresholds." ] }
1402.2773
2073930901
A modified Gradient Descent Bit Flipping (GDBF) algorithm is proposed for decoding Low Density Parity Check (LDPC) codes on the binary-input additive white Gaussian noise channel. The new algorithm, called Noisy GDBF (NGDBF), introduces a random perturbation into each symbol metric at each iteration. The noise perturbation allows the algorithm to escape from undesirable local maxima, resulting in improved performance. A combination of heuristic improvements to the algorithm are proposed and evaluated. When the proposed heuristics are applied, NGDBF performs better than any previously reported GDBF variant, and comes within 0.5 dB of the belief propagation algorithm for several tested codes. Unlike other previous GDBF algorithms that provide an escape from local maxima, the proposed algorithm uses only local, fully parallelizable operations and does not require computing a global objective function or a sort over symbol metrics, making it highly efficient in comparison. The proposed NGDBF algorithm requires channel state information which must be obtained from a signal to noise ratio (SNR) estimator. Architectural details are presented for implementing the NGDBF algorithm. Complexity analysis and optimizations are also discussed.
In this work, we propose a new Noisy GDBF algorithm with single-bit and multi-bit versions (S-NGDBF and M-NGDBF, respectively). The M-NGDBF algorithm proposed in this work employs a single threshold and also provides an escape from the neighborhood of spurious local maxima, but does not require the mode-switching behavior used in the original M-GDBF. The proposed algorithm also avoids using any sort or maximum-value operations. When using the threshold adaptation procedure borrowed from AT-GDBF, as described in Section , the proposed M-NGDBF achieves performance close to the H-GDBF and IGDBF methods at high SNR, with a similar number of iterations. We also introduce a new method called Smoothed M-NGDBF (SM-NGDBF) that contributes an additional @math gain at the cost of additional iterations. It should be noted that the authors of @cite_5 proposed using a small random perturbation in the H-GDBF thresholds; the NGDBF methods use a larger perturbation in combination with other heuristics to obtain good performance with very low complexity.
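The multi-bit NGDBF idea described above can be sketched as follows. This is an illustrative reconstruction under the assumption that a zero-mean Gaussian perturbation is simply added to each bit's GDBF inversion metric before the threshold test; the names and parameters are hypothetical, and in practice sigma would be derived from an SNR estimate.

```python
import random

def m_ngdbf_step(H, x, y, theta, sigma):
    """One multi-bit Noisy GDBF iteration (sketch).

    Each bit's GDBF inversion metric is perturbed by independent Gaussian
    noise q ~ N(0, sigma^2) before the threshold comparison, so the search
    can escape spurious local maxima. With sigma = 0 this reduces to a
    plain M-GDBF threshold step.
    """
    n, m = len(x), len(H)
    # bipolar syndromes computed once from the current decision vector
    s = []
    for i in range(m):
        p = 1
        for j in range(n):
            if H[i][j]:
                p *= x[j]
        s.append(p)
    for k in range(n):
        q = random.gauss(0.0, sigma)  # per-bit, per-iteration perturbation
        delta = x[k] * y[k] + sum(s[i] for i in range(m) if H[i][k]) + q
        if delta < theta:
            x[k] = -x[k]
    return x
```

Only local, per-bit operations are used: no sort and no global maximum, matching the complexity argument made above.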
{ "cite_N": [ "@cite_5" ], "mid": [ "2150266863" ], "abstract": [ "A novel class of bit-flipping (BF) algorithm for decoding low-density parity-check (LDPC) codes is presented. The proposed algorithms, which are referred to as gradient descent bit flipping (GDBF) algorithms, can be regarded as simplified gradient descent algorithms. The proposed algorithms exhibit better decoding performance than known BF algorithms, such as the weighted BF algorithm or the modified weighted BF algorithm for several LDPC codes." ] }
1402.2773
2073930901
A modified Gradient Descent Bit Flipping (GDBF) algorithm is proposed for decoding Low Density Parity Check (LDPC) codes on the binary-input additive white Gaussian noise channel. The new algorithm, called Noisy GDBF (NGDBF), introduces a random perturbation into each symbol metric at each iteration. The noise perturbation allows the algorithm to escape from undesirable local maxima, resulting in improved performance. A combination of heuristic improvements to the algorithm are proposed and evaluated. When the proposed heuristics are applied, NGDBF performs better than any previously reported GDBF variant, and comes within 0.5 dB of the belief propagation algorithm for several tested codes. Unlike other previous GDBF algorithms that provide an escape from local maxima, the proposed algorithm uses only local, fully parallelizable operations and does not require computing a global objective function or a sort over symbol metrics, making it highly efficient in comparison. The proposed NGDBF algorithm requires channel state information which must be obtained from a signal to noise ratio (SNR) estimator. Architectural details are presented for implementing the NGDBF algorithm. Complexity analysis and optimizations are also discussed.
Because of their reliance on pseudo-random noise and single-bit messages, the proposed NGDBF algorithms bear some resemblance to the family of stochastic iterative decoders that were first introduced by Gaudet and Rapley @cite_16. One of the authors (Winstead) introduced stochastic decoding for codes with loopy factor graphs @cite_24, and the authors of @cite_19 @cite_4 later demonstrated stochastic decoding for LDPC codes. High-throughput stochastic decoders have been more recently demonstrated by Sharifi @cite_3 @cite_21 @cite_18 and by the authors of @cite_17. Stochastic decoders are known to have performance very close to BP, allow for fully-parallel implementations, and use very simple arithmetic while exchanging single-bit messages. They may therefore serve as an appropriate benchmark for comparing complexity against the proposed SM-NGDBF algorithm (an analysis of comparative complexity is presented in Section ).
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_21", "@cite_3", "@cite_24", "@cite_19", "@cite_16", "@cite_17" ], "mid": [ "2097324886", "", "2069839933", "2161057417", "2061964274", "2131239166", "2019751845", "1975528831" ], "abstract": [ "Stochastic decoding is a recently proposed approach for graph-based iterative error control decoding. We present and investigate three hysteresis methods for stochastic decoding on graphs with cycles and show their close relationship with the successive relaxation method. Implementation results demonstrate the tradeoff in bit error rate performance with circuit complexity.", "", "This paper proposes Tracking Forecast Memories (TFMs) as a novel method for implementing re-randomization and de-correlation of stochastic bit streams in stochastic channel decoders. We show that TFMs are able to achieve decoding performance similar to that of the previous re-randomization methods in the literature (i.e., edge memories), but they exhibit much lower hardware complexity. We then present circuit topologies for analog implementation of TFMs.", "This paper proposes majority-based tracking forecast memories (MTFMs) for area efficient high throughput ASIC implementation of stochastic Low-Density Parity-Check (LDPC) decoders. The proposed method is applied for ASIC implementation of a fully parallel stochastic decoder that decodes the (2048, 1723) LDPC code from the IEEE 802.3an (10GBASE-T) standard. The decoder occupies a silicon core area of 6.38 mm² in CMOS 90 nm technology, achieves a maximum clock frequency of 500 MHz, and provides a maximum core throughput of 61.3 Gb/s. The decoder also has good decoding performance and error-floor behavior and provides a bit error rate (BER) of about 4 × 10^-13 at Eb/N0 = 5.15 dB. 
To the best of our knowledge, the implemented decoder is the most area efficient fully parallel soft-decision LDPC decoder reported in the literature.", "A device for controlling timing of fuel supply for starting a gas turbine engine which has a glow plug as an ignition source for fuel is herein disclosed. The device comprises a temperature sensor mounted on the engine for detecting the initial temperature of the glow plug upon application of electricity to the glow plug and generating a signal for starting the engine, which corresponds to the initial temperature of the glow plug, a means for determining time for preheating the glow plug from starting of application of electricity to the glow plug till starting of the engine by the signal and a means for determining timing for starting fuel supply to a combustion chamber of the engine after starting thereof by the signal.", "This letter presents the first successful method for iterative stochastic decoding of state-of-the-art low-density parity-check (LDPC) codes. The proposed method shows the viability of the stochastic approach for decoding LDPC codes on factor graphs. In addition, simulation results for a 200 and a 1024 length LDPC code demonstrate the near-optimal performance of this method with respect to sum-product decoding. The proposed method has a significant potential for high-throughput and/or low complexity iterative decoding.", "An iterative decoding architecture based on stochastic computational elements is proposed. Simulation results for a simple low-density parity-check code demonstrate near-optimal performance with respect to a maximum likelihood decoder. The proposed method provides an alternative to analogue decoding for high-speed low-power applications.", "This paper introduces clockless stochastic decoding for high-throughput low-density parity-check (LDPC) decoders. Stochastic computation provides ultra-low-complexity hardware using simple logic gates. 
Clockless decoding eliminates global clocking, which eases the worst-case timing restrictions of synchronous stochastic decoders. The lack of synchronization might use outdated bits to update outputs in computation nodes; however, it does not significantly affect output probabilities. A timing model of clockless-computation behaviours under a 90 nm CMOS technology is used to simulate the BER performance of the proposed decoding scheme. Based on our models, the proposed decoding scheme significantly reduces error floors due to the \"lock-up\" problem and achieves superior BER performance compared with conventional synchronous stochastic decoders. The timing model includes metastability to verify the effect on BER performance." ] }
1402.2773
2073930901
A modified Gradient Descent Bit Flipping (GDBF) algorithm is proposed for decoding Low Density Parity Check (LDPC) codes on the binary-input additive white Gaussian noise channel. The new algorithm, called Noisy GDBF (NGDBF), introduces a random perturbation into each symbol metric at each iteration. The noise perturbation allows the algorithm to escape from undesirable local maxima, resulting in improved performance. A combination of heuristic improvements to the algorithm are proposed and evaluated. When the proposed heuristics are applied, NGDBF performs better than any previously reported GDBF variant, and comes within 0.5 dB of the belief propagation algorithm for several tested codes. Unlike other previous GDBF algorithms that provide an escape from local maxima, the proposed algorithm uses only local, fully parallelizable operations and does not require computing a global objective function or a sort over symbol metrics, making it highly efficient in comparison. The proposed NGDBF algorithm requires channel state information which must be obtained from a signal to noise ratio (SNR) estimator. Architectural details are presented for implementing the NGDBF algorithm. Complexity analysis and optimizations are also discussed.
In addition to recent work on low-complexity decoding, there has also been some exploration of noise-perturbed decoding using traditional MS and BP algorithms. The authors of @cite_1 demonstrated a beneficial effect of noise perturbations for the BP algorithm, using a method called dithered belief propagation. Kameni examined the effect of noise perturbations on MS decoders and found beneficial effects under certain conditions @cite_26. The authors of @cite_14 offered the conjecture that noise perturbations assist the MS algorithm in escaping from spurious fixed-point attractors, similar to the hypothesis offered in this paper to motivate the NGDBF algorithm.
{ "cite_N": [ "@cite_14", "@cite_26", "@cite_1" ], "mid": [ "1991066617", "2045960641", "2110529479" ], "abstract": [ "This paper deals with Low-Density Parity-Check decoders running on noisy hardware. This represents an unconventional paradigm in communication theory, since it is traditionally assumed that the error correction decoder operates on error-free devices and the randomness (in the form of noise and or errors) exists only in the transmission channel. However, with the advent of nanoelectronics, it starts to be widely accepted that the future generations of circuits and systems will need to reliability compute and solve statistical inferences, by making use of unreliable “noisy” components. It is then critical to properly evaluate the robustness of the existing decoders in the presence of an additional source of noise at the circuit level. To this end, we first introduce a new error model approach and carry out the “noisy” density evolution analysis of the fixed-point Min-Sum decoding. Then, for different parameters of the noisy components of the decoder, we determine the range of the signal-to-noise ratio values for which the decoder is able to achieve a target bit error rate performance. Finally, we evaluate the finite-length performance of the Min-Sum and two other Min-Sum-based decoders running on noisy hardware.", "This paper investigates the behavior of the noisy Min-Sum decoder over binary symmetric channels. A noisy decoder is a decoder running on a noisy device, which may introduce errors during the decoding process. We show that in some particular cases, the noise introduce by the device can help the Min-Sum decoder to escape from fixed points attractors, and may actually result in an increased correction capacity with respect to the noiseless decoder. We also reveal the existence of a specific threshold phenomenon, referred to as functional threshold. 
The behavior of the noisy decoder is demonstrated in the asymptotic limit of the code-length, by using “noisy” density evolution equations, and it is also verified in the finite-length case by Monte-Carlo simulation.", "We introduce two dithered belief propagation decoding algorithms to lower the error floor with a minimal hardware overhead. One of the algorithms can additionally improve the decoding performance in the waterfall region using a large iteration limit but with a negligible increase in the average time complexity." ] }
1402.3044
2952061453
We consider the following problem: There is a set of items (e.g., movies) and a group of agents (e.g., passengers on a plane); each agent has some intrinsic utility for each of the items. Our goal is to pick a set of @math items that maximize the total derived utility of all the agents (i.e., in our example we are to pick @math movies that we put on the plane's entertainment system). However, the actual utility that an agent derives from a given item is only a fraction of its intrinsic one, and this fraction depends on how the agent ranks the item among the chosen, available, ones. We provide a formal specification of the model and provide concrete examples and settings where it is applicable. We show that the problem is hard in general, but we show a number of tractability results for its natural special cases.
@cite_34 define rank-dependent scoring rules. Under standard positional scoring rules, the score of a candidate is the sum of the scores it obtains from all the voters, where the score that a candidate obtains from a given voter depends only on the candidate's rank in that voter's preference order. Rank-dependent scoring rules generalize this idea as follows. Instead of simply summing up the scores of a given candidate, they apply an OWA operator to the list of the scores that the candidate got from the voters. Thus a rank-dependent scoring rule is defined by a scoring vector (a function mapping ranks to scores) and an OWA operator. Here, OWAs replace the sum operator for aggregating the scores coming from different agents, while in our setting they aggregate the scores of different objects for a fixed agent.
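The OWA aggregation underlying rank-dependent scoring rules can be illustrated with a short sketch; the function names are ours, not taken from @cite_34.

```python
def owa(values, weights):
    """Ordered weighted average: sort the values in non-increasing order,
    then take the weighted sum, with the i-th weight applied to the i-th
    largest value."""
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

def rank_dependent_score(scores_from_voters, owa_weights):
    """Rank-dependent scoring rule (sketch): instead of summing a
    candidate's per-voter scores, aggregate them with an OWA operator."""
    return owa(scores_from_voters, owa_weights)
```

Weights (1, ..., 1) recover the ordinary sum, (1, 0, ..., 0) the maximum, and (0, ..., 0, 1) the minimum, which shows how this family interpolates between familiar aggregators.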
{ "cite_N": [ "@cite_34" ], "mid": [ "2174198470" ], "abstract": [ "Positional scoring rules in voting compute the score of an alternative by summing the scores for the alternative induced by every vote. This summation principle ensures that all votes contribute equally to the score of an alternative. We relax this assumption and, instead, aggregate scores by taking into account the rank of a score in the ordered list of scores obtained from the votes. This defines a new family of voting rules, rank-dependent scoring rules (RDSRs), based on ordered weighted average (OWA) operators, which include all scoring rules, and many others, most of which are new. We study some properties of these rules, and show, empirically, that certain RDSRs are less manipulable than Borda voting, across a variety of statistical cultures." ] }
1402.3044
2952061453
We consider the following problem: There is a set of items (e.g., movies) and a group of agents (e.g., passengers on a plane); each agent has some intrinsic utility for each of the items. Our goal is to pick a set of @math items that maximize the total derived utility of all the agents (i.e., in our example we are to pick @math movies that we put on the plane's entertainment system). However, the actual utility that an agent derives from a given item is only a fraction of its intrinsic one, and this fraction depends on how the agent ranks the item among the chosen, available, ones. We provide a formal specification of the model and provide concrete examples and settings where it is applicable. We show that the problem is hard in general, but we show a number of tractability results for its natural special cases.
@cite_5 define a family of committee election rules (which can also be used for multiple referenda) based on the following principle. Each voter specifies his or her preferred committee, and each voter's disutility for a committee is given by the Hamming distance between the committee and the voter's preferred one. Then the disutilities of the voters are aggregated using an OWA operator. The committee with the lowest aggregated disutility wins. (In the particular case of the sum operator, the obtained rule is the Bloc committee election rule, while in the case of the maximum, the obtained rule is the Minimax Approval Voting rule; see the work of @cite_43 for the definition and other works for computational discussions @cite_48 @cite_33 @cite_41.) They obtain a number of hardness and approximability results, which cannot be compared to ours because in their work, again, OWAs are used for aggregating scores coming from different agents.
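A minimal sketch of the Hamming-distance OWA rules described above, with illustrative names: weights (1, ..., 1) recover the minisum aggregation and (1, 0, ..., 0) recovers the minimax one.

```python
def hamming(a, b):
    """Hamming distance between two 0/1 vectors of equal length."""
    return sum(x != y for x, y in zip(a, b))

def owa_committee_disutility(committee, ballots, owa_weights):
    """Aggregated disutility of a 0/1 committee vector under an OWA-based
    rule (sketch): take each voter's Hamming distance to the committee,
    sort the distances in non-increasing order, and apply the OWA weights.
    The committee minimizing this value wins."""
    distances = sorted((hamming(committee, b) for b in ballots), reverse=True)
    return sum(w * d for w, d in zip(owa_weights, distances))
```

For example, with ballots [1,0], [1,1], [0,0], the committee [1,0] has distances 0, 1, 1 to the voters, giving a minisum disutility of 2 and a minimax disutility of 1.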
{ "cite_N": [ "@cite_33", "@cite_41", "@cite_48", "@cite_43", "@cite_5" ], "mid": [ "1859600792", "2181533063", "2117194296", "2098136663", "2270513673" ], "abstract": [ "We consider Approval Voting systems where each voter decides on a subset of candidates he she approves. We focus on the optimization problem of finding the committee of fixed size k, minimizing the maximal Hamming distance from a vote. In this paper we give a PTAS for this problem and hence resolve the open question raised by [AAAI’10]. The result is obtained by adapting the techniques developed by [JACM’02] originally used for the less constrained Closest String problem. The technique relies on extracting information and structural properties of constant size subsets of votes.", "In this work, we initiate a detailed study of the parameterized complexity of Minimax Approval Voting. We demonstrate that the problem is W[2]-hard when parameterized by the size of the committee to be chosen, but does admit a FPT algorithm when parameterized by the number of strings that is more efficient than the previous ILP-based approaches for the problem. We also consider several combinations of parameters and provide a detailed landscape of the parameterized and kernelization complexity of the problem. We also study the version of the problem where we permit outliers, that is, where the chosen committee is required to satisfy a large number of voters (instead of all of them). In this context, we strengthen an APX-hardness result in the literature, and also show a simple but strong W-hardness result.", "Voting has been a very popular method for preference aggregation in multiagent environments. It is often the case that a set of agents with different preferences need to make a choice among a set of alternatives, where the alternatives could be various entities such as potential committee members, or joint plans of action. 
A standard methodology for this scenario is to have each agent express his preferences and then select an alternative according to some voting protocol. Several decision making applications in AI have followed this approach including problems in collaborative filtering [10] and planning [3, 4].", "We propose in this chapter a procedure for reaching agreement on multilateral treaties that produces a compromise as close as possible to the preferences of all parties. By “close” we mean that the maximum distance of the compromise from the position of any state is minimal, which we call a minimax outcome. We show that this procedure is relatively invulnerable to strategizing by states, reducing any incentive they might have to misrepresent their preferences to try to induce a better outcome.", "We study multiple referenda and committee elections, when the ballot of each voter is simply a set of approved binary issues (or candidates). Two well-known rules under this model are the commonly used candidate-wise majority, also called the minisum rule, as well as the minimax rule. In the former, the elected committee consists of the candidates approved by a majority of voters, whereas the latter picks a committee minimizing the maximum Hamming distance to all ballots. As these rules are in some ways extreme points in the whole spectrum of solutions, we consider a general family of rules, using the Ordered Weighted Averaging (OWA) operators. Each rule is parameterized by a weight vector, showing the importance of the i-th highest Hamming distance of the outcome to the voters. The objective then is to minimize the weighted sum of the (ordered) distances. We study mostly computational, but also manipulability properties for this family. We first exhibit that for many rules, it is NP-hard to find a winning committee. We then proceed to identify cases where the problem is either efficiently solvable, or approximable with a small approximation factor. 
Finally, we investigate the issue of manipulating such rules and provide conditions that make this possible." ] }
1402.3044
2952061453
We consider the following problem: There is a set of items (e.g., movies) and a group of agents (e.g., passengers on a plane); each agent has some intrinsic utility for each of the items. Our goal is to pick a set of @math items that maximize the total derived utility of all the agents (i.e., in our example we are to pick @math movies that we put on the plane's entertainment system). However, the actual utility that an agent derives from a given item is only a fraction of its intrinsic one, and this fraction depends on how the agent ranks the item among the chosen, available, ones. We provide a formal specification of the model and provide concrete examples and settings where it is applicable. We show that the problem is hard in general, but we show a number of tractability results for its natural special cases.
Finally, the work of Elkind and Ismaïli @cite_30 is probably the closest one to ours. They study multiwinner elections and they use OWAs to define generalizations of the Chamberlin--Courant rule but, once again, they use OWAs to aggregate the utilities for a committee coming from different agents. The standard utilitarian Chamberlin--Courant rule sums up the scores that a committee gets from different voters, whereas the egalitarian variant considers the minimum score a committee receives. They generalize this idea by using an OWA operator, in effect obtaining a spectrum of rules between the utilitarian and the egalitarian variants. They obtain a number of complexity results, both in the general case and in specific cases corresponding to domain restrictions. For the same reason as in the preceding paragraphs, their results are incomparable to ours.
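The interpolation described above can be sketched as follows (a hypothetical mini-implementation with invented preferences and Borda scores, not the authors' code): each voter's Chamberlin--Courant utility is the positional score of his or her favorite committee member, and an OWA vector over the sorted utilities moves between the utilitarian sum and the egalitarian minimum.

```python
from itertools import combinations

def cc_owa_winner(prefs, k, owa, scores):
    # prefs: one ranking per voter (most preferred candidate first);
    # scores: a positional scoring vector, e.g. Borda.
    def utility(ranking, committee):
        # Chamberlin--Courant: the score of the best-ranked committee member.
        return max(scores[ranking.index(c)] for c in committee)
    def value(committee):
        # OWA weights act on the voters' utilities sorted in increasing order:
        # [1]*n gives the utilitarian sum, [1]+[0]*(n-1) the egalitarian min.
        utils = sorted(utility(r, committee) for r in prefs)
        return sum(w * u for w, u in zip(owa, utils))
    return max(combinations(prefs[0], k), key=value)

prefs = [["a", "b", "c"], ["a", "b", "c"], ["c", "b", "a"]]
borda = [2, 1, 0]
n = len(prefs)
print(cc_owa_winner(prefs, 1, [1] * n, borda))              # ('a',) utilitarian
print(cc_owa_winner(prefs, 1, [1] + [0] * (n - 1), borda))  # ('b',) egalitarian
```

On this profile the utilitarian end picks the majority favorite a, while the egalitarian end picks the compromise candidate b, whom no voter ranks last.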
{ "cite_N": [ "@cite_30" ], "mid": [ "2293358884" ], "abstract": [ "Given a set of voters V, a set of candidates C, and voters' preferences over the candidates, multiwinner voting rules output a fixed-size subset of candidates committee. Under the Chamberlin---Courant multiwinner voting rule, one fixes a scoring vector of length |C|, and each voter's 'utility' for a given committee is defined to be the score that she assigns to her most preferred candidate in that committee; the goal is then to find a committee that maximizes the joint utility of all voters. The joint utility is typically identified either with the sum of all voters' utilities or with the utility of the least satisfied voter, resulting in, respectively, the utilitarian and the egalitarian variant of the Chamberlin---Courant's rule. For both of these cases, the problem of computing an optimal committee is NP-hard for general preferences, but becomes polynomial-time solvable if voters' preferences are single-peaked or single-crossing. In this paper, we propose a family of multiwinner voting rules that are based on the concept of ordered weighted average OWA and smoothly interpolate between the egalitarian and the utilitarian variants of the Chamberlin---Courant rule. We show that under moderate constraints on the weight vector we can recover many of the algorithmic results known for the egalitarian and the utilitarian version of Chamberlin---Courant's rule in this more general setting." ] }
1402.3044
2952061453
We consider the following problem: There is a set of items (e.g., movies) and a group of agents (e.g., passengers on a plane); each agent has some intrinsic utility for each of the items. Our goal is to pick a set of @math items that maximize the total derived utility of all the agents (i.e., in our example we are to pick @math movies that we put on the plane's entertainment system). However, the actual utility that an agent derives from a given item is only a fraction of its intrinsic one, and this fraction depends on how the agent ranks the item among the chosen, available, ones. We provide a formal specification of the model and provide concrete examples and settings where it is applicable. We show that the problem is hard in general, but we show a number of tractability results for its natural special cases.
Let us now move on to other related works and other related streams of research. Several known settings are recovered as particular cases of our general model. In particular, this applies to the case of the Chamberlin--Courant proportional representation rule @cite_2 , to the case of Proportional Approval Voting, and to (variants of) the budgeted social choice model @cite_25 @cite_23 @cite_42 . Computational complexity of the Chamberlin--Courant rule was first studied by , its parameterized complexity was analyzed by Betzler et al. , and the complexity under restricted domains was studied by Betzler et al. , @cite_8 , @cite_45 , and @cite_28 . The first approximation algorithm was proposed by Lu and Boutilier. The results on approximability were then extended in several directions by Skowron et al. Proportional Approval Voting was studied computationally and axiomatically by @cite_35 @cite_37 and by Elkind and Lackner @cite_3 .
{ "cite_N": [ "@cite_35", "@cite_37", "@cite_8", "@cite_28", "@cite_42", "@cite_3", "@cite_45", "@cite_23", "@cite_2", "@cite_25" ], "mid": [ "2951884201", "", "2110060186", "2949997781", "2206071134", "2949127983", "6335828", "2187589363", "2322270235", "1238745702" ], "abstract": [ "We study computational aspects of three prominent voting rules that use approval ballots to elect multiple winners. These rules are satisfaction approval voting, proportional approval voting, and reweighted approval voting. We first show that computing the winner for proportional approval voting is NP-hard, closing a long standing open problem. As none of the rules are strategyproof, even for dichotomous preferences, we study various strategic aspects of the rules. In particular, we examine the computational complexity of computing a best response for both a single agent and a group of agents. In many settings, we show that it is NP-hard for an agent or agents to compute how best to vote given a fixed set of approval ballots from the other agents.", "", "We study the complexity of winner determination in single-crossing elections under two classic fully proportional representation rules-Chamberlin-Courant's rule and Monroe's rule. Winner determination for these rules is known to be NP-hard for unrestricted preferences. We show that for single-crossing preferences this problem admits a polynomial-time algorithm for Chamberlin-Courant's rule, but remains NP-hard for Monroe's rule. Our algorithm for Chamberlin-Courant's rule can be modified to work for elections with bounded single-crossing width. We then consider elections that are both single-peaked and single-crossing, and develop an efficient algorithm for the egalitarian variant of Monroe's rule for such elections. 
While [3] have recently presented a polynomial-time algorithm for this rule under single-peaked preferences, our algorithm has considerably better worst-case running time than that of", "Demange (2012) generalized the classical single-crossing property to the intermediate property on median graphs and proved that the representative voter theorem still holds for this more general framework. We complement her result by proving that the linear orders of any profile which is intermediate on a median graph form a Condorcet domain. We prove that for any median graph there exists a profile that is intermediate with respect to that graph and that one may need at least as many alternatives as vertices to construct such a profile. We provide a polynomial-time algorithm to recognize whether or not a given profile is intermediate with respect to some median graph. Finally, we show that finding winners for the Chamberlin-Courant rule is polynomial-time solvable for profiles that are single-crossing on a tree.", "Data-driven analytics—in areas ranging from consumer marketing to public policy—often allow behavior prediction at the level of individuals rather than population segments, offering the opportunity to improve decisions that impact large populations. Modeling such (generalized) assignment problems as linear programs, we propose a general value-directed compression technique for solving such problems at scale. We dynamically segment the population into cells using a form of column generation, constructing groups of individuals who can provably be treated identically in the optimal solution. This compression allows problems, unsolvable using standard LP techniques, to be solved effectively. Indeed, once a compressed LP is constructed, problems can be solved in milliseconds. 
We provide a theoretical analysis of the methods, outline the distributed implementation of the requisite data processing, and show how a single compressed LP can be used to solve multiple variants of the original LP near-optimally in real-time (e.g., to support scenario analysis). We also show how the method can be leveraged in integer programming models. Experimental results on marketing contact optimization and political legislature problems validate the performance of our technique.", "Many hard computational social choice problems are known to become tractable when voters' preferences belong to a restricted domain, such as those of single-peaked or single-crossing preferences. However, to date, all algorithmic results of this type have been obtained for the setting where each voter's preference list is a total order of candidates. The goal of this paper is to extend this line of research to the setting where voters' preferences are dichotomous, i.e., each voter approves a subset of candidates and disapproves the remaining candidates. We propose several analogues of the notions of single-peaked and single-crossing preferences for dichotomous profiles and investigate the relationships among them. We then demonstrate that for some of these notions the respective restricted domains admit efficient algorithms for computationally hard approval-based multi-winner rules.", "We study the complexity of electing a committee under several variants of the Chamberlin-Courant rule when the voters' preferences are single-peaked on a tree. We first show that this problem is easy for the egalitarian, or \"minimax\" version of this problem, for arbitrary trees and misrepresentation functions. For the standard (utilitarian) version of this problem we provide an algorithm for an arbitrary misrepresentation function whose running time is polynomial in the input size as long as the number of leaves of the underlying tree is bounded by a constant. 
On the other hand, we prove that our problem remains computationally hard on trees that have bounded degree, diameter, or pathwidth. Finally, we show how to modify Trick's [1989] algorithm to check whether an election is single-peaked on a tree whose number of leaves does not exceed a given parameter λ.", "We consider a classic social choice problem in an online setting. In each round, a decision maker observes a single agent's preferences over a set of m candidates, and must choose whether to irrevocably add a candidate to a selection set of limited cardinality k. Each agent's (positional) score depends on the candidates in the set when he arrives, and the decisionmaker's goal is to maximize average (over all agents) score. We prove that no algorithm (even randomized) can achieve an approximation factor better than O(log logm logm). In contrast, if the agents arrive in random order, we present a (1-1 e-o(1))- approximate algorithm, matching a lower bound for the offline problem. We show that improved performance is possible for natural input distributions or scoring rules. Finally, if the algorithm is permitted to revoke decisions at a fixed cost, we apply regret-minimization techniques to achieve approximation 1- 1 e-o(1) even for arbitrary inputs.", "The development of social choice theory over the past three decades has brought many new insights into democratic theory. Surprisingly, the theory of representation has gone almost untouched by social choice theorists. This article redresses this neglect and provides an axiomatic study of one means of implementing proportional representation. The distinguishing feature of proportional representation is its concern for the representativeness of deliberations as well as decisions. We define a representative in a way that is particularly attentive to this feature and then define a method of selecting representatives (a variant of the Borda rule) which selects a maximally representative body. 
We also prove that this method of selection meets four social choice axioms that are met by a number of other important social choice functions (including pairwise majority decision and the Borda rule). For over two hundred years, methods of selecting representative bodies have been a major topic of debate among democratic theorists. One important view of the goals and functions of repre", "We develop a general framework for social choice problems in which a limited number of alternatives can be recommended to an agent population. In our budgeted social choice model, this limit is determined by a budget, capturing problems that arise naturally in a variety of contexts, and spanning the continuum from pure consensus decision making (i.e., standard social choice) to fully personalized recommendation. Our approach applies a form of segmentation to social choice problems--requiring the selection of diverse options tailored to different agent types--and generalizes certain multiwinner election schemes. We show that standard rank aggregation methods perform poorly, and that optimization in our model is NP-complete; but we develop fast greedy algorithms with some theoretical guarantees. Experiments on real-world datasets demonstrate the effectiveness of our algorithms." ] }
1402.2331
1712752320
Matrix Completion is the problem of recovering an unknown real-valued low-rank matrix from a subsample of its entries. Important recent results show that the problem can be solved efficiently under the assumption that the unknown matrix is incoherent and the subsample is drawn uniformly at random. Are these assumptions necessary? It is well known that Matrix Completion in its full generality is NP-hard. However, little is known if we make additional assumptions such as incoherence and permit the algorithm to output a matrix of slightly higher rank. In this paper we prove that Matrix Completion remains computationally intractable even if the unknown matrix has rank @math but we are allowed to output any constant rank matrix, and even if additionally we assume that the unknown matrix is incoherent and are shown @math of the entries. This result relies on the conjectured hardness of the @math -Coloring problem. We also consider the positive semidefinite Matrix Completion problem. Here we show a similar hardness result under the standard assumption that @math . Our results greatly narrow the gap between existing feasibility results and computational lower bounds. In particular, we believe that our results give the first complexity-theoretic justification for why distributional assumptions are needed beyond the incoherence assumption in order to obtain positive results. On the technical side, we contribute several new ideas on how to encode hard combinatorial problems in low-rank optimization problems. We hope that these techniques will be helpful in further understanding the computational limits of Matrix Completion and related problems.
There have been several hardness results for Matrix Completion over finite fields, drawing on its connection to problems in coding theory. See, for example, the discussion in @cite_0 @cite_14 . The Matrix Completion problem over the reals seems to behave rather differently, and techniques do not seem to transfer from the finite field case.
{ "cite_N": [ "@cite_0", "@cite_14" ], "mid": [ "2034815954", "1657130172" ], "abstract": [ "Given a matrix whose entries are a mixture of numeric values and symbolic variables, the matrix completion problem is to assign values to the variables so as to maximize the resulting matrix rank. This problem has deep connections to computational complexity and numerous important algorithmic applications. Determining the complexity of this problem is a fundamental open question in computational complexity. Under different settings of parameters, the problem is variously in P, in RP, or NP-hard. We shed new light on this landscape by demonstrating a new region of NP-hard scenarios. As a special case, we obtain the first known hardness result for matrices in which each variable appears only twice.Another particular scenario that we consider is the simultaneous matrix completion problem, where one must simultaneously maximize the rank for several matrices that share variables. This problem has important applications in the field of network coding. Recent work has given a simple, greedy, deterministic algorithm for this problem, assuming that the algorithm works over a sufficiently large field. We show an exact threshold for the field size required to find a simultaneous completion efficiently. This result implies that, surprisingly, the simple greedy algorithm is optimal: finding a simultaneous completion over any smaller field is NP-hard.", "This paper establishes information-theoretic limits for estimating a finite-field low-rank matrix given random linear measurements of it. These linear measurements are obtained by taking inner products of the low-rank matrix with random sensing matrices. Necessary and sufficient conditions on the number of measurements required are provided. It is shown that these conditions are sharp and the minimum-rank decoder is asymptotically optimal. 
The reliability function of this decoder is also derived by appealing to de Caen's lower bound on the probability of a union. The sufficient condition also holds when the sensing matrices are sparse, a scenario that may be amenable to efficient decoding. More precisely, it is shown that if the n × n-sensing matrices contain, on average, Ω(n log n) entries, the number of measurements required is the same as that when the sensing matrices are dense and contain entries drawn uniformly at random from the field. Analogies are drawn between the aforementioned results and rank-metric codes in the coding theory literature. In fact, we are also strongly motivated by understanding when minimum rank distance decoding of random rank-metric codes succeeds. To this end, we derive minimum distance properties of equiprobable and sparse rank-metric codes. These distance properties provide a precise geometric interpretation of the fact that the sparse ensemble requires as few measurements as the dense one." ] }
1402.1958
1873068757
The computational costs of inference and planning have confined Bayesian model-based reinforcement learning to one of two dismal fates: powerful Bayes-adaptive planning but only for simplistic models, or powerful, Bayesian non-parametric models but using simple, myopic planning strategies such as Thompson sampling. We ask whether it is feasible and truly beneficial to combine rich probabilistic models with a closer approximation to fully Bayesian planning. First, we use a collection of counterexamples to show formal problems with the over-optimism inherent in Thompson sampling. Then we leverage state-of-the-art techniques in efficient Bayes-adaptive planning and non-parametric Bayesian methods to perform qualitatively better than both existing conventional algorithms and Thompson sampling on two contextual bandit-like problems.
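For concreteness, Thompson sampling for a Bernoulli bandit, the simplest instance of the myopic strategy discussed in the abstract above, can be written in a few lines (the arm counts below are invented for illustration):

```python
import random

def thompson_arm(successes, failures):
    # Beta-Bernoulli Thompson sampling: draw one posterior sample per arm
    # from Beta(s + 1, f + 1) and pull the arm whose sample is largest.
    samples = [random.betavariate(s + 1, f + 1)
               for s, f in zip(successes, failures)]
    return max(range(len(samples)), key=samples.__getitem__)

# With lopsided counts the posterior for arm 0 dominates almost surely.
print(thompson_arm([50, 1], [1, 50]))  # prints 0 with overwhelming probability
```

Because each decision depends only on a single posterior sample, the strategy never reasons about the value of the information a pull would provide, which is one reason the paper contrasts it with fuller Bayes-adaptive planning.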
Many researchers have considered powerful statistical models in the context of sequential decision-making @cite_26 @cite_8 , including in exploration-exploitation settings @cite_0 @cite_13 . Non-parametric models have been considered in the context of control before @cite_2 @cite_13 , but with an emphasis on modeling the data rather than planning. In @cite_24 , the authors consider factored MDPs whose transitions are modeled using Bayesian networks. They demonstrate the advantages of having an appropriate prior to capture the existing structure in the true dynamics, at least in a case in which the problems of safe exploration do not arise. For planning, they propose an online Monte-Carlo algorithm with an approximate sampling scheme; however, the forward search is conducted with a depth of 2 and a small branching factor, presumably limiting the benefits of Bayes-adaptivity.
{ "cite_N": [ "@cite_26", "@cite_8", "@cite_0", "@cite_24", "@cite_2", "@cite_13" ], "mid": [ "2245825236", "2134197408", "2119678437", "2949600864", "2123372395", "2950270837" ], "abstract": [ "We consider the problem of learning to act in partially observable, continuous-state-and-action worlds where we have abstract prior knowledge about the structure of the optimal policy in the form of a distribution over policies. Using ideas from planning-as-inference reductions and Bayesian unsupervised learning, we cast Markov Chain Monte Carlo as a stochastic, hill-climbing policy search algorithm. Importantly, this algorithm's search bias is directly tied to the prior and its MCMC proposal kernels, which means we can draw on the full Bayesian toolbox to express the search bias, including nonparametric priors and structured, recursive processes like grammars over action sequences. Furthermore, we can reason about uncertainty in the search bias itself by constructing a hierarchical prior and reasoning about latent variables that determine the abstract structure of the policy. This yields an adaptive search algorithm--our algorithm learns to learn a structured policy efficiently. We show how inference over the latent variables in these policy priors enables intra- and intertask transfer of abstract knowledge. We demonstrate the flexibility of this approach by learning meta search biases, by constructing a nonparametric finite state controller to model memory, by discovering motor primitives using a simple grammar over primitive actions, and by combining all three.", "We consider the problem of multi-task reinforcement learning where the learner is provided with a set of tasks, for which only a small number of samples can be generated for any given policy. As the number of samples may not be enough to learn an accurate evaluation of the policy, it would be necessary to identify classes of tasks with similar structure and to learn them jointly. 
We consider the case where the tasks share structure in their value functions, and model this by assuming that the value functions are all sampled from a common prior. We adopt the Gaussian process temporal-difference value function model and use a hierarchical Bayesian approach to model the distribution over the value functions. We study two cases, where all the value functions belong to the same class and where they belong to an undefined number of classes. For each case, we present a hierarchical Bayesian model, and derive inference algorithms for (i) joint learning of the value functions, and (ii) efficient transfer of the information gained in (i) to assist learning the value function of a newly observed task.", "We consider reinforcement learning in partially observable domains where the agent can query an expert for demonstrations. Our nonparametric Bayesian approach combines model knowledge, inferred from expert information and independent exploration, with policy knowledge inferred from expert trajectories. We introduce priors that bias the agent towards models with both simple representations and simple policies, resulting in improved policy and model learning.", "Model-based Bayesian reinforcement learning has generated significant interest in the AI community as it provides an elegant solution to the optimal exploration-exploitation tradeoff in classical reinforcement learning. Unfortunately, the applicability of this type of approach has been limited to small domains due to the high complexity of reasoning about the joint posterior over model parameters. In this paper, we consider the use of factored representations combined with online planning techniques, to improve scalability of these methods. 
The main contribution of this paper is a Bayesian framework for learning the structure and parameters of a dynamical system, while also simultaneously planning a (near-)optimal sequence of actions.", "The Partially Observable Markov Decision Process (POMDP) framework has proven useful in planning domains where agents must balance actions that provide knowledge and actions that provide reward. Unfortunately, most POMDPs are complex structures with a large number of parameters. In many real-world problems, both the structure and the parameters are difficult to specify from domain knowledge alone. Recent work in Bayesian reinforcement learning has made headway in learning POMDP models; however, this work has largely focused on learning the parameters of the POMDP model. We define an infinite POMDP (iPOMDP) model that does not require knowledge of the size of the state space; instead, it assumes that the number of visited states will grow as the agent explores its world and only models visited states explicitly. We demonstrate the iPOMDP on several standard problems.", "We present a modular approach to reinforcement learning that uses a Bayesian representation of the uncertainty over models. The approach, BOSS (Best of Sampled Set), drives exploration by sampling multiple models from the posterior and selecting actions optimistically. It extends previous work by providing a rule for deciding when to resample and how to combine the models. We show that our algorithm achieves nearoptimal reward with high probability with a sample complexity that is low relative to the speed at which the posterior distribution converges during learning. We demonstrate that BOSS performs quite favorably compared to state-of-the-art reinforcement-learning approaches and illustrate its flexibility by pairing it with a non-parametric model that generalizes across states." ] }
1402.1958
1873068757
The computational costs of inference and planning have confined Bayesian model-based reinforcement learning to one of two dismal fates: powerful Bayes-adaptive planning but only for simplistic models, or powerful, Bayesian non-parametric models but using simple, myopic planning strategies such as Thompson sampling. We ask whether it is feasible and truly beneficial to combine rich probabilistic models with a closer approximation to fully Bayesian planning. First, we use a collection of counterexamples to show formal problems with the over-optimism inherent in Thompson sampling. Then we leverage state-of-the-art techniques in efficient Bayes-adaptive planning and non-parametric Bayesian methods to perform qualitatively better than both existing conventional algorithms and Thompson sampling on two contextual bandit-like problems.
@cite_4 consider a particular form of safe exploration to deal with non-ergodic MDPs, but they do not address discounted objectives or structured models.
{ "cite_N": [ "@cite_4" ], "mid": [ "2952720101" ], "abstract": [ "In environments with uncertain dynamics exploration is necessary to learn how to perform well. Existing reinforcement learning algorithms provide strong exploration guarantees, but they tend to rely on an ergodicity assumption. The essence of ergodicity is that any state is eventually reachable from any other state by following a suitable policy. This assumption allows for exploration algorithms that operate by simply favoring states that have rarely been visited before. For most physical systems this assumption is impractical as the systems would break before any reasonable exploration has taken place, i.e., most physical systems don't satisfy the ergodicity assumption. In this paper we address the need for safe exploration methods in Markov decision processes. We first propose a general formulation of safety through ergodicity. We show that imposing safety by restricting attention to the resulting set of guaranteed safe policies is NP-hard. We then present an efficient algorithm for guaranteed safe, but potentially suboptimal, exploration. At the core is an optimization formulation in which the constraints restrict attention to a subset of the guaranteed safe policies and the objective favors exploration policies. Our framework is compatible with the majority of previously proposed exploration methods, which rely on an exploration bonus. Our experiments, which include a Martian terrain exploration problem, show that our method is able to explore better than classical exploration methods." ] }
1402.1958
1873068757
The computational costs of inference and planning have confined Bayesian model-based reinforcement learning to one of two dismal fates: powerful Bayes-adaptive planning but only for simplistic models, or powerful, Bayesian non-parametric models but using simple, myopic planning strategies such as Thompson sampling. We ask whether it is feasible and truly beneficial to combine rich probabilistic models with a closer approximation to fully Bayesian planning. First, we use a collection of counterexamples to show formal problems with the over-optimism inherent in Thompson sampling. Then we leverage state-of-the-art techniques in efficient Bayes-adaptive planning and non-parametric Bayesian methods to perform qualitatively better than both existing conventional algorithms and Thompson sampling on two contextual bandit-like problems.
@cite_29 consider an infinite MDP, combining Bayes-adaptive planning with approximate inference over possible MDPs. However, the class of models is quite specific to the particular domain they consider. In @cite_2 , a hierarchical Dirichlet process is used to allow for an unbounded number of states in a POMDP and to infer the size of the state space from data; this is referred to as the iPOMDP model. This model is used in an online forward-search planning scheme, albeit of rather limited depth, and tested on modestly-sized problems.
{ "cite_N": [ "@cite_29", "@cite_2" ], "mid": [ "2157477959", "2123372395" ], "abstract": [ "Bayesian model-based reinforcement learning is a formally elegant approach to learning optimal behaviour under model uncertainty, trading off exploration and exploitation in an ideal way. Unfortunately, finding the resulting Bayes-optimal policies is notoriously taxing, since the search space becomes enormous. In this paper we introduce a tractable, sample-based method for approximate Bayes-optimal planning which exploits Monte-Carlo tree search. Our approach outperformed prior Bayesian model-based RL algorithms by a significant margin on several well-known benchmark problems - because it avoids expensive applications of Bayes rule within the search tree by lazily sampling models from the current beliefs. We illustrate the advantages of our approach by showing it working in an infinite state space domain which is qualitatively out of reach of almost all previous work in Bayesian exploration.", "The Partially Observable Markov Decision Process (POMDP) framework has proven useful in planning domains where agents must balance actions that provide knowledge and actions that provide reward. Unfortunately, most POMDPs are complex structures with a large number of parameters. In many real-world problems, both the structure and the parameters are difficult to specify from domain knowledge alone. Recent work in Bayesian reinforcement learning has made headway in learning POMDP models; however, this work has largely focused on learning the parameters of the POMDP model. We define an infinite POMDP (iPOMDP) model that does not require knowledge of the size of the state space; instead, it assumes that the number of visited states will grow as the agent explores its world and only models visited states explicitly. We demonstrate the iPOMDP on several standard problems." ] }
1402.1958
1873068757
The computational costs of inference and planning have confined Bayesian model-based reinforcement learning to one of two dismal fates: powerful Bayes-adaptive planning but only for simplistic models, or powerful, Bayesian non-parametric models but using simple, myopic planning strategies such as Thompson sampling. We ask whether it is feasible and truly beneficial to combine rich probabilistic models with a closer approximation to fully Bayesian planning. First, we use a collection of counterexamples to show formal problems with the over-optimism inherent in Thompson sampling. Then we leverage state-of-the-art techniques in efficient Bayes-adaptive planning and non-parametric Bayesian methods to perform qualitatively better than both existing conventional algorithms and Thompson sampling on two contextual bandit-like problems.
In @cite_15 , Gaussian processes (GPs) are employed to infer models of the dynamics from limited data, with excellent empirical performance. However, the uncertainty that the GP captures was not explicitly used for exploration-exploitation-sensitive planning. This is addressed in @cite_21 , but with heuristic planning based on uncertainty reduction.
{ "cite_N": [ "@cite_15", "@cite_21" ], "mid": [ "2140135625", "2134540127" ], "abstract": [ "In this paper, we introduce PILCO, a practical, data-efficient model-based policy search method. PILCO reduces model bias, one of the key problems of model-based reinforcement learning, in a principled way. By learning a probabilistic dynamics model and explicitly incorporating model uncertainty into long-term planning, PILCO can cope with very little data and facilitates learning from scratch in only a few trials. Policy evaluation is performed in closed form using state-of-the-art approximate inference. Furthermore, policy gradients are computed analytically for policy improvement. We report unprecedented learning efficiency on challenging and high-dimensional control tasks.", "We present an implementation of model-based online reinforcement learning (RL) for continuous domains with deterministic transitions that is specifically designed to achieve low sample complexity. To achieve low sample complexity, since the environment is unknown, an agent must intelligently balance exploration and exploitation, and must be able to rapidly generalize from observations. While in the past a number of related sample efficient RL algorithms have been proposed, to allow theoretical analysis, mainly model-learners with weak generalization capabilities were considered. Here, we separate function approximation in the model learner (which does require samples) from the interpolation in the planner (which does not require samples). For model-learning we apply Gaussian processes regression (GP) which is able to automatically adjust itself to the complexity of the problem (via Bayesian hyperparameter selection) and, in practice, often able to learn a highly accurate model from very little data. In addition, a GP provides a natural way to determine the uncertainty of its predictions, which allows us to implement the \"optimism in the face of uncertainty\" principle used to efficiently control exploration. Our method is evaluated on four common benchmark domains." ] }
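The "optimism in the face of uncertainty" principle mentioned in the @cite_21 abstract can be sketched with a toy one-dimensional GP posterior. The RBF kernel, its length-scale, and the mean-plus-scaled-deviation selection rule below are standard illustrative choices, not the cited implementation.

```python
import numpy as np

def rbf(a, b, length=0.5):
    # Squared-exponential kernel between two 1-D input arrays.
    d = a.reshape(-1, 1) - b.reshape(1, -1)
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-3):
    """Posterior mean and standard deviation of a GP at x_query."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_query, x_train)
    Kss = rbf(x_query, x_query)
    mu = Ks @ np.linalg.solve(K, y_train)
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mu, np.sqrt(np.clip(np.diag(cov), 0.0, None))

def ucb_choice(x_train, y_train, candidates, beta=2.0):
    """Optimism in the face of uncertainty: prefer the candidate with
    the largest posterior mean plus beta posterior deviations."""
    mu, sigma = gp_posterior(x_train, y_train, candidates)
    return candidates[int(np.argmax(mu + beta * sigma))]
```

With all observed rewards equal, the rule deliberately picks the candidate farthest from the data, where the GP's predictive uncertainty, and hence the optimism bonus, is largest.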
1402.1958
1873068757
The computational costs of inference and planning have confined Bayesian model-based reinforcement learning to one of two dismal fates: powerful Bayes-adaptive planning but only for simplistic models, or powerful, Bayesian non-parametric models but using simple, myopic planning strategies such as Thompson sampling. We ask whether it is feasible and truly beneficial to combine rich probabilistic models with a closer approximation to fully Bayesian planning. First, we use a collection of counterexamples to show formal problems with the over-optimism inherent in Thompson sampling. Then we leverage state-of-the-art techniques in efficient Bayes-adaptive planning and non-parametric Bayesian methods to perform qualitatively better than both existing conventional algorithms and Thompson sampling on two contextual bandit-like problems.
More generally, our task is reminiscent of the case of active classification @cite_32 . But while active learning ultimately aims to find an accurate classifier on a labeling budget, we are concerned with a completely different metric, namely discounted return. In particular, a perfectly fine solution in our setting might be to avoid labeling a large part of the input space.
{ "cite_N": [ "@cite_32" ], "mid": [ "2056707879" ], "abstract": [ "We state and analyze the first active learning algorithm that finds an @e-optimal hypothesis in any hypothesis class, when the underlying distribution has arbitrary forms of noise. The algorithm, A^2 (for Agnostic Active), relies only upon the assumption that it has access to a stream of unlabeled examples drawn i.i.d. from a fixed distribution. We show that A^2 achieves an exponential improvement (i.e., requires only O([email protected]) samples to find an @e-optimal classifier) over the usual sample complexity of supervised learning, for several settings considered before in the realizable case. These include learning threshold classifiers and learning homogeneous linear separators with respect to an input distribution which is uniform over the unit sphere." ] }
1402.2166
2951131825
An element of a Coxeter group @math is fully commutative if any two of its reduced decompositions are related by a series of transpositions of adjacent commuting generators. These elements were extensively studied by Stembridge, in particular in the finite case. They index naturally a basis of the generalized Temperley--Lieb algebra. In this work we deal with any finite or affine Coxeter group @math , and we give explicit descriptions of fully commutative elements. Using our characterizations we then enumerate these elements according to their Coxeter length, and find in particular that the corresponding growth sequence is ultimately periodic in each type. When the sequence is infinite, this implies that the associated Temperley--Lieb algebra has linear growth.
The second article @cite_24 of Hagiwara deals with minuscule heaps. In general, minuscule heaps are a strict subset of FC heaps, but here they coincide, as the author shows in his Theorem 5.1. His characterization goes by embedding posets in a family of slanted lattices @math , and Hagiwara proves that FC heaps are precisely the finite convex subsets occurring in a lattice @math . It can easily be seen that this is a corollary of our work; the gradient @math defined in @cite_24 [p.17] can be seen in the path @math as the sum of the number of up steps and the number of horizontal @math steps.
{ "cite_N": [ "@cite_24" ], "mid": [ "19129141" ], "abstract": [ "A minuscule heap is a partially ordered set, together with a labeling of its ele- ments by the nodes of a Dynkin diagram, satisfying certain conditions derived by J. Stembridge. This paper classifies the minuscule heaps over the Dynkin diagram of type ˜ A." ] }
1402.2166
2951131825
An element of a Coxeter group @math is fully commutative if any two of its reduced decompositions are related by a series of transpositions of adjacent commuting generators. These elements were extensively studied by Stembridge, in particular in the finite case. They index naturally a basis of the generalized Temperley--Lieb algebra. In this work we deal with any finite or affine Coxeter group @math , and we give explicit descriptions of fully commutative elements. Using our characterizations we then enumerate these elements according to their Coxeter length, and find in particular that the corresponding growth sequence is ultimately periodic in each type. When the sequence is infinite, this implies that the associated Temperley--Lieb algebra has linear growth.
The paper @cite_5 by Hanusa and Jones was already mentioned several times. Here the FC permutations are classified and counted by first dividing them into long and short ones. Long permutations are easily counted and have a pleasing generating function, while the enumeration of short ones requires several pages and results in a rather complicated generating function. As mentioned before, we were able to confirm their conjecture about the precise beginning of the periodicity. We managed this by considering all elements in our approach, without dividing them beforehand into adequate "long" and "short" ones.
{ "cite_N": [ "@cite_5" ], "mid": [ "2145230252" ], "abstract": [ "We give a generating function for the fully commutative affine permutations enumerated by rank and Coxeter length, extending formulas due to Stembridge and Barcucci-Del Lungo-Pergola-Pinzani. For fixed rank, the length generating functions have coefficients that are periodic with period dividing the rank. In the course of proving these formulas, we obtain results that elucidate the structure of the fully commutative affine permutations." ] }
1402.2166
2951131825
An element of a Coxeter group @math is fully commutative if any two of its reduced decompositions are related by a series of transpositions of adjacent commuting generators. These elements were extensively studied by Stembridge, in particular in the finite case. They index naturally a basis of the generalized Temperley--Lieb algebra. In this work we deal with any finite or affine Coxeter group @math , and we give explicit descriptions of fully commutative elements. Using our characterizations we then enumerate these elements according to their Coxeter length, and find in particular that the corresponding growth sequence is ultimately periodic in each type. When the sequence is infinite, this implies that the associated Temperley--Lieb algebra has linear growth.
In the recent work @cite_35 by Al Harbat, the author classifies FC elements by indicating a normal form for each of them. That is, the main theorem exhibits a family of reduced FC expressions where each FC element is represented exactly once. We will not detail this here, but these normal forms correspond to a particular linear extension of FC heaps which is fairly easy to describe.
{ "cite_N": [ "@cite_35" ], "mid": [ "1668917366" ], "abstract": [ "We classify fully commutative elements in the affine Coxeter group of type @math . We give a normal form for such elements, then we propose an application of this normal form: we lift these fully commutative elements to the affine braid group of type @math and we get a form for \"fully commutative braids\"." ] }
1402.1732
2081497391
The dining cryptographers protocol implements a multiple access channel in which senders and recipients are anonymous. A problem is that a malicious participant can disrupt communication by deliberately creating collisions. We propose a computationally secure dining cryptographers protocol with collision resolution that achieves a maximum stable throughput of 0.924 messages per round and which allows disruptors to be detected easily.
Superposed receiving @cite_9 @cite_8 is a collision resolution technique for the dining cryptographers protocol that achieves a throughput of 100% when the message values are elements of an additive group. When a collision occurs, the average of the message values is computed and only messages whose value is less than this average are retransmitted. Like in SICTA, interference cancellation is used, which leads to the 100% throughput. However, this technique requires the use of an additive finite group, and it cannot be implemented using the algebraic ciphertexts that we need for efficient ciphertext generation and for zero-knowledge proofs.
{ "cite_N": [ "@cite_9", "@cite_8" ], "mid": [ "2065256113", "1601454324" ], "abstract": [ "In present day communication networks, the network operator or an intruder could easily observe when, how much and with whom the users communicate (traffic analysis), even if the users employ end-to-end encryption. With the increasing use of ISDNs, this becomes a severe threat.Therefore, we summarize basic concepts to keep the recipient and sender or at least their relationship unobservable, consider some possible implementations and necessary hierarchical extensions, and propose and evaluate some suitable performance and reliability enhancements.", "A protocol is described which allows to send and receive messages anonymously using an arbitrary communication network, and it is proved to be unconditionally secure." ] }
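The splitting rule described above can be sketched as follows. This is a toy model that ignores the cryptographic layer: it assumes distinct integer message values (e.g. tagged with sender ids, which guarantees progress) and that a collision of the retransmitted subset again reveals sum and count, with the complementary subset deduced by cancellation.

```python
def resolve(messages):
    """Superposed-receiving style collision resolution (sketch).

    A collision of n messages over an additive group reveals their
    sum and count, hence their average.  Only messages strictly
    below the average retransmit; the rest are recovered by
    cancellation.  Recursion resolves both halves.
    """
    if len(messages) <= 1:
        return list(messages)
    avg = sum(messages) / len(messages)
    low = [m for m in messages if m < avg]    # retransmitted subset
    high = [m for m in messages if m >= avg]  # deduced by cancellation
    return resolve(low) + resolve(high)
```

A side effect of splitting at the average is that the messages come out in sorted order, since every value in the low branch is smaller than every value in the high branch.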
1402.1732
2081497391
The dining cryptographers protocol implements a multiple access channel in which senders and recipients are anonymous. A problem is that a malicious participant can disrupt communication by deliberately creating collisions. We propose a computationally secure dining cryptographers protocol with collision resolution that achieves a maximum stable throughput of 0.924 messages per round and which allows disruptors to be detected easily.
A fully verifiable dining cryptographers protocol was proposed in @cite_0 and rediscovered in @cite_4 . In this protocol, we have 100% verifiability, which can be lengthy and cumbersome. Current systems are using mixnets to perform the reservations and therefore they are inefficient when only a few reservations are made. Further, they do not easily adapt to situations where participants join or leave frequently.
{ "cite_N": [ "@cite_0", "@cite_4" ], "mid": [ "2052267638", "2200869402" ], "abstract": [ "We argue that the random oracle model—where all parties have access to a public random oracle—provides a bridge between cryptographic theory and cryptographic practice. In the paradigm we suggest, a practical protocol P is produced by first devising and proving correct a protocol P R for the random oracle model, and then replacing oracle accesses by the computation of an “appropriately chosen” function h . This paradigm yields protocols much more efficient than standard ones while retaining many of the advantages of provable security. We illustrate these gains for problems including encryption, signatures, and zero-knowledge proofs.", "Among anonymity systems, DC-nets have long held attraction for their resistance to traffic analysis attacks, but practical implementations remain vulnerable to internal disruption or \"jamming\" attacks, which require time-consuming detection procedures to resolve. We present Verdict, the first practical anonymous group communication system built using proactively verifiable DC-nets: participants use public-key cryptography to construct DC-net ciphertexts, and use zero-knowledge proofs of knowledge to detect and exclude misbehavior before disruption. We compare three alternative constructions for verifiable DC-nets: one using bilinear maps and two based on simpler ElGamal encryption. While verifiable DC-nets incur higher computational overheads due to the public-key cryptography involved, our experiments suggest that Verdict is practical for anonymous group messaging or microblogging applications, supporting groups of 100 clients at 1 second per round or 1000 clients at 10 seconds per round. Furthermore, we show how existing symmetric-key DC-nets can \"fall back\" to a verifiable DC-net to quickly identify misbehavior, speeding up previous detections schemes by two orders of magnitude." ] }
1402.2175
2953008500
Let @math be a property of function @math for a fixed prime @math . An algorithm is called a tester for @math if, given a query access to the input function @math , with high probability, it accepts when @math satisfies @math and rejects when @math is "far" from satisfying @math . In this paper, we give a characterization of affine-invariant properties that are (two-sided error) testable with a constant number of queries. The characterization is stated in terms of decomposition theorems, which roughly claim that any function can be decomposed into a structured part that is a function of a constant number of polynomials, and a pseudo-random part whose Gowers norm is small. We first give an algorithm that tests whether the structured part of the input function has a specific form. Then we show that an affine-invariant property is testable with a constant number of queries if and only if it can be reduced to the problem of testing whether the structured part of the input function is close to one of a constant number of candidates.
Finally, we mention that a characterization of locally testable properties is known in a very different setting. Here, we are given an instance of a CSP and query access to an assignment for the instance, and we want to test whether the assignment is a satisfying assignment or far from being one. Depending on the constraints we are allowed to use, CSPs can express many different problems, and the query complexity of testing changes drastically from constant to linear (in the number of variables). Recently, Bhattacharyya and Yoshida @cite_3 completely classified Boolean constraints in terms of query complexity.
{ "cite_N": [ "@cite_3" ], "mid": [ "22128777" ], "abstract": [ "Given an instance @math of a CSP, a tester for @math distinguishes assignments satisfying @math from those which are far from any assignment satisfying @math . The efficiency of a tester is measured by its query complexity, the number of variable assignments queried by the algorithm. In this paper, we characterize the hardness of testing Boolean CSPs in terms of the algebra generated by the relations used to form constraints. In terms of computational complexity, we show that if a non-trivial Boolean CSP is sublinear-query testable (resp., not sublinear-query testable), then the CSP is in NL (resp., P-complete, ⊕L-complete or NL-complete) and that if a sublinear-query testable Boolean CSP is constant-query testable (resp., not constant-query testable), then counting the number of solutions of the CSP is in P (resp., @math P-complete). Also, we conjecture that a CSP instance is testable in sublinear time if its Gaifman graph has bounded treewidth. We confirm the conjecture when a near-unanimity operation is a polymorphism of the CSP." ] }
1402.2107
2950288336
This work introduces a new task preemption primitive for Hadoop, that allows tasks to be suspended and resumed exploiting existing memory management mechanisms readily available in modern operating systems. Our technique fills the gap that exists between the two extremes cases of killing tasks (which waste work) or waiting for their completion (which introduces latency): experimental results indicate superior performance and very small overheads when compared to existing alternatives.
Currently, two preemption strategies are available for Hadoop. One technique is to wait for tasks that should be preempted to complete: this is done using the wait strategy. Another approach is to kill tasks, using the kill primitive. Clearly, the first policy has the shortcoming of introducing large latencies for high-priority tasks, while the second one wastes work done by killed tasks. We refer to the work by Cheng et al. @cite_14 for an approach that strives to mitigate the impact of the kill strategy by adopting an appropriate eviction policy (i.e., choosing which tasks to kill). In our evaluation, we compare our new preemption primitive to both wait and kill.
{ "cite_N": [ "@cite_14" ], "mid": [ "1516820013" ], "abstract": [ "Modern production clusters are often shared by multiple types of jobs with different priorities in order to improve resource utilization. Preemption is a common technique employed by MapReduce schedulers to avoid delaying production jobs while allowing the cluster to be shared by other non-production jobs. In addition, it also prevents a large job from occupying too many resources and starving others. Recent literature shows that jobs in production MapReduce clusters have a mixture of lengths and sizes spanning many orders of magnitude. In this type of environments, the current preemption policy used by MapReduce schedulers can significantly delay the completion time of long running tasks, resulting in waste of resources. This paper firstly discusses the heterogeneous nature of MapReduce jobs and their arrival rates in several production clusters. Secondly, we characterize the situations where the current preemption policy causes significant preemption penalty. We then propose a simple mechanism that works in conjunction with existing job schedulers to address this problem. Finally, we evaluate our solution under various types of workloads in Amazon EC2. Experiments show our method can improve system normalized performance by 15 during busy periods by effectively avoiding unnecessary preemption while preserving fairness." ] }
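The trade-off between the two existing strategies can be summarized in a toy cost model. The function, its parameters, and the suspend overhead value are illustrative assumptions, not Hadoop's API or measured numbers.

```python
def preemption_cost(progress, remaining, suspend_overhead=0.05):
    """Toy cost model for preempting one low-priority task.

    progress  -- compute time the task has already spent (seconds)
    remaining -- compute time the task still needs (seconds)
    Returns, per strategy, (wasted_work, extra_latency) imposed on
    the high-priority task that triggered the preemption.
    """
    return {
        "wait": (0.0, remaining),            # no waste, full delay
        "kill": (progress, 0.0),             # all done work is lost
        "suspend": (0.0, suspend_overhead),  # small pause to swap out
    }
```

For a task that is 80 seconds in with 20 seconds left, waiting delays the high-priority task by 20 seconds, killing discards 80 seconds of work, while a suspend/resume primitive approaches zero on both axes at the cost of the swap-out overhead.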
1402.1674
2950510523
According to the recent rulings of the Federal Communications Commission (FCC), TV white spaces (TVWS) can now be accessed by secondary users (SUs) after a list of vacant TV channels is obtained via a geo-location database. Proper business models are therefore essential for database operators to manage geo-location databases. Database access can be simultaneously priced under two different schemes: the registration scheme and the service plan scheme. In the registration scheme, the database reserves part of the TV bandwidth for registered White Space Devices (WSDs). In the service plan scheme, the WSDs are charged according to their queries. In this paper, we investigate the business model for the TVWS database under a hybrid pricing scheme. We consider the scenario where a database operator employs both the registration scheme and the service plan scheme to serve the SUs. The SUs' choices of different pricing schemes are modeled as a non-cooperative game and we derive distributed algorithms to achieve Nash Equilibrium (NE). Considering the NE of the SUs, the database operator optimally determines pricing parameters for both pricing schemes in terms of bandwidth reservation, registration fee and query plans.
Most existing works on geo-location databases can be classified into two categories. Some works focus on the design of geo-location databases to protect primary users. The authors of @cite_6 discussed methods to calculate the protection area for TV stations, and the authors of @cite_2 designed a database-driven white space network based on measurement studies and terrain data. Other works focus on networking issues under the assumption that the database is already set up. The authors of @cite_1 presented a white space system utilizing a database, and the authors of @cite_12 considered the channel selection and access point association problem. One recent work @cite_9 also addresses the business model related to the geo-location database. In @cite_9 , the authors proposed that the geo-location database act as a spectrum broker reserving spectrum from spectrum licensees. They considered only one pricing scheme, which is similar to the registration scheme discussed in our paper. Compared to our previous work @cite_20 , in this paper we further extend the scenario to non-strategic SUs and compare the pricing schemes with non-strategic and strategic SUs under the complete information scenario. We also extend our theoretical analysis and numerical evaluations.
{ "cite_N": [ "@cite_9", "@cite_1", "@cite_6", "@cite_2", "@cite_20", "@cite_12" ], "mid": [ "2028201430", "", "2133039196", "2171031315", "1985606812", "2058560404" ], "abstract": [ "Geo-location database driven white space network is a very promising approach for improving secondary spectrum utilization. In this paper, we consider the business modeling for geo-location database driven white space network. In our proposed model, the database acts as a spectrum broker buying (reserving) bandwidth from spectrum licensees in advance, and then resells the reserved bandwidth to unlicensed white space devices (WSDs) in real-time. We study the optimal bandwidth reservation for the database with WSDs' demand uncertainty under both information symmetry and asymmetry. Under information symmetry, the database and the WSD experience the same degree of uncertainty about the market demand. We derive the optimal bandwidth reservations in a centralized integrated manner (as a benchmark). Under information asymmetry, the WSD has more information (i.e., with less uncertainty) about demand (due to the proximity to end-users). We propose a contract-based bandwidth reservation mechanism, which ensures WSDs share their local information with the database credibly. We further characterize the optimal bandwidth reservation contract systematically. Simulations show that under information asymmetry, the optimal bandwidth reservation contract improves both the database's profit and the social welfare significantly (larger than 30 in our simulations) without sacrificing the WSDs' benefits, comparing to those mechanisms without information sharing.", "", "The opening of the television bands in the United States presents an exciting opportunity for secondary spectrum utilization. Protecting licensed broadcast television viewers from harmful interference due to secondary spectrum usage is critical to the successful deployment of TV white space devices. 
A wide variety of secondary system operating scenarios must be considered in any potential interference analysis, as described below. Several different types of licensed television transmitters currently exist in the TV bands, along with secondary licensed services, such as wireless microphones. All licensed services must be adequately protected from harmful interference, which can readily and reliably be achieved with the described geo-location database methods. Specific implementation details of geo-location databases are discussed, including several complexity reduction techniques. Geo-location database techniques are also shown to more efficiently utilize available spectrum than other spectrum access techniques.", "The most recent FCC ruling proposes relying on a database of incumbents as the primary means of determining white space availability at any white spaces device (WSD). While the ruling provides broad guidelines for the database, the specifics of its design, features, implementation, and use are yet to be determined. Furthermore, architecting a network where all WSDs rely on the database raises several systems and networking challenges that have remained unexplored. Also, the ruling treats the database only as a storehouse for incumbents. We believe that the mandated use of the database has an additional opportunity: a means to dynamically manage the RF spectrum. Motivated by this opportunity, in this paper we present SenseLess, a database driven white spaces network. As suggested by its very name, in SenseLess, WSDs obviate the need to sense the spectrum by relying entirely on a database service to determine white spaces availability. The service, using a combination of an up-to-date database of incumbents, sophisticated signal propagation modeling, and an efficient content dissemination mechanism ensures efficient, scalable, and safe white space network operation. 
We build, deploy, and evaluate SenseLess and compare our results to ground truth spectrum measurements. We present the unique system design considerations that arise due to operating over the white spaces. We also evaluate its efficiency and scalability. To the best of our knowledge, this is the first paper that identifies and examines the systems and networking challenges that arise from operating a white space network, which is solely dependent on a channel occupancy database.", "According to the recent rulings of the Federal Communications Commission (FCC), TV white spaces (TVWS) can now be accessed by secondary users (SUs) after a list of vacant TV channels is obtained via a geo-location database. Proper business models are essential for database operators to manage the cost of maintaining geo-location databases. Database access can be simultaneously priced under two different schemes: the registration scheme and the service plan scheme. In the registration scheme, the database reserves part of the TV bandwidth for registered White Space Devices (WSD) in a soft-license way. In the service plan scheme, WSDs are charged according to their queries. In this paper, we investigate the business model for the TVWS database under a hybrid pricing scheme. We consider the scenario where a database operator employs both the registration scheme and the service plan scheme to serve the SUs. The SUs' choices of different pricing schemes are modeled as a non-cooperative game and we derive distributed algorithms to achieve the Nash Equilibrium (NE). Considering the NE of the SUs, the database operator optimally determines the pricing parameters for both pricing schemes in terms of bandwidth reservation, registration fee and query plans.", "According to FCC's ruling for white-space spectrum access, white-space devices are required to query a database to determine the spectrum availability. 
In this paper, we adopt a game theoretic approach for the database-assisted white-space access point (AP) network design. We first model the channel selection problem among the APs as a distributed AP channel selection game, and design a distributed AP channel selection algorithm that achieves a Nash equilibrium. We then propose a state-based game formulation for the distributed AP association problem of the secondary users by taking the cost of mobility into account. We show that the state-based distributed AP association game has the finite improvement property, and design a distributed AP association algorithm can converge to a state-based Nash equilibrium. Numerical results show that the algorithm is robust to the perturbation by secondary users' dynamical leaving and entering the system." ] }
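The AP channel-selection game in the last abstract above has the finite improvement property, so repeated best responses converge to a Nash equilibrium. A minimal sketch of such a best-response loop, assuming a hypothetical interference model (an AP's cost on a channel is simply the number of interfering neighbors using it; this is an illustration, not the cited paper's formulation):

```python
import random

def best_response_channel_selection(neighbors, num_channels, max_rounds=100):
    """Best-response dynamics for a distributed AP channel-selection game.

    neighbors: dict mapping each AP to the list of APs it interferes with.
    Because this is a congestion-style potential game, the finite improvement
    property guarantees the loop terminates at a Nash equilibrium.
    """
    aps = list(neighbors)
    channel = {ap: random.randrange(num_channels) for ap in aps}

    def interference(ap, ch):
        # number of neighbors currently on channel ch
        return sum(1 for nb in neighbors[ap] if channel[nb] == ch)

    for _ in range(max_rounds):
        improved = False
        for ap in aps:
            best = min(range(num_channels), key=lambda ch: interference(ap, ch))
            if interference(ap, best) < interference(ap, channel[ap]):
                channel[ap] = best      # strict unilateral improvement
                improved = True
        if not improved:                # no AP can improve: Nash equilibrium
            break
    return channel
```

On a fully connected triangle with three channels, any equilibrium assigns all three APs distinct channels, since a shared channel always leaves a free channel with zero interference.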
1402.1674
2950510523
According to the recent rulings of the Federal Communications Commission (FCC), TV white spaces (TVWS) can now be accessed by secondary users (SUs) after a list of vacant TV channels is obtained via a geo-location database. Proper business models are therefore essential for database operators to manage geo-location databases. Database access can be simultaneously priced under two different schemes: the registration scheme and the service plan scheme. In the registration scheme, the database reserves part of the TV bandwidth for registered White Space Devices (WSDs). In the service plan scheme, the WSDs are charged according to their queries. In this paper, we investigate the business model for the TVWS database under a hybrid pricing scheme. We consider the scenario where a database operator employs both the registration scheme and the service plan scheme to serve the SUs. The SUs' choices of different pricing schemes are modeled as a non-cooperative game and we derive distributed algorithms to achieve Nash Equilibrium (NE). Considering the NE of the SUs, the database operator optimally determines pricing parameters for both pricing schemes in terms of bandwidth reservation, registration fee and query plans.
Many works also focus on the economic issues of dynamic spectrum sharing. In @cite_16 , pricing-based spectrum access control is investigated under competition among secondary users. In @cite_10 , spectrum pricing with spatial reuse is considered. Contract theory is utilized in scenarios where the spectrum buyers have hidden information: @cite_19 leverages contract theory to analyze spectrum trading between a primary operator and SUs, and @cite_22 applies contract theory to the cooperative communication scenario. In this paper, we also model the service plan design with contract theory. However, due to the co-existence of the hybrid pricing schemes, there is uncertainty about the number of SUs choosing the contract items, which differs from existing works.
{ "cite_N": [ "@cite_19", "@cite_16", "@cite_10", "@cite_22" ], "mid": [ "2126643042", "", "2098012117", "2159035129" ], "abstract": [ "Cognitive radio is a promising paradigm to achieve efficient utilization of spectrum resource by allowing the unlicensed users (i.e., secondary users, SUs) to access the licensed spectrum. Market-driven spectrum trading is an efficient way to achieve dynamic spectrum accessing sharing. In this paper, we consider the problem of spectrum trading with single primary spectrum owner (or primary user, PO) selling his idle spectrum to multiple SUs. We model the trading process as a monopoly market, in which the PO acts as monopolist who sets the qualities and prices for the spectrum he sells, and the SUs act as consumers who choose the spectrum with appropriate quality and price for purchasing. We design a monopolist-dominated quality-price contract, which is offered by the PO and contains a set of quality-price combinations each intended for a consumer type. A contract is feasible if it is incentive compatible (IC) and individually rational (IR) for each SU to purchase the spectrum with the quality-price intended for his type. We propose the necessary and sufficient conditions for the contract to be feasible. We further derive the optimal contract, which is feasible and maximizes the utility of the PO, for both discrete-consumer-type model and continuous-consumer-type model. Moreover, we analyze the social surplus, i.e., the aggregate utility of both PO and SUs, and we find that, depending on the distribution of consumer types, the social surplus under the optimal contract may be less than or close to the maximum social surplus.", "", "In Cognitive Radio Networks (CRN), there are multiple primary and secondary users in a region, and primaries can lease out their unused bandwidth to secondaries in exchange for a fee. 
This gives rise to price competition among the primaries, wherein each primary tries to attract secondaries by setting a lower price for its bandwidth than the other primaries. Radio spectrum has the distinctive feature that transmissions at neighboring locations on the same channel interfere with each other, whereas the same channel can be used at far-off locations without mutual interference. So in the above price competition scenario in a CRN, each primary must jointly select a set of mutually non-interfering locations within the region (which corresponds to an independent set in the conflict graph representing the region) at which to offer bandwidth and the price at each location. In this paper, we analyze this price competition scenario as a game and seek a Nash Equilibrium (NE). We identify a class of conflict graphs, which we refer to as mean valid graphs, such that the conflict graphs of a large number of topologies that commonly arise in practice are mean valid. We explicitly compute a symmetric NE in mean valid graphs and show that it is unique.", "Providing proper economic incentives is essential for the success of dynamic spectrum sharing. Cooperative spectrum sharing is one effective way to achieve this goal. In cooperative spectrum sharing, secondary users (SUs) relay traffics for primary users (PUs), in exchange for dedicated transmission time for the SUs' own communication needs. In this paper, we study the cooperative spectrum sharing under incomplete information, where SUs' types (which capture the relay channel gains and the SUs' power costs) are private information and are not known to the PU. Inspired by the contract theory, we model the network as a labor market. The PU is an employer who offers a contract to the SUs. The contract consists of a set of items representing combinations of spectrum access time (i.e., reward) and relay power (i.e., contribution). 
The SUs are employees, and each of them selects the best contract item to maximize its payoff. We study the optimal contract design for both weakly and strongly incomplete information scenarios. First, we provide necessary and sufficient conditions for feasible contracts in both scenarios. In the weakly incomplete information scenario, we further derive the optimal contract that achieves the same maximum PU's utility as in the complete information benchmark. In the strongly incomplete information scenario, we propose a Decompose-and-Compare algorithm that achieves a close-to-optimal contract. We further show that the PU's expected utility loss due to the suboptimal algorithm and the strongly incomplete information are both relatively small (less than 2 and 1.3 , respectively, in our numerical results with two SU types)." ] }
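The record above models SUs' choices between a registration scheme and a pay-per-query service plan as a non-cooperative game solved by distributed best responses. The sketch below illustrates that kind of fixed-point iteration with a purely hypothetical cost model (a flat registration fee plus a crowding penalty on the reserved bandwidth); the actual utilities in the cited paper are different:

```python
def scheme_equilibrium(queries, reg_fee, query_price, bandwidth, max_rounds=100):
    """Best-response dynamics for SUs choosing between two pricing schemes.

    queries: per-SU query demand. choice[i] is True if SU i registers,
    False if it pays per query. Hypothetical costs: registration costs a
    flat fee plus a congestion term that grows with the number of
    registered SUs sharing the reserved bandwidth.
    """
    choice = [False] * len(queries)
    for _ in range(max_rounds):
        changed = False
        n_reg = sum(choice)
        for i, q in enumerate(queries):
            others = n_reg - choice[i]                    # registered SUs besides i
            reg_cost = reg_fee + (others + 1) / bandwidth  # crowding penalty
            plan_cost = query_price * q
            want_reg = reg_cost < plan_cost
            if want_reg != choice[i]:
                n_reg += 1 if want_reg else -1
                choice[i] = want_reg
                changed = True
        if not changed:                 # no SU wants to switch: equilibrium
            break
    return choice
```

With these illustrative parameters, light users stay on the service plan while heavy users register, which is the qualitative split the hybrid-pricing model is after.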
1402.1674
2950510523
According to the recent rulings of the Federal Communications Commission (FCC), TV white spaces (TVWS) can now be accessed by secondary users (SUs) after a list of vacant TV channels is obtained via a geo-location database. Proper business models are therefore essential for database operators to manage geo-location databases. Database access can be simultaneously priced under two different schemes: the registration scheme and the service plan scheme. In the registration scheme, the database reserves part of the TV bandwidth for registered White Space Devices (WSDs). In the service plan scheme, the WSDs are charged according to their queries. In this paper, we investigate the business model for the TVWS database under a hybrid pricing scheme. We consider the scenario where a database operator employs both the registration scheme and the service plan scheme to serve the SUs. The SUs' choices of different pricing schemes are modeled as a non-cooperative game and we derive distributed algorithms to achieve Nash Equilibrium (NE). Considering the NE of the SUs, the database operator optimally determines pricing parameters for both pricing schemes in terms of bandwidth reservation, registration fee and query plans.
Some works focus on the hybrid pricing of other limited resources. In @cite_23 , Wang et al. study the problem of capacity segmentation between two different pricing schemes for cloud service providers. One key difference between our work and @cite_23 is that the strategic SUs considered in our paper can dynamically choose between pricing schemes, whereas in @cite_23 the users are pre-categorized into the different pricing schemes before the schemes are designed.
{ "cite_N": [ "@cite_23" ], "mid": [ "2065752649" ], "abstract": [ "Cloud resources are usually priced in multiple markets with different service guarantees. For example, Amazon EC2 prices virtual instances under three pricing schemes -- the subscription option (a.k.a., Reserved Instances), the pay-as-you-go offer (a.k.a., On-Demand Instances), and an auction-like spot market (a.k.a., Spot Instances) -- simultaneously. There arises a new problem of capacity segmentation: how can a provider allocate resources to different categories of pricing schemes, so that the total revenue is maximized? In this paper, we consider an EC2-like pricing scheme with traditional pay-as-you-go pricing augmented by an auction market, where bidders periodically bid for resources and can use the instances for as long as they wish, until the clearing price exceeds their bids. We show that optimal periodic auctions must follow the design of m+1-price auction with seller's reservation price. Theoretical analysis also suggests the connections between periodic auctions and EC2 spot market. Furthermore, we formulate the optimal capacity segmentation strategy as a Markov decision process over some demand prediction window. To mitigate the high computational complexity of the conventional dynamic programming solution, we develop a near-optimal solution that has significantly lower complexity and is shown to asymptotically approach the optimal revenue." ] }
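The cloud-pricing abstract above notes that the optimal periodic auction follows an (m+1)-price design with a seller's reservation price. A minimal sketch of that uniform-price clearing rule, simplified to single-unit bids with no temporal dynamics (an illustration of the mechanism, not the paper's full Markov-decision formulation):

```python
def m_plus_1_price_auction(bids, capacity, reserve):
    """Clear an (m+1)-price auction with a seller's reservation price.

    The top `capacity` bids at or above the reserve win, and every winner
    pays the same clearing price: the highest losing bid, floored at the
    reserve. Returns (winning_bids, clearing_price).
    """
    eligible = sorted((b for b in bids if b >= reserve), reverse=True)
    winners = eligible[:capacity]
    if not winners:
        return [], reserve
    # clearing price: the (m+1)-th highest eligible bid, or the reserve
    price = eligible[capacity] if len(eligible) > capacity else reserve
    return winners, max(price, reserve)
```

With bids [5, 3, 8, 1], capacity 2, and reserve 2, the bids 8 and 5 win and both pay the third-highest eligible bid, 3.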
1402.1515
2070712430
In this paper, we consider learning dictionary models over a network of agents, where each agent is only in charge of a portion of the dictionary elements. This formulation is relevant in Big Data scenarios where large dictionary models may be spread over different spatial locations and it is not feasible to aggregate all dictionaries in one location due to communication and privacy considerations. We first show that the dual function of the inference problem is an aggregation of individual cost functions associated with different agents, which can then be minimized efficiently by means of diffusion strategies. The collaborative inference step generates dual variables that are used by the agents to update their dictionaries without the need to share these dictionaries or even the coefficient models for the training data. This is a powerful property that leads to an effective distributed procedure for learning dictionaries over large networks (e.g., hundreds of agents in our experiments). Furthermore, the proposed learning strategy operates in an online manner and is able to respond to streaming data, where each data sample is presented to the network once.
Dictionary learning is a useful procedure by which dependencies among input features can be represented in terms of suitable bases @cite_37 @cite_16 @cite_21 @cite_0 @cite_32 @cite_18 @cite_20 @cite_2 @cite_3 @cite_25 . It has found applications in many machine learning and inference tasks including image denoising @cite_0 @cite_32 , dimensionality-reduction @cite_18 @cite_20 , bi-clustering @cite_2 , feature-extraction and classification @cite_3 , and novel document detection @cite_25 . Dictionary learning usually alternates between two steps: (i) an inference (sparse coding) step and (ii) a dictionary update step. The first step finds a sparse representation for the input data using the existing dictionary by solving, for example, a regularized regression problem, while the second step usually employs a gradient descent iteration to update the dictionary entries.
{ "cite_N": [ "@cite_37", "@cite_18", "@cite_21", "@cite_32", "@cite_3", "@cite_0", "@cite_2", "@cite_16", "@cite_25", "@cite_20" ], "mid": [ "2160547390", "1975900269", "", "2112447569", "2128638419", "2153663612", "1992918752", "", "2117242598", "2044809283" ], "abstract": [ "In recent years there has been a growing interest in the study of sparse representation of signals. Using an overcomplete dictionary that contains prototype signal-atoms, signals are described by sparse linear combinations of these atoms. Applications that use sparse representation are many and include compression, regularization in inverse problems, feature extraction, and more. Recent activity in this field has concentrated mainly on the study of pursuit algorithms that decompose signals with respect to a given dictionary. Designing dictionaries to better fit the above model can be done by either selecting one from a prespecified set of linear transforms or adapting the dictionary to a set of training signals. Both of these techniques have been considered, but this topic is largely still open. In this paper we propose a novel algorithm for adapting dictionaries in order to achieve sparse signal representations. Given a set of training signals, we seek the dictionary that leads to the best representation for each member in this set, under strict sparsity constraints. We present a new method-the K-SVD algorithm-generalizing the K-means clustering process. K-SVD is an iterative method that alternates between sparse coding of the examples based on the current dictionary and a process of updating the dictionary atoms to better fit the data. The update of the dictionary columns is combined with an update of the sparse representations, thereby accelerating convergence. The K-SVD algorithm is flexible and can work with any pursuit method (e.g., basis pursuit, FOCUSS, or matching pursuit). 
We analyze this algorithm and demonstrate its results both on synthetic tests and in applications on real image data", "Principal component analysis (PCA) is widely used in data processing and dimensionality reduction. However, PCA suffers from the fact that each principal component is a linear combination of all the original variables, thus it is often difficult to interpret the results. We introduce a new method called sparse principal component analysis (SPCA) using the lasso (elastic net) to produce modified principal components with sparse loadings. We first show that PCA can be formulated as a regression-type optimization problem; sparse loadings are then obtained by imposing the lasso (elastic net) constraint on the regression coefficients. Efficient algorithms are proposed to fit our SPCA models for both regular multivariate data and gene expression arrays. We also give a new formula to compute the total variance of modified principal components. As illustrations, SPCA is applied to real and simulated data with encouraging results.", "", "Sparse coding--that is, modelling data vectors as sparse linear combinations of basis elements--is widely used in machine learning, neuroscience, signal processing, and statistics. This paper focuses on the large-scale matrix factorization problem that consists of learning the basis set in order to adapt it to specific data. Variations of this problem include dictionary learning in signal processing, non-negative matrix factorization and sparse principal component analysis. In this paper, we propose to address these tasks with a new online optimization algorithm, based on stochastic approximations, which scales up gracefully to large data sets with millions of training samples, and extends naturally to various matrix factorization formulations, making it suitable for a wide range of learning problems. 
A proof of convergence is presented, along with experiments with natural images and genomic data demonstrating that it leads to state-of-the-art performance in terms of speed and optimization for both small and large data sets.", "It is now well established that sparse signal models are well suited for restoration tasks and can be effectively learned from audio, image, and video data. Recent research has been aimed at learning discriminative sparse models instead of purely reconstructive ones. This paper proposes a new step in that direction, with a novel sparse representation for signals belonging to different classes in terms of a shared dictionary and discriminative class models. The linear version of the proposed model admits a simple probabilistic interpretation, while its most general variant admits an interpretation in terms of kernels. An optimization framework for learning all the components of the proposed model is presented, along with experimental results on standard handwritten digit and texture classification tasks.", "We address the image denoising problem, where zero-mean white and homogeneous Gaussian additive noise is to be removed from a given image. The approach taken is based on sparse and redundant representations over trained dictionaries. Using the K-SVD algorithm, we obtain a dictionary that describes the image content effectively. Two training options are considered: using the corrupted image itself, or training on a corpus of high-quality image database. Since the K-SVD is limited in handling small image patches, we extend its deployment to arbitrary image sizes by defining a global image prior that forces sparsity over patches in every location in the image. We show how such Bayesian treatment leads to a simple and effective denoising algorithm. This leads to a state-of-the-art denoising performance, equivalent and sometimes surpassing recently published leading alternative denoising methods", "Summary. 
Sparse singular value decomposition (SSVD) is proposed as a new exploratory analysis tool for biclustering or identifying interpretable row–column associations within high-dimensional data matrices. SSVD seeks a low-rank, checkerboard structured matrix approximation to data matrices. The desired checkerboard structure is achieved by forcing both the left- and right-singular vectors to be sparse, that is, having many zero entries. By interpreting singular vectors as regression coefficient vectors for certain linear regressions, sparsity-inducing regularization penalties are imposed to the least squares regression to produce sparse singular vectors. An efficient iterative algorithm is proposed for computing the sparse singular vectors, along with some discussion of penalty parameter selection. A lung cancer microarray dataset and a food nutrition dataset are used to illustrate SSVD as a biclustering method. SSVD is also compared with some existing biclustering methods using simulated datasets.", "", "Given their pervasive use, social media, such as Twitter, have become a leading source of breaking news. A key task in the automated identification of such news is the detection of novel documents from a voluminous stream of text documents in a scalable manner. Motivated by this challenge, we introduce the problem of online l1-dictionary learning where unlike traditional dictionary learning, which uses squared loss, the l1-penalty is used for measuring the reconstruction error. We present an efficient online algorithm for this problem based on alternating directions method of multipliers, and establish a sublinear regret bound for this algorithm. 
Empirical results on news-stream and Twitter data, shows that this online l1-dictionary learning algorithm for novel document detection gives more than an order of magnitude speedup over the previously known batch algorithm, without any significant loss in quality of results.", "Principal component analysis (PCA) is a widely used tool for data analysis and dimension reduction in applications throughout science and engineering. However, the principal components (PCs) can sometimes be difficult to interpret, because they are linear combinations of all the original variables. To facilitate interpretation, sparse PCA produces modified PCs with sparse loadings, i.e. loadings with very few non-zero elements. In this paper, we propose a new sparse PCA method, namely sparse PCA via regularized SVD (sPCA-rSVD). We use the connection of PCA with singular value decomposition (SVD) of the data matrix and extract the PCs through solving a low rank matrix approximation problem. Regularization penalties are introduced to the corresponding minimization problem to promote sparsity in PC loadings. An efficient iterative algorithm is proposed for computation. Two tuning parameter selection methods are discussed. Some theoretical results are established to justify the use of sPCA-rSVD when only the data covariance matrix is available. In addition, we give a modified definition of variance explained by the sparse PCs. The sPCA-rSVD provides a uniform treatment of both classical multivariate data and high-dimension-low-sample-size (HDLSS) data. Further understanding of sPCA-rSVD and some existing alternatives is gained through simulation studies and real data examples, which suggests that sPCA-rSVD provides competitive results." ] }
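The related-work passage above describes dictionary learning as alternating between (i) a sparse-coding (inference) step and (ii) a dictionary-update step. A minimal NumPy sketch of that alternation, using a few ISTA iterations for the coding step and a plain gradient step with atom renormalization for the dictionary (illustrative only; this is neither K-SVD nor any specific cited algorithm):

```python
import numpy as np

def soft_threshold(x, t):
    # proximal operator of the l1 norm
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def dictionary_learning(X, num_atoms, lam=0.1, steps=50, lr=0.1, seed=0):
    """Alternate sparse coding (ISTA) and a gradient dictionary update.

    X: (d, n) data matrix. Returns dictionary D (d, num_atoms) with
    unit-norm atoms and sparse codes W (num_atoms, n).
    """
    rng = np.random.default_rng(seed)
    d, n = X.shape
    D = rng.standard_normal((d, num_atoms))
    D /= np.linalg.norm(D, axis=0)
    W = np.zeros((num_atoms, n))
    for _ in range(steps):
        # (i) inference: ISTA iterations on 0.5||X - DW||^2 + lam ||W||_1
        step = 1.0 / np.linalg.norm(D.T @ D, 2)
        for _ in range(10):
            W = soft_threshold(W - step * D.T @ (D @ W - X), step * lam)
        # (ii) dictionary update: gradient step, then renormalize the atoms
        D -= lr * (D @ W - X) @ W.T / n
        D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)
    return D, W
```

The coding step monotonically decreases the regularized objective for a fixed dictionary, so the final reconstruction residual falls well below that of the all-zero initial codes.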
1402.1515
2070712430
In this paper, we consider learning dictionary models over a network of agents, where each agent is only in charge of a portion of the dictionary elements. This formulation is relevant in Big Data scenarios where large dictionary models may be spread over different spatial locations and it is not feasible to aggregate all dictionaries in one location due to communication and privacy considerations. We first show that the dual function of the inference problem is an aggregation of individual cost functions associated with different agents, which can then be minimized efficiently by means of diffusion strategies. The collaborative inference step generates dual variables that are used by the agents to update their dictionaries without the need to share these dictionaries or even the coefficient models for the training data. This is a powerful property that leads to an effective distributed procedure for learning dictionaries over large networks (e.g., hundreds of agents in our experiments). Furthermore, the proposed learning strategy operates in an online manner and is able to respond to streaming data, where each data sample is presented to the network once.
With the increasing complexity of various learning tasks, it is not uncommon for the size of the learning dictionaries to be demanding in terms of memory and computing requirements. It is therefore important to study scenarios where the dictionary is not necessarily available in a single central location but its components are possibly spread out over multiple locations. This is particularly true in Big Data scenarios where large dictionary components may already be available at separate locations and it is not feasible to aggregate all dictionaries in one location due to communication and privacy considerations. This observation motivates us to examine how to learn a dictionary model that is stored over a network of agents, where each agent is in charge of only a portion of the dictionary elements. Compared with other works, the problem we solve in this article is how to learn a distributed dictionary model; this differs, for example, from the useful work in @cite_36 , where each agent is instead assumed to maintain the entire dictionary model.
{ "cite_N": [ "@cite_36" ], "mid": [ "2078899824" ], "abstract": [ "We consider the problem of distributed dictionary learning, where a set of nodes is required to collectively learn a common dictionary from noisy measurements. This approach may be useful in several contexts including sensor networks. Diffusion cooperation schemes have been proposed to solve the distributed linear regression problem. In this work we focus on a diffusion-based adaptive dictionary learning strategy: each node records independent observations and cooperates with its neighbors by sharing its local dictionary. The resulting algorithm corresponds to a distributed alternate optimization. Beyond dictionary learning, this strategy could be adapted to many matrix factorization problems in various settings. We illustrate its efficiency on some numerical experiments." ] }
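The abstract above builds on diffusion cooperation schemes originally proposed for distributed linear regression. A minimal adapt-then-combine diffusion LMS sketch for that regression setting (uniform combination weights over each neighborhood; the topology, step size, and data model below are illustrative assumptions, not the cited paper's setup):

```python
import numpy as np

def diffusion_lms(streams, neighbors, mu=0.05, dim=2):
    """Adapt-then-combine diffusion LMS over a network of nodes.

    streams: per-node list of (u, d) samples with regressor u and
    measurement d. neighbors: dict mapping each node to its neighbor list.
    Each node first takes a local LMS step on its own sample (adapt),
    then averages intermediate estimates with its neighbors (combine).
    """
    n = len(streams)
    w = [np.zeros(dim) for _ in range(n)]
    for t in range(len(streams[0])):
        # adapt: local stochastic-gradient (LMS) step at every node
        psi = []
        for k in range(n):
            u, d = streams[k][t]
            psi.append(w[k] + mu * u * (d - u @ w[k]))
        # combine: uniform averaging over each node's closed neighborhood
        for k in range(n):
            group = [k] + neighbors[k]
            w[k] = sum(psi[j] for j in group) / len(group)
    return w
```

On synthetic data generated from a common model, all nodes' estimates converge to a small neighborhood of the true parameter vector.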
1402.1515
2070712430
In this paper, we consider learning dictionary models over a network of agents, where each agent is only in charge of a portion of the dictionary elements. This formulation is relevant in Big Data scenarios where large dictionary models may be spread over different spatial locations and it is not feasible to aggregate all dictionaries in one location due to communication and privacy considerations. We first show that the dual function of the inference problem is an aggregation of individual cost functions associated with different agents, which can then be minimized efficiently by means of diffusion strategies. The collaborative inference step generates dual variables that are used by the agents to update their dictionaries without the need to share these dictionaries or even the coefficient models for the training data. This is a powerful property that leads to an effective distributed procedure for learning dictionaries over large networks (e.g., hundreds of agents in our experiments). Furthermore, the proposed learning strategy operates in an online manner and is able to respond to streaming data, where each data sample is presented to the network once.
In this paper, we first formulate a general dictionary learning problem, where the residual error function and the regularization function can assume different forms in different applications. As we shall explain, this form turns out not to be directly amenable to distributed implementations. However, when the regularization is strongly convex, we will show that the problem has a dual function that can be solved in a distributed manner using diffusion strategies @cite_11 @cite_38 @cite_7 @cite_5 . In this solution, the agents will not need to share their (private) dictionary elements but only the dual variable. Useful consensus strategies @cite_1 @cite_34 @cite_13 @cite_52 can also be used for the same purpose. However, since it has been shown that diffusion strategies have enhanced stability and learning abilities over consensus strategies @cite_12 @cite_14 @cite_28 , we will continue our presentation by focusing on diffusion strategies.
{ "cite_N": [ "@cite_38", "@cite_14", "@cite_7", "@cite_28", "@cite_1", "@cite_52", "@cite_5", "@cite_34", "@cite_13", "@cite_12", "@cite_11" ], "mid": [ "2042664989", "2012445782", "2056182476", "2024307985", "2108306501", "2154834860", "2086502731", "2092507976", "1603765807", "", "2121820607" ], "abstract": [ "Nature provides splendid examples of real-time learning and adaptation behavior that emerges from highly localized interactions among agents of limited capabilities. For example, schools of fish are remarkably apt at configuring their topologies almost instantly in the face of danger [1]: when a predator arrives, the entire school opens up to let the predator through and then coalesces again into a moving body to continue its schooling behavior. Likewise, in bee swarms, only a small fraction of the agents (about 5 ) are informed, and these informed agents are able to guide the entire swarm of bees to their new hive [2]. It is an extraordinary property of biological networks that sophisticated behavior is able to emerge from simple interactions among lower-level agents [3].", "This work deals with the topic of information processing over graphs. The presentation is largely self-contained and covers results that relate to the analysis and design of multi-agent networks for the distributed solution of optimization, adaptation, and learning problems from streaming data through localized interactions among agents. The results derived in this work are useful in comparing network topologies against each other, and in comparing networked solutions against centralized or batch implementations. There are many good reasons for the peaked interest in distributed implementations, especially in this day and age when the word \"network\" has become commonplace whether one is referring to social networks, power networks, transportation networks, biological networks, or other types of networks. 
Some of these reasons have to do with the benefits of cooperation in terms of improved performance and improved resilience to failure. Other reasons deal with privacy and secrecy considerations where agents may not be comfortable sharing their data with remote fusion centers. In other situations, the data may already be available in dispersed locations, as happens with cloud computing. One may also be interested in learning through data mining from big data sets. Motivated by these considerations, this work examines the limits of performance of distributed solutions and discusses procedures that help bring forth their potential more fully. The presentation adopts a useful statistical framework and derives performance results that elucidate the mean-square stability, convergence, and steady-state behavior of the learning networks. At the same time, the work illustrates how distributed processing over graphs gives rise to some revealing phenomena due to the coupling effect among the agents. These phenomena are discussed in the context of adaptive networks, along with examples from a variety of areas including distributed sensing, intrusion detection, distributed estimation, online adaptation, network system theory, and machine learning.", "Motivated by recent developments in the context of adaptation over networks, this work establishes useful results about the limiting global behavior of diffusion and consensus strategies for the solution of distributed optimization problems. It is known that the choice of combination policies has a direct bearing on the convergence and performance of distribued solutions. This article reveals what aspects of the combination policies determine the nature of the Pareto-optimal solution and how close the distributed solution gets to it. 
The results suggest useful constructive procedures to control the convergence behavior of distributed strategies and to design effective combination procedures.", "Adaptive networks consist of a collection of nodes with adaptation and learning abilities. The nodes interact with each other on a local level and diffuse information across the network to solve estimation and inference tasks in a distributed manner. In this work, we compare the mean-square performance of two main strategies for distributed estimation over networks: consensus strategies and diffusion strategies. The analysis in the paper confirms that under constant step-sizes, diffusion strategies allow information to diffuse more thoroughly through the network and this property has a favorable effect on the evolution of the network: diffusion networks are shown to converge faster and reach lower mean-square deviation than consensus networks, and their mean-square stability is insensitive to the choice of the combination weights. In contrast, and surprisingly, it is shown that consensus networks can become unstable even if all the individual nodes are stable and able to solve the estimation task on their own. When this occurs, cooperation over the network leads to a catastrophic failure of the estimation task. This phenomenon does not occur for diffusion networks: we show that stability of the individual nodes always ensures stability of the diffusion network irrespective of the combination topology. Simulation results support the theoretical findings.", "The paper studies distributed static parameter (vector) estimation in sensor networks with nonlinear observation models and noisy intersensor communication. It introduces separably estimable observation models that generalize the observability condition in linear centralized estimation to nonlinear distributed estimation. It studies two distributed estimation algorithms in separably estimable models, the NU (with its linear counterpart LU) and the NLU. 
Their update rule combines a consensus step (where each sensor updates the state by weight averaging it with its neighbors' states) and an innovation step (where each sensor processes its local current observation). This makes the three algorithms of the consensus + innovations type, very different from traditional consensus. This paper proves consistency (all sensors reach consensus almost surely and converge to the true parameter value), efficiency, and asymptotic unbiasedness. For LU and NU, it proves asymptotic normality and provides convergence rate guarantees. The three algorithms are characterized by appropriately chosen decaying weight sequences. Algorithms LU and NU are analyzed in the framework of stochastic approximation theory; algorithm NLU exhibits mixed time-scale behavior and biased perturbations, and its analysis requires a different approach that is developed in this paper.", "We present a model for asynchronous distributed computation and then proceed to analyze the convergence of natural asynchronous distributed versions of a large class of deterministic and stochastic gradient-like algorithms. We show that such algorithms retain the desirable convergence properties of their centralized counterparts, provided that the time between consecutive interprocessor communications and the communication delays are not too large.", "We consider solving multi-objective optimization problems in a distributed manner by a network of cooperating and learning agents. The problem is equivalent to optimizing a global cost that is the sum of individual components. The optimizers of the individual components do not necessarily coincide and the network therefore needs to seek Pareto optimal solutions. We develop a distributed solution that relies on a general class of adaptive diffusion strategies. We show how the diffusion process can be represented as the cascade composition of three operators: two combination operators and a gradient descent operator. 
Using the Banach fixed-point theorem, we establish the existence of a unique fixed point for the composite cascade. We then study how close each agent converges towards this fixed point, and also examine how close the Pareto solution is to the fixed point. We perform a detailed mean-square error analysis and establish that all agents are able to converge to the same Pareto optimal solution within a sufficiently small mean-square-error (MSE) bound even for constant step-sizes. We illustrate one application of the theory to collaborative decision making in finance by a network of agents.", "Random projection algorithm is of interest for constrained optimization when the constraint set is not known in advance or the projection operation on the whole constraint set is computationally prohibitive. This paper presents a distributed random projection algorithm for constrained convex optimization problems that can be used by multiple agents connected over a time-varying network, where each agent has its own objective function and its own constrained set. We prove that the iterates of all agents converge to the same point in the optimal set almost surely. Experiments on distributed support vector machines demonstrate good performance of the algorithm.", "", "", "We consider the problem of distributed estimation, where a set of nodes is required to collectively estimate some parameter of interest from noisy measurements. The problem is useful in several contexts including wireless and sensor networks, where scalability, robustness, and low power consumption are desirable features. Diffusion cooperation schemes have been shown to provide good performance, robustness to node and link failure, and are amenable to distributed implementations. In this work we focus on diffusion-based adaptive solutions of the LMS type. We motivate and propose new versions of the diffusion LMS algorithm that outperform previous solutions. 
We provide performance and convergence analysis of the proposed algorithms, together with simulation results comparing with existing techniques. We also discuss optimization schemes to design the diffusion LMS weights." ] }
1402.1216
2952740937
Mobile authentication is indispensable for preventing unauthorized access to multi-touch mobile devices. Existing mobile authentication techniques are often cumbersome to use and also vulnerable to shoulder-surfing and smudge attacks. This paper focuses on designing, implementing, and evaluating TouchIn, a two-factor authentication system on multi-touch mobile devices. TouchIn works by letting a user draw on the touchscreen with one or multiple fingers to unlock his mobile device, and the user is authenticated based on the geometric properties of his drawn curves as well as his behavioral and physiological characteristics. TouchIn allows the user to draw on arbitrary regions on the touchscreen without looking at it. This nice sightless feature makes TouchIn very easy to use and also robust to shoulder-surfing and smudge attacks. Comprehensive experiments on Android devices confirm the high security and usability of TouchIn.
The someone-you-are authentication paradigm usually depends on physiological or behavioral biometrics. Physiological biometrics relate to a person's physical features such as fingerprints, iris patterns, retina patterns, facial features, palm prints, and hand geometry. These features are difficult to identify accurately on mobile devices and are also susceptible to well-known spoofing mechanisms @cite_23 @cite_27 . In contrast, behavioral biometrics relate to a user's behavioral patterns such as location traces @cite_25 , gaits @cite_17 @cite_18 , keystroke patterns @cite_12 , and touch dynamics @cite_0 @cite_28 . These techniques are best suited as secondary authentication mechanisms supplementing the primary password-based authentication mechanism, as they may be vulnerable to attackers familiar with the victim's behavioral patterns @cite_25 @cite_0 .
{ "cite_N": [ "@cite_18", "@cite_28", "@cite_0", "@cite_27", "@cite_23", "@cite_25", "@cite_12", "@cite_17" ], "mid": [ "2169908475", "2404603298", "1976081290", "2154346544", "2158352393", "2699946681", "2164675859", "2151373013" ], "abstract": [ "Research in biometric gait recognition has increased. Earlier gait recognition works reported promising results, usually with a small sample size. Recent studies with a larger sample size confirm gait potential as a biometric from which individuals can be identified. Despite much research being carried out in gait recognition, the topic of vulnerability of gait to attacks has not received enough attention. In this paper, an analysis of minimal-effort impersonation attack and the closest person attack on gait biometrics are presented. Unlike most previous gait recognition approaches, where gait is captured using a (video) camera from a distance, in our approach, gait is collected by an accelerometer sensor attached to the hip of subjects. Hip acceleration in three orthogonal directions (up-down, forward-backward, and sideways) is utilized for recognition. We have collected 760 gait sequences from 100 subjects. The experiments consisted of two parts. In the first part, subjects walked in their normal walking style, and using the averaged cycle method, an EER of about 13 was obtained. In the second part, subjects were trying to walk as someone else. Analysis based on FAR errors indicates that a minimal-effort impersonation attack on gait biometric does not necessarily improve the chances of an impostor being accepted. However, attackers with knowledge of their closest person in the database can be a serious threat to the authentication system.", "The widespread usage of smartphones gives rise to new security and privacy concerns. Smartphones are becoming a personal entrance to networks, and may store private information. Due to its small size, a smartphone could be easily taken away and used by an attacker. 
Using a victim’s smartphone, the attacker can launch an impersonation attack, which threatens the security of current networks, especially online social networks. Therefore, it is necessary to design a mechanism for smartphones to re-authenticate the current user’s identity and alert the owner when necessary. Such a mechanism can help to inhibit smartphone theft and safeguard the information stored in smartphones. In this paper, we propose a novel biometric-based system to achieve continuous and unobservable re-authentication for smartphones. The system uses a classifier to learn the owner’s finger movement patterns and checks the current user’s finger movement patterns against the owner’s. The system continuously re-authenticates the current user without interrupting user-smartphone interactions. Experiments show that our system is efficient on smartphones and achieves high accuracy.", "Password patterns, as used on current Android phones, and other shape-based authentication schemes are highly usable and memorable. In terms of security, they are rather weak since the shapes are easy to steal and reproduce. In this work, we introduce an implicit authentication approach that enhances password patterns with an additional security layer, transparent to the user. In short, users are not only authenticated by the shape they input but also by the way they perform the input. We conducted two consecutive studies, a lab and a long-term study, using Android applications to collect and log data from user input on a touch screen of standard commercial smartphones. Analyses using dynamic time warping (DTW) provided first proof that it is actually possible to distinguish different users and use this information to increase security of the input while keeping the convenience for the user high.", "As we are surrounded by an ever-larger variety of post-PC devices, the traditional methods for identifying and authenticating users have become cumbersome and time-consuming. 
In this paper, we present a capacitive communication method through which a device can recognize who is interacting with it. This method exploits the capacitive touchscreens, which are now used in laptops, phones, and tablets, as a signal receiver. The signal that identifies the user can be generated by a small transmitter embedded into a ring, watch, or other artifact carried on the human body. We explore two example system designs with a low-power continuous transmitter that communicates through the skin and a signet ring that needs to be touched to the screen. Experiments with our prototype transmitter and tablet receiver show that capacitive communication through a touchscreen is possible, even without hardware or firmware modifications on a receiver. This latter approach imposes severe limits on the data rate, but the rate is sufficient for differentiating users in multiplayer tablet games or parental control applications. Controlled experiments with a signal generator also indicate that future designs may be able to achieve datarates that are useful for providing less obtrusive authentication with similar assurance as PIN codes or swipe patterns commonly used on smartphones today.", "This paper presents an overview of the weakness of biometric security systems and possible solutions to improve it. Different levels of attack are described, and the strengths and weaknesses of the main biometric system are emphasized. Solutions are provided with special emphasis on cryptography and watermarking techniques.", "", "In this paper we discuss the feasibility of employing keystroke dynamics to perform user verification on mobile phones. Specifically, after having introduced a new statistical classifier, we analyze the discriminative capabilities of the features extracted from the acquired patterns, in order to determine which ones guarantee the best authentication performances. 
The effectiveness of using template selection techniques for keystroke verification is also investigated. The obtained experimental results indicate that the proposed method can be effectively employed to authenticate mobile phones users, even in operational contexts where the number of enrollment acquisition is kept low.", "Identifying users of portable devices from gait signals acquired with three-dimensional accelerometers was studied. Three approaches, correlation, frequency domain and data distribution statistics, were used. Test subjects (N=36) walked with fast, normal and slow walking speeds in enrolment and test sessions on separate days wearing the accelerometer device on their belt, at back. It was shown to be possible to identify users with this novel gait recognition method. Best equal error rate (EER=7 ) was achieved with the signal correlation method, while the frequency domain method and two variations of the data distribution statistics method produced EER of 10 , 18 and 19 , respectively." ] }
1402.1298
1959879694
We analyze the matrix factorization problem. Given a noisy measurement of a product of two matrices, the problem is to estimate back the original matrices. It arises in many applications, such as dictionary learning, blind matrix calibration, sparse principal component analysis, blind source separation, low rank matrix completion, robust principal component analysis, or factor analysis. It is also important in machine learning: unsupervised representation learning can often be studied through matrix factorization. We use the tools of statistical mechanics—the cavity and replica methods—to analyze the achievability and computational tractability of the inference problems in the setting of Bayes-optimal inference, which amounts to assuming that the two matrices have random-independent elements generated from some known distribution, and this information is available to the inference algorithm. In this setting, we compute the minimal mean-squared-error achievable, in principle, in any computational time, and the error that can be achieved by an efficient approximate message passing algorithm. The computation is based on the asymptotic state-evolution analysis of the algorithm. The performance that our analysis predicts, both in terms of the achieved mean-squared-error and in terms of sample complexity, is extremely promising and motivating for a further development of the algorithm.
The dictionary learning problem was identified in the work of @cite_56 @cite_41 in the context of image representation in the visual cortex, and the problem has been studied extensively since @cite_7 . Learning of overcomplete dictionaries for sparse representations of data has many applications, see e.g. @cite_50 . One of the principal algorithms that is used is based on K-SVD @cite_23 . Several authors studied the identifiability of the dictionary under various (in general weaker than ours) assumptions, e.g. @cite_36 @cite_10 @cite_48 @cite_30 @cite_17 @cite_4 @cite_62 . An interesting view on the place of sparse and redundant representations in today's signal processing is given in @cite_35 .
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_62", "@cite_4", "@cite_7", "@cite_41", "@cite_36", "@cite_48", "@cite_56", "@cite_50", "@cite_23", "@cite_10", "@cite_17" ], "mid": [ "", "1991181922", "2024254345", "2407000478", "2142940228", "2105464873", "2155981690", "18355306", "2145889472", "2140499889", "2160547390", "2963641291", "2119385818" ], "abstract": [ "", "Signal processing relies heavily on data models; these are mathematical constructions imposed on the data source that force a dimensionality reduction of some sort. The vast activity in signal processing during the past decades is essentially driven by an evolution of these models and their use in practice. In that respect, the past decade has been certainly the era of sparse and redundant representations, a popular and highly effective data model. This very appealing model led to a long series of intriguing theoretical and numerical questions, and to many innovative ideas that harness this model to real engineering problems. The new entries recently added to the IEEE-SPL EDICS reflect the popularity of this model and its impact on signal processing research and practice. Despite the huge success of this model so far, this field is still at its infancy, with many unanswered questions still remaining. This paper1 offers a brief presentation of the story of sparse and redundant representation modeling and its impact, and outlines ten key future research directions in this field.", "Many modern tools in machine learning and signal processing, such as sparse dictionary learning, principal component analysis, non-negative matrix factorization, @math -means clustering, and so on, rely on the factorization of a matrix obtained by concatenating high-dimensional vectors from a training collection. 
While the idealized task would be to optimize the expected quality of the factors over the underlying distribution of training vectors, it is achieved in practice by minimizing an empirical average over the considered collection. The focus of this paper is to provide sample complexity estimates to uniformly control how much the empirical average deviates from the expected cost function. Standard arguments imply that the performance of the empirical predictor also exhibit such guarantees. The level of genericity of the approach encompasses several possible constraints on the factors (tensor product structure, shift-invariance, sparsity…), thus providing a unified perspective on the sample complexity of several widely used matrix factorization schemes. The derived generalization bounds behave proportional to @math with respect to the number of samples @math for the considered matrix factorization techniques.", "", "Algorithms for data-driven learning of domain-specific overcomplete dictionaries are developed to obtain maximum likelihood and maximum a posteriori dictionary estimates based on the use of Bayesian models with concave Schur-concave (CSC) negative log priors. Such priors are appropriate for obtaining sparse representations of environmental signals within an appropriately chosen (environmentally matched) dictionary. The elements of the dictionary can be interpreted as concepts, features, or words capable of succinct expression of events encountered in the environment (the source of the measured signals). This is a generalization of vector quantization in that one is interested in a description involving a few dictionary entries (the proverbial \"25 words or less\"), but not necessarily as succinct as one entry. 
To learn an environmentally adapted dictionary capable of concise expression of signals generated by the environment, we develop algorithms that iterate between a representative set of sparse representations found by variants of FOCUSS and an update of the dictionary using these sparse representations.Experiments were performed using synthetic data and natural images. For complete dictionaries, we demonstrate that our algorithms have improved performance over other independent component analysis (ICA) methods, measured in terms of signal-to-noise ratios of separated sources. In the overcomplete case, we show that the true underlying dictionary and sparse sources can be accurately recovered. In tests with natural images, learned overcomplete dictionaries are shown to have higher coding efficiency than complete dictionaries; that is, images encoded with an overcomplete dictionary have both higher compression (fewer bits per pixel) and higher accuracy (lower mean square error).", "The spatial receptive fields of simple cells in mammalian striate cortex have been reasonably well described physiologically and can be characterized as being localized, oriented, and ban@ass, comparable with the basis functions of wavelet transforms. Previously, we have shown that these receptive field properties may be accounted for in terms of a strategy for producing a sparse distribution of output activity in response to natural images. Here, in addition to describing this work in a more expansive fashion, we examine the neurobiological implications of sparse coding. Of particular interest is the case when the code is overcomplete--i.e., when the number of code elements is greater than the effective dimensionality of the input space. 
Because the basis functions are non-orthogonal and not linearly independent of each other, sparsifying the code will recruit only those basis functions necessary for representing a given input, and so the input-output function will deviate from being purely linear. These deviations from linearity provide a potential explanation for the weak forms of non-linearity observed in the response properties of cortical simple cells, and they further make predictions about the expected interactions among units in response to naturalistic stimuli. © 1997 Elsevier Science Ltd", "Abstract A full-rank under-determined linear system of equations Ax = b has in general infinitely many possible solutions. In recent years there is a growing interest in the sparsest solution of this equation—the one with the fewest non-zero entries, measured by ∥ x ∥ 0 . Such solutions find applications in signal and image processing, where the topic is typically referred to as “sparse representation”. Considering the columns of A as atoms of a dictionary, it is assumed that a given signal b is a linear composition of few such atoms. Recent work established that if the desired solution x is sparse enough, uniqueness of such a result is guaranteed. Also, pursuit algorithms, approximation solvers for the above problem, are guaranteed to succeed in finding this solution. Armed with these recent results, the problem can be reversed, and formed as an implied matrix factorization problem: Given a set of vectors b i , known to emerge from such sparse constructions, Ax i = b i , with sufficiently sparse representations x i , we seek the matrix A . In this paper we present both theoretical and algorithmic studies of this problem. We establish the uniqueness of the dictionary A , depending on the quantity and nature of the set b i , and the sparsity of x i . We also describe a recently developed algorithm, the K-SVD, that practically find the matrix A , in a manner similar to the K-Means algorithm. 
Finally, we demonstrate this algorithm on several stylized applications in image processing.", "A popular approach within the signal processing and machine learning communities consists in modelling signals as sparse linear combinations of atoms selected from a learned dictionary. While this paradigm has led to numerous empirical successes in various fields ranging from image to audio processing, there have only been a few theoretical arguments supporting these evidences. In particular, sparse coding, or sparse dictionary learning, relies on a non-convex procedure whose local minima have not been fully analyzed yet. In this paper, we consider a probabilistic model of sparse signals, and show that, with high probability, sparse coding admits a local minimum around the reference dictionary generating the signals. Our study takes into account the case of over-complete dictionaries and noisy signals, thus extending previous work limited to noiseless settings and or under-complete dictionaries. The analysis we conduct is non-asymptotic and makes it possible to understand how the key quantities of the problem, such as the coherence or the level of noise, can scale with respect to the dimension of the signals, the number of atoms, the sparsity and the number of observations.", "THE receptive fields of simple cells in mammalian primary visual cortex can be characterized as being spatially localized, oriented1–4 and bandpass (selective to structure at different spatial scales), comparable to the basis functions of wavelet transforms5,6. One approach to understanding such response properties of visual neurons has been to consider their relationship to the statistical structure of natural images in terms of efficient coding7–12. 
Along these lines, a number of studies have attempted to train unsupervised learning algorithms on natural images in the hope of developing receptive fields with similar properties13–18, but none has succeeded in producing a full set that spans the image space and contains all three of the above properties. Here we investigate the proposal8,12 that a coding strategy that maximizes sparseness is sufficient to account for these properties. We show that a learning algorithm that attempts to find sparse linear codes for natural scenes will develop a complete family of localized, oriented, bandpass receptive fields, similar to those found in the primary visual cortex. The resulting sparse image code provides a more efficient representation for later stages of processing because it possesses a higher degree of statistical independence among its outputs.", "In an overcomplete basis, the number of basis vectors is greater than the dimensionality of the input, and the representation of an input is not a unique combination of basis vectors. Overcomplete representations have been advocated because they have greater robustness in the presence of noise, can be sparser, and can have greater flexibility in matching structure in the data. Overcomplete codes have also been proposed as a model of some of the response properties of neurons in primary visual cortex. Previous work has focused on finding the best representation of a signal using a fixed overcomplete basis (or dictionary). We present an algorithm for learning an overcomplete basis by viewing it as probabilistic model of the observed data. We show that overcomplete bases can yield a better approximation of the underlying statistical distribution of the data and can thus lead to greater coding efficiency. 
This can be viewed as a generalization of the technique of independent component analysis and provides a method for Bayesian reconstruction of signals in the presence of noise and for blind source separation when there are more sources than mixtures.", "In recent years there has been a growing interest in the study of sparse representation of signals. Using an overcomplete dictionary that contains prototype signal-atoms, signals are described by sparse linear combinations of these atoms. Applications that use sparse representation are many and include compression, regularization in inverse problems, feature extraction, and more. Recent activity in this field has concentrated mainly on the study of pursuit algorithms that decompose signals with respect to a given dictionary. Designing dictionaries to better fit the above model can be done by either selecting one from a prespecified set of linear transforms or adapting the dictionary to a set of training signals. Both of these techniques have been considered, but this topic is largely still open. In this paper we propose a novel algorithm for adapting dictionaries in order to achieve sparse signal representations. Given a set of training signals, we seek the dictionary that leads to the best representation for each member in this set, under strict sparsity constraints. We present a new method-the K-SVD algorithm-generalizing the K-means clustering process. K-SVD is an iterative method that alternates between sparse coding of the examples based on the current dictionary and a process of updating the dictionary atoms to better fit the data. The update of the dictionary columns is combined with an update of the sparse representations, thereby accelerating convergence. The K-SVD algorithm is flexible and can work with any pursuit method (e.g., basis pursuit, FOCUSS, or matching pursuit). 
We analyze this algorithm and demonstrate its results both on synthetic tests and in applications on real image data", "A large set of signals can sometimes be described sparsely using a dictionary, that is, every element can be represented as a linear combination of few elements from the dictionary. Algorithms for various signal processing applications, including classification, denoising and signal separation, learn a dictionary from a given set of signals to be represented. Can we expect that the error in representing by such a dictionary a previously unseen signal from the same source will be of similar magnitude as those for the given examples? We assume signals are generated from a fixed distribution, and study these questions from a statistical learning theory perspective. We develop generalization bounds on the quality of the learned dictionary for two types of constraints on the coefficient selection, as measured by the expected L2 error in representation when the dictionary is used. For the case of l1 regularized coefficient selection we provide a generalization bound of the order of O(√np ln(mλ) m), where n is the dimension, p is the number of elements in the dictionary, λ is a bound on the l1 norm of the coefficient vector and m is the number of samples, which complements existing results. For the case of representing a new signal as a combination of at most k dictionary elements, we provide a bound of the order O(√np ln(mk) m) under an assumption on the closeness to orthogonality of the dictionary (low Babel function). We further show that this assumption holds for most dictionaries in high dimensions in a strong probabilistic sense. Our results also include bounds that converge as 1 m, not previously known for this problem. 
We provide similar results in a general setting using kernels with weak smoothness requirements.", "In sparse recovery we are given a matrix @math (the dictionary) and a vector of the form @math where @math is sparse, and the goal is to recover @math . This is a central notion in signal processing, statistics and machine learning. But in applications such as sparse coding, edge detection, compression and super resolution, the dictionary @math is unknown and has to be learned from random examples of the form @math where @math is drawn from an appropriate distribution --- this is the dictionary learning problem. In most settings, @math is overcomplete: it has more columns than rows. This paper presents a polynomial-time algorithm for learning overcomplete dictionaries; the only previously known algorithm with provable guarantees is the recent work of Spielman, Wang and Wright who gave an algorithm for the full-rank case, which is rarely the case in applications. Our algorithm applies to incoherent dictionaries which have been a central object of study since they were introduced in seminal work of Donoho and Huo. In particular, a dictionary is @math -incoherent if each pair of columns has inner product at most @math . The algorithm makes natural stochastic assumptions about the unknown sparse vector @math , which can contain @math non-zero entries (for any @math ). This is close to the best @math allowable by the best sparse recovery algorithms even if one knows the dictionary @math exactly. Moreover, both the running time and sample complexity depend on @math , where @math is the target accuracy, and so our algorithms converge very quickly to the true dictionary. Our algorithm can also tolerate substantial amounts of noise provided it is incoherent with respect to the dictionary (e.g., Gaussian). In the noisy setting, our running time and sample complexity depend polynomially on @math , and this is necessary." ] }
1402.1298
1959879694
We analyze the matrix factorization problem. Given a noisy measurement of a product of two matrices, the problem is to estimate back the original matrices. It arises in many applications, such as dictionary learning, blind matrix calibration, sparse principal component analysis, blind source separation, low rank matrix completion, robust principal component analysis, or factor analysis. It is also important in machine learning: unsupervised representation learning can often be studied through matrix factorization. We use the tools of statistical mechanics—the cavity and replica methods—to analyze the achievability and computational tractability of the inference problems in the setting of Bayes-optimal inference, which amounts to assuming that the two matrices have random-independent elements generated from some known distribution, and this information is available to the inference algorithm. In this setting, we compute the minimal mean-squared-error achievable, in principle, in any computational time, and the error that can be achieved by an efficient approximate message passing algorithm. The computation is based on the asymptotic state-evolution analysis of the algorithm. The performance that our analysis predicts, both in terms of the achieved mean-squared-error and in terms of sample complexity, is extremely promising and motivating for a further development of the algorithm.
The closely related problems of sparse principal component analysis and blind source separation are also explored in a number of works; see e.g. @cite_9 @cite_28 @cite_40 @cite_46 @cite_57 . A short survey on the topic with relevant references can be found in @cite_21 .
{ "cite_N": [ "@cite_28", "@cite_9", "@cite_21", "@cite_57", "@cite_40", "@cite_46" ], "mid": [ "2161219071", "2166159048", "1563800729", "2615253071", "2038237443", "2151186643" ], "abstract": [ "The blind source separation problem is to extract the underlying source signals from a set of linear mixtures, where the mixing matrix is unknown. This situation is common in acoustics, radio, medical signal and image processing, hyperspectral imaging, and other areas. We suggest a two-stage separation process: a priori selection of a possibly overcomplete signal dictionary (for instance, a wavelet frame or a learned dictionary) in which the sources are assumed to be sparsely representable, followed by unmixing the sources by exploiting their sparse representability. We consider the general case of more sources than mixtures, but also derive a more efficient algorithm in the case of a nonovercomplete dictionary and equal numbers of sources and mixtures. Experiments with artificial signals and musical sounds demonstrate significantly better separation than other known techniques.", "Empirical results were obtained for the blind source separation of more sources than mixtures using a previously proposed framework for learning overcomplete representations. This technique assumes a linear mixing model with additive noise and involves two steps: (1) learning an overcomplete representation for the observed data and (2) inferring sources given a sparse prior on the coefficients. We demonstrate that three speech signals can be separated with good fidelity given only two mixtures of the three signals. Similar results were obtained with mixtures of two speech signals and one music signal.", "In this survey, we highlight the appealing features and challenges of Sparse Component Analysis (SCA) for blind source separation (BSS). SCA is a simple yet powerful framework to separate several sources from few sensors, even when the independence assumption is dropped. 
So far, SCA has been most successfully applied when the sources can be represented sparsely in a given basis, but many other potential uses of SCA remain unexplored. Among other challenging perspectives, we discuss how SCA could be used to exploit both the spatial diversity corresponding to the mixing process and the morphological diversity between sources to unmix even underdetermined convolutive mixtures. This raises several challenges, including the design of both provably good and numerically efficient algorithms for large-scale sparse approximation with overcomplete signal dictionaries.", "Given a covariance matrix, we consider the problem of maximizing the variance explained by a particular linear combination of the input variables while constraining the number of nonzero coefficients in this combination. This problem arises in the decomposition of a covariance matrix into sparse factors or sparse principal component analysis (PCA), and has wide applications ranging from biology to finance. We use a modification of the classical variational representation of the largest eigenvalue of a symmetric matrix, where cardinality is constrained, and derive a semidefinite programming-based relaxation for our problem. We also discuss Nesterov's smooth minimization technique applied to the semidefinite program arising in the semidefinite relaxation of the sparse PCA problem. The method has complexity @math , where @math is the size of the underlying covariance matrix and @math is the desired absolute accuracy on the optimal value of the problem.", "The scope of this work is the separation of N sources from M linear mixtures when the underlying system is underdetermined, that is, when M < N. 
If the input distribution is sparse the mixing matrix can be estimated either by external optimization or by clustering and, given the mixing matrix, a minimal l1 norm representation of the sources can be obtained by solving a low-dimensional linear programming problem for each of the data points. Yet, when the signals per se do not satisfy this assumption, sparsity can still be achieved by realizing the separation in a sparser transformed domain. The approach is illustrated here for M = 2. In this case we estimate both the number of sources and the mixing matrix by the maxima of a potential function along the circle of unit length, and we obtain the minimal l1 norm representation of each data point by a linear combination of the pair of basis vectors that enclose it. Several experiments with music and speech signals show that their time-domain representation is not sparse enough. Yet, excellent results were obtained using their short-time Fourier transform, including the separation of up to six sources from two mixtures.", "In this letter, we solve the problem of identifying matrices S ∈ ℝ^{n×N} and A ∈ ℝ^{m×n} knowing only their multiplication X = AS, under some conditions, expressed either in terms of A and sparsity of S (identifiability conditions), or in terms of X (sparse component analysis (SCA) conditions). We present algorithms for such identification and illustrate them by examples." ] }
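The two-stage geometric approach described in these abstracts for M = 2 mixtures (estimate the mixing directions by clustering the angles of the data points, then attribute each sample to its dominant direction) can be sketched as follows. This is an illustrative toy implementation, not code from any of the cited papers; the sparsity level, the number of sources, the histogram resolution, and the peak-separation threshold are all arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three sparse sources: each sample is active in (mostly) at most one source.
n_src, T = 3, 3000
S = rng.laplace(size=(n_src, T)) * (rng.random((n_src, T)) < 0.1)

# Unknown 2x3 mixing matrix with distinct column directions.
angles_true = np.array([0.3, 1.2, 2.2])
A = np.vstack([np.cos(angles_true), np.sin(angles_true)])
X = A @ S  # observed two-channel mixtures

# Step 1: estimate the column directions from an angular histogram
# (samples with a single active source line up exactly with a column of A).
mask = np.linalg.norm(X, axis=0) > 1e-6
theta = np.mod(np.arctan2(X[1, mask], X[0, mask]), np.pi)  # directions mod pi
hist, edges = np.histogram(theta, bins=180, range=(0.0, np.pi))
order = np.argsort(hist)[::-1]         # bins sorted by count, strongest first
peaks = []
for b in order:                        # keep the 3 strongest, well-separated peaks
    c = 0.5 * (edges[b] + edges[b + 1])
    if all(min(abs(c - p), np.pi - abs(c - p)) > 0.2 for p in peaks):
        peaks.append(c)
    if len(peaks) == n_src:
        break
A_hat = np.vstack([np.cos(peaks), np.sin(peaks)])

# Step 2: recover each sample by projecting onto its closest estimated direction.
proj = A_hat.T @ X                     # correlation with each direction
k = np.argmax(np.abs(proj), axis=0)    # dominant direction per sample
S_hat = np.zeros((n_src, T))
S_hat[k, np.arange(T)] = proj[k, np.arange(T)]
```

With noise-free data the single-active samples fall exactly on the true column directions, so the histogram peaks recover the mixing angles to within the bin width.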
1402.1298
1959879694
We analyze the matrix factorization problem. Given a noisy measurement of a product of two matrices, the problem is to estimate back the original matrices. It arises in many applications, such as dictionary learning, blind matrix calibration, sparse principal component analysis, blind source separation, low rank matrix completion, robust principal component analysis, or factor analysis. It is also important in machine learning: unsupervised representation learning can often be studied through matrix factorization. We use the tools of statistical mechanics—the cavity and replica methods—to analyze the achievability and computational tractability of the inference problems in the setting of Bayes-optimal inference, which amounts to assuming that the two matrices have random-independent elements generated from some known distribution, and this information is available to the inference algorithm. In this setting, we compute the minimal mean-squared-error achievable, in principle, in any computational time, and the error that can be achieved by an efficient approximate message passing algorithm. The computation is based on the asymptotic state-evolution analysis of the algorithm. The performance that our analysis predicts, both in terms of the achieved mean-squared-error and in terms of sample complexity, is extremely promising and motivating for a further development of the algorithm.
Matrix completion is another problem that belongs to the class treated in this paper. Again, many important works were devoted to this problem, giving theoretical guarantees, algorithms, and applications; see e.g. @cite_54 @cite_27 @cite_39 @cite_53 .
{ "cite_N": [ "@cite_27", "@cite_54", "@cite_53", "@cite_39" ], "mid": [ "2134332047", "2611328865", "2144730813", "2047071281" ], "abstract": [ "This paper is concerned with the problem of recovering an unknown matrix from a small fraction of its entries. This is known as the matrix completion problem, and comes up in a great number of applications, including the famous Netflix Prize and other similar questions in collaborative filtering. In general, accurate recovery of a matrix from a small number of entries is impossible, but the knowledge that the unknown matrix has low rank radically changes this premise, making the search for solutions meaningful. This paper presents optimality results quantifying the minimum number of entries needed to recover a matrix of rank r exactly by any method whatsoever (the information theoretic limit). More importantly, the paper shows that, under certain incoherence assumptions on the singular vectors of the matrix, recovery is possible by solving a convenient convex program as soon as the number of entries is on the order of the information theoretic limit (up to logarithmic factors). This convex program simply finds, among all matrices consistent with the observed entries, that with minimum nuclear norm. As an example, we show that on the order of nr log(n) samples are needed to recover a random n x n matrix of rank r by any method, and to be sure, nuclear norm minimization succeeds as soon as the number of entries is of the form nr polylog(n).", "We consider a problem of considerable practical interest: the recovery of a data matrix from a sampling of its entries. Suppose that we observe m entries selected uniformly at random from a matrix M. Can we complete the matrix and recover the entries that we have not seen? We show that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries. 
We prove that if the number m of sampled entries obeys @math for some positive numerical constant C, then with very high probability, most n×n matrices of rank r can be perfectly recovered by solving a simple convex optimization program. This program finds the matrix with minimum nuclear norm that fits the data. The condition above assumes that the rank is not too large. However, if one replaces the 1.2 exponent with 1.25, then the result holds for all values of the rank. Similar results hold for arbitrary rectangular matrices as well. Our results are connected with the recent literature on compressed sensing, and show that objects other than signals and images can be perfectly reconstructed from very limited information.", "Let M be an n? × n matrix of rank r, and assume that a uniformly random subset E of its entries is observed. We describe an efficient algorithm, which we call OptSpace, that reconstructs M from |E| = O(rn) observed entries with relative root mean square error 1 2 RMSE ? C(?) (nr |E|)1 2 with probability larger than 1 - 1 n3. Further, if r = O(1) and M is sufficiently unstructured, then OptSpace reconstructs it exactly from |E| = O(n log n) entries with probability larger than 1 - 1 n3. This settles (in the case of bounded rank) a question left open by Candes and Recht and improves over the guarantees for their reconstruction algorithm. The complexity of our algorithm is O(|E|r log n), which opens the way to its use for massive data sets. In the process of proving these statements, we obtain a generalization of a celebrated result by Friedman-Kahn-Szemeredi and Feige-Ofek on the spectrum of sparse random matrices.", "On the heels of compressed sensing, a new field has very recently emerged. This field addresses a broad range of problems of significant practical interest, namely, the recovery of a data matrix from what appears to be incomplete, and perhaps even corrupted, information. 
In its simplest form, the problem is to recover a matrix from a small sample of its entries. It comes up in many areas of science and engineering, including collaborative filtering, machine learning, control, remote sensing, and computer vision, to name a few. This paper surveys the novel literature on matrix completion, which shows that under some suitable conditions, one can recover an unknown low-rank matrix from a nearly minimal set of entries by solving a simple convex optimization problem, namely, nuclear-norm minimization subject to data constraints. Further, this paper introduces novel results showing that matrix completion is provably accurate even when the few observed entries are corrupted with a small amount of noise. A typical result is that one can recover an unknown matrix of low rank from just about log noisy samples with an error that is proportional to the noise level. We present numerical results that complement our quantitative analysis and show that, in practice, nuclear-norm minimization accurately fills in the many missing entries of large low-rank matrices from just a few noisy samples. Some analogies between matrix completion and compressed sensing are discussed throughout." ] }
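The nuclear-norm-minimization recovery discussed in these abstracts can be approximated with a simple Soft-Impute-style iteration: repeatedly fill the missing entries with the current estimate, then soft-threshold the singular values (the proximal step for the nuclear norm). A minimal sketch, not taken from the cited works; the matrix size, rank, sampling rate, threshold `lam`, and iteration count are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground truth: a 50x50 matrix of rank 2.
n, r = 50, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))

# Observe 40% of the entries uniformly at random.
mask = rng.random((n, n)) < 0.4
X_obs = np.where(mask, M, 0.0)

def soft_impute(X_obs, mask, lam=1.0, iters=400):
    """Iteratively impute missing entries, then shrink singular values."""
    Z = np.zeros_like(X_obs)
    for _ in range(iters):
        filled = np.where(mask, X_obs, Z)          # observed entries stay fixed
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        Z = (U * np.maximum(s - lam, 0.0)) @ Vt    # singular-value soft threshold
    return Z

M_hat = soft_impute(X_obs, mask)
rel_err = np.linalg.norm(M_hat - M) / np.linalg.norm(M)
```

Since the rank is far below the information-theoretic limit for 40% sampling, the iteration recovers the unobserved entries to small relative error.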
1402.1298
1959879694
We analyze the matrix factorization problem. Given a noisy measurement of a product of two matrices, the problem is to estimate back the original matrices. It arises in many applications, such as dictionary learning, blind matrix calibration, sparse principal component analysis, blind source separation, low rank matrix completion, robust principal component analysis, or factor analysis. It is also important in machine learning: unsupervised representation learning can often be studied through matrix factorization. We use the tools of statistical mechanics—the cavity and replica methods—to analyze the achievability and computational tractability of the inference problems in the setting of Bayes-optimal inference, which amounts to assuming that the two matrices have random-independent elements generated from some known distribution, and this information is available to the inference algorithm. In this setting, we compute the minimal mean-squared-error achievable, in principle, in any computational time, and the error that can be achieved by an efficient approximate message passing algorithm. The computation is based on the asymptotic state-evolution analysis of the algorithm. The performance that our analysis predicts, both in terms of the achieved mean-squared-error and in terms of sample complexity, is extremely promising and motivating for a further development of the algorithm.
Another related problem is robust principal component analysis, which has also been studied by many authors; algorithms and theoretical limits were analyzed in @cite_5 @cite_55 @cite_33 @cite_42 .
{ "cite_N": [ "@cite_5", "@cite_55", "@cite_42", "@cite_33" ], "mid": [ "2557168633", "2889106020", "2145962650", "2131628350" ], "abstract": [ "We consider the following fundamental problem: given a matrix that is the sum of an unknown sparse matrix and an unknown low-rank matrix, is it possible to exactly recover the two components? Such a capability enables a considerable number of applications, but the goal is both ill-posed and NP-hard in general. In this paper we develop (a) a new uncertainty principle for matrices, and (b) a simple method for exact decomposition based on convex optimization. Our uncertainty principle is a quantification of the notion that a matrix cannot be sparse while having diffuse row column spaces. It characterizes when the decomposition problem is ill-posed, and forms the basis for our decomposition method and its analysis. We provide deterministic conditions — on the sparse and low-rank components — under which our method guarantees exact recovery.", "", "This article is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component. Can we recover each component individuallyq We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the e1 norm. This suggests the possibility of a principled approach to robust principal component analysis since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted. This extends to the situation where a fraction of the entries are missing as well. 
We discuss an algorithm for solving this optimization problem, and present applications in the area of video surveillance, where our methodology allows for the detection of objects in a cluttered background, and in the area of face recognition, where it offers a principled way of removing shadows and specularities in images of faces.", "Principal component analysis is a fundamental operation in computational data analysis, with myriad applications ranging from web search to bioinformatics to computer vision and image analysis. However, its performance and applicability in real scenarios are limited by a lack of robustness to outlying or corrupted observations. This paper considers the idealized \"robust principal component analysis\" problem of recovering a low rank matrix A from corrupted observations D = A + E. Here, the corrupted entries E are unknown and the errors can be arbitrarily large (modeling grossly corrupted observations common in visual and bioinformatic data), but are assumed to be sparse. We prove that most matrices A can be efficiently and exactly recovered from most error sign-and-support patterns by solving a simple convex program, for which we give a fast and provably convergent algorithm. Our result holds even when the rank of A grows nearly proportionally (up to a logarithmic factor) to the dimensionality of the observation space and the number of errors E grows in proportion to the total number of entries in the matrix. A by-product of our analysis is the first proportional growth results for the related problem of completing a low-rank matrix from a small fraction of its entries. Simulations and real-data examples corroborate the theoretical results, and suggest potential applications in computer vision." ] }
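The Principal Component Pursuit program mentioned in these abstracts (nuclear norm plus weighted ℓ1 norm, subject to L + S = X) is commonly solved by an augmented-Lagrangian iteration that alternates singular-value thresholding for the low-rank part with entrywise soft thresholding for the sparse part. A hedged sketch using the standard λ = 1/√max(n1, n2) weight and a common step-size heuristic; the data sizes and corruption model are illustrative, not from the source.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40
L_true = rng.standard_normal((n, 1)) @ rng.standard_normal((1, n))  # rank 1
S_true = np.zeros((n, n))
idx = rng.random((n, n)) < 0.05                      # 5% gross corruptions
S_true[idx] = rng.uniform(-10.0, 10.0, size=idx.sum())
X = L_true + S_true

def shrink(A, t):
    """Entrywise soft threshold: prox of the l1 norm."""
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

def svt(A, t):
    """Singular-value threshold: prox of the nuclear norm."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U * np.maximum(s - t, 0.0)) @ Vt

def rpca(X, iters=300):
    lam = 1.0 / np.sqrt(max(X.shape))                # standard PCP weight
    mu = X.size / (4.0 * np.abs(X).sum())            # common step-size heuristic
    L = np.zeros_like(X); S = np.zeros_like(X); Y = np.zeros_like(X)
    for _ in range(iters):
        L = svt(X - S + Y / mu, 1.0 / mu)            # low-rank update
        S = shrink(X - L + Y / mu, lam / mu)         # sparse update
        Y = Y + mu * (X - L - S)                     # dual (multiplier) update
    return L, S

L_hat, S_hat = rpca(X)
rel_err = np.linalg.norm(L_hat - L_true) / np.linalg.norm(L_true)
```

For a random rank-1 matrix with a small fraction of arbitrarily large corruptions, this regime satisfies the usual incoherence conditions and the decomposition is recovered to high accuracy.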
1402.1557
2023687776
This paper provides a unified framework to study the performance of successive interference cancellation (SIC) in wireless networks with arbitrary fading distribution and power-law path loss. An analytical characterization of the performance of SIC is given as a function of different system parameters. The results suggest that the marginal benefit of enabling the receiver to successively decode k users diminishes very fast with k, especially in networks of high dimensions and small path loss exponent. On the other hand, SIC is highly beneficial when the users are clustered around the receiver and or very low-rate codes are used. Also, with multiple packet reception, a lower per-user information rate always results in higher aggregate throughput in interference-limited networks. In contrast, there exists a positive optimal per-user rate that maximizes the aggregate throughput in noisy networks. The analytical results serve as useful tools to understand the potential gain of SIC in heterogeneous cellular networks (HCNs). Using these tools, this paper quantifies the gain of SIC on the coverage probability in HCNs with non-accessible base stations. An interesting observation is that, for contemporary narrow-band systems (e.g., LTE and WiFi), most of the gain of SIC is achieved by canceling a single interferer.
Besides SIC, there are many other techniques that can potentially significantly mitigate the interference in wireless networks, including interference alignment @cite_14 and dirty paper coding @cite_2 . Despite their huge promise in terms of performance gain, these techniques typically rely heavily on accurate channel state information at the transmitters (CSIT) and are thus less likely to impact practical wireless systems in the near future @cite_35 @cite_34 . Also, many recent works study interference cancellation based on MIMO techniques in the context of random wireless networks; see @cite_34 @cite_28 and references therein. These (linear) interference cancellation techniques should not be confused with successive interference cancellation (SIC), although they can be combined with SIC to achieve (even) better performance @cite_8 .
{ "cite_N": [ "@cite_35", "@cite_14", "@cite_8", "@cite_28", "@cite_2", "@cite_34" ], "mid": [ "2066452435", "2168448439", "1997834106", "1993190681", "1976109068", "2139944159" ], "abstract": [ "Interference plays a crucial role for performance degradation in communication networks nowadays. An appealing approach to interference avoidance is the Interference Cancellation (IC) methodology. Particularly, the Successive IC (SIC) method represents the most effective IC-based reception technique in terms of Bit-Error-Rate (BER) performance and, thus, yielding to the overall system robustness. Moreover, SIC in conjunction with Orthogonal Frequency Division Multiplexing (OFDM), in the context of SIC-OFDM, is shown to approach the Shannon capacity when single-antenna infrastructures are applied while this capacity limit can be further extended with the aid of multiple antennas. Recently, SIC-based reception has studied for Orthogonal Frequency and Code Division Multiplexing or (spread-OFDM systems), namely OFCDM. Such systems provide extremely high error resilience and robustness, especially in multi-user environments. In this paper, we present a comprehensive survey on the performance of SIC for single- and multiple-antenna OFDM and spread OFDM (OFCDM) systems. Thereby, we focus on all the possible OFDM formats that have been developed so far. We study the performance of SIC by examining closely two major aspects, namely the BER performance and the computational complexity of the reception process, thus striving for the provision and optimization of SIC. Our main objective is to point out the state-of-the-art on research activity for SIC-OF(C)DM systems, applied on a variety of well-known network implementations, such as cellular, ad hoc and infrastructure-based platforms. Furthermore, we introduce a Performance-Complexity Tradeoff (PCT) in order to indicate the contribution of the approaches studied in this paper. 
Finally, we provide analytical performance comparison tables regarding the surveyed techniques with respect to the PCT level.", "We explore the degrees of freedom of M × N user wireless X networks, i.e., networks of M transmitters and N receivers where every transmitter has an independent message for every receiver. We derive a general outer bound on the degrees of freedom region of these networks. When all nodes have a single antenna and all channel coefficients vary in time or frequency, we show that the total number of degrees of freedom of the X network is equal to MN/(M+N-1) per orthogonal time and frequency dimension. Achievability is proved by constructing interference alignment schemes for X networks that can come arbitrarily close to the outer bound on degrees of freedom. For the case where either M=2 or N=2 we find that the degrees of freedom characterization also provides a capacity approximation that is accurate to within O(1). For these cases the degrees of freedom outer bound is exactly achievable.
Assuming that the transmitter locations are distributed as a Poisson point process, this paper derives upper and lower bounds on the transmission capacity of an ad-hoc network when each node is equipped with multiple antennas. The transmitter either uses eigen multi-mode beamforming or a subset of its antennas without channel information to transmit multiple data streams, while the receiver uses partial zero forcing to cancel certain interferers using some of its spatial receive degrees of freedom (SRDOF). The receiver either cancels the nearest interferers or those interferers that maximize the post-cancellation signal-to-interference ratio. Using the obtained bounds, the optimal number of data streams to transmit, and the optimal SRDOF to use for interference cancellation are derived that provide the best scaling of the transmission capacity with the number of antennas. With beamforming, single data stream transmission together with using all but one SRDOF for interference cancellation is optimal, while without beamforming, single data stream transmission together with using a fraction of the total SRDOF for interference cancellation is optimal.", "A channel with output Y = X + S + Z is examined. The state S ~ N(0, QI) and the noise Z ~ N(0, NI) are multivariate Gaussian random variables (I is the identity matrix). The input X ∈ R^n satisfies the power constraint (1/n) Σ_{i=1}^n X_i^2 ≤ P. If S is unknown to both transmitter and receiver then the capacity is (1/2) ln(1 + P/(N + Q)) nats per channel use. However, if the state S is known to the encoder, the capacity is shown to be C* = (1/2) ln(1 + P/N), independent of Q. This is also the capacity of a standard Gaussian channel with signal-to-noise power ratio P/N. Therefore, the state S does not affect the capacity of the channel, even though S is unknown to the receiver. 
It is shown that the optimal transmitter adapts its signal to the state S rather than attempting to cancel it.", "Interference between nodes is a critical impairment in mobile ad hoc networks. This paper studies the role of multiple antennas in mitigating such interference. Specifically, a network is studied in which receivers apply zero-forcing beamforming to cancel the strongest interferers. Assuming a network with Poisson-distributed transmitters and independent Rayleigh fading channels, the transmission capacity is derived, which gives the maximum number of successful transmissions per unit area. Mathematical tools from stochastic geometry are applied to obtain the asymptotic transmission capacity scaling and characterize the impact of inaccurate channel state information (CSI). It is shown that, if each node cancels interferers, the transmission capacity decreases as as the outage probability vanishes. For fixed , as grows, the transmission capacity increases as where is the path-loss exponent. Moreover, CSI inaccuracy is shown to have no effect on the transmission capacity scaling as vanishes, provided that the CSI training sequence has an appropriate length, which we derive. Numerical results suggest that canceling merely one interferer by each node may increase the transmission capacity by an order of magnitude or more, even when the CSI is imperfect." ] }
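The basic SIC operation analyzed in this record (decode the strongest user, subtract its reconstructed contribution, then decode the next user) can be illustrated with a two-user toy example. The signaling model is an assumption for illustration only: BPSK symbols, perfectly known received amplitudes `h1` and `h2`, and an arbitrary noise level.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 10000

# Two users transmit BPSK symbols; user 1 arrives much stronger than user 2.
b1 = rng.choice([-1.0, 1.0], size=T)
b2 = rng.choice([-1.0, 1.0], size=T)
h1, h2 = 2.0, 0.7                       # received amplitudes (assumed known)
noise = 0.1 * rng.standard_normal(T)
y = h1 * b1 + h2 * b2 + noise           # superposed received signal

# Direct decoding of the weak user treats the strong user as noise and fails.
b2_direct = np.sign(y)

# SIC: decode the strongest user first, cancel it, then decode the weak user.
b1_hat = np.sign(y)                     # user 1: user 2 + noise acts as noise
residual = y - h1 * b1_hat              # subtract the reconstructed strong signal
b2_hat = np.sign(residual)              # user 2 decoded from the residual

ber_direct = np.mean(b2_direct != b2)
ber_sic = np.mean(b2_hat != b2)
```

After cancellation the residual is dominated by user 2's signal, so its bit error rate collapses, whereas without cancellation the decision is dictated by the (independent) strong user and the error rate sits near one half.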
1402.1572
2030609192
This paper studies the two-user interference channel with unilateral source cooperation, which consists of two source-destination pairs that share the same channel and where one full-duplex source can overhear the other source through a noisy in-band link. Novel outer bounds of the type 2R1 + R2 and R1 + 2R2 are developed for the class of injective semi-deterministic channels with independent noises at the different source-destination pairs. The bounds are then specialized to the Gaussian noise case. Interesting insights are provided about when these types of bounds are active, or in other words, when unilateral cooperation is too weak and leaves some system resources underutilized.
The Interference Channel (IC) with unilateral source cooperation is a special case of the IC with generalized feedback, or bilateral source cooperation. For this network, several outer bounds on the capacity have been derived @cite_9 @cite_5 @cite_11 . A number of schemes, seeking to match these outer bounds, have been developed as well. For example, @cite_4 proposed a strategy that exploits rate splitting, superposition coding, partial-decode-and-forward relaying, and Gelfand-Pinsker binning. This strategy, specialized to the Gaussian noise channel, turned out to match the sum-rate outer bounds of @cite_11 @cite_5 to within 19 bits under the assumption of equally strong cooperation links with arbitrary direct and interfering links @cite_5 and to within 4 bits in the 'strong cooperation regime' with symmetric direct links and symmetric interfering links @cite_8 .
{ "cite_N": [ "@cite_4", "@cite_8", "@cite_9", "@cite_5", "@cite_11" ], "mid": [ "2101926569", "2547089102", "2100613504", "2131972933", "2103372995" ], "abstract": [ "An Interference Channel with Generalized Feedback (IFC-GF) models a wireless network where the sources can sense the channel activity. The signal overheard from the channel provides information about the activity of the other sources and thus furnishes the basis for cooperation. This two-part paper studies achievable strategies (Part I) and outer bounds (Part II) for the general discrete memoryless IFC-GF with two source-destination pairs. In Part I, the generalized feedback is used to gain knowledge about the message sent by the other source and then exploited in two ways: a) to relay the messages that can be decoded at both destinations, thus realizing the gains of beam-forming of a distributed multiantenna system, and b) to hide the messages that can not be decoded at the nonintended destination, thus leveraging the interference “precancellation” property of dirty-paper coding. We show that our achievable region generalizes several known achievable regions for the IFC-GF and that it reduces to known achievable regions for the channels subsumed by the IFC-GF model. For the Gaussian channel, it is shown that source cooperation enlarges the achievable rate region of the corresponding IFC without generalized feedback cooperation.", "The capacity region of the Gaussian interference channel with two source-destination pairs where the sources can cooperate is not known. Prabhakaran and Viswanath showed that the sum-rate capacity can be achieved to within 19 bits/s/Hz by using a combination of superposition coding and zero-forcing. 
This paper only focuses on the symmetric capacity, i.e., the maximum equal rate point that the two sources can simultaneously achieve in a Gaussian interference channel with symmetric channel gains, in the strong cooperation regime, i.e., when the cooperation link is stronger than either the direct link or the interference link. In this setting, it is shown that the “binning+superposition” achievable scheme of Yang and Tuninetti, implemented with Gaussian codebooks and Dirty Paper Coding, achieves the symmetric capacity to within 2 bits/s/Hz, thereby considerably reducing the previously known gap. The extension of this gap-result to other regimes is also discussed.
The same coding scheme is shown to obtain the sum capacity of the symmetric two-user Gaussian interference channel with noiseless feedback within a constant gap.", "Interference Channels with Generalized Feedback (IFC-GF) are a model for wireless communication systems with source cooperation. GF enables to enlarge the achievable rate region with respect to the non-cooperative IFC without requiring an increase in system resources. This paper develops an outer bound region on the capacity of general IFC-GF and then tighten it further for a class of semi-deterministic IFC-GF that include the “high SNR approximation” of the Gaussian channel and the Gaussian channel as special cases." ] }
1402.1572
2030609192
This paper studies the two-user interference channel with unilateral source cooperation, which consists of two source-destination pairs that share the same channel and where one full-duplex source can overhear the other source through a noisy in-band link. Novel outer bounds of the type 2R1 + R2 and R1 + 2R2 are developed for the class of injective semi-deterministic channels with independent noises at the different source-destination pairs. The bounds are then specialized to the Gaussian noise case. Interesting insights are provided about when these types of bounds are active, or in other words, when unilateral cooperation is too weak and leaves some system resources underutilized.
Source cooperation includes classical feedback as a special case. @cite_6 determined the capacity to within 2 bits of the IC where each source has perfect output feedback from the intended destination; it showed that @math -type bounds are not needed because output feedback eliminates “resource holes,” or system underutilization due to distributed processing, captured by the @math bounds. In @cite_12 , the authors studied the symmetric Gaussian channel with all possible output feedback configurations. It was shown that the bounds developed in @cite_6 suffice for a constant gap characterization except in the case of the “single direct feedback link” model (1000). SahaiIT2013 [Theorem IV.1] proposed a novel outer bound on @math for the injective semi-deterministic channel, to capture the fact that one of the two sources does not receive help. @cite_1 characterized the capacity of the symmetric linear deterministic IC with “degraded output feedback” by developing bounds on @math , whose extension to the Gaussian noise case was left open. In this work we extend the results of @cite_1 @cite_12 to all injective semi-deterministic channels for which, roughly speaking, the noises at the different source-destination pairs are independent.
{ "cite_N": [ "@cite_1", "@cite_12", "@cite_6" ], "mid": [ "2555443877", "2090800027", "2169440553" ], "abstract": [ "The linear deterministic interference channel (LD-IC) with partial feedback is considered. Partial feedback for the LD-IC models a scenario in which the top l most-significant bits of the channel output of receiver j are received as feedback at transmitter j, for j = 1, 2. The rationale for studying the LD-IC with partial feedback comes from the fact that it is a good approximation to the Gaussian interference channel with output feedback corrupted by additive white Gaussian noise (commonly referred to as noisy feedback). The main contribution of this paper is to characterize the capacity region of the symmetric LD-IC with partial feedback. The main ingredient of the proof is to obtain novel upper bounds on the weighted rates 2R1 + R2 and R1 + 2R2.", "In this paper, we study the impact of different channel output feedback architectures on the capacity of the two-user interference channel. For a two-user interference channel, a feedback link can exist between receivers and transmitters in nine canonical architectures (see Fig. 3), ranging from only one feedback link to four feedback links. We derive the exact capacity region for the symmetric deterministic interference channel and the constant-gap capacity region for the symmetric Gaussian interference channel for all of the nine architectures. We show that for a linear deterministic symmetric interference channel, in the weak interference regime, all models of feedback, except the one in which only one of the receivers feeds back to its own transmitter, have the identical capacity region. When only one of the receivers feeds back to its own transmitter, the capacity region is a strict subset of the capacity region of the rest of the feedback models in the weak interference regime. However, the sum-capacity of all feedback models is identical in the weak interference regime. 
Moreover, in the strong interference regime, all models of feedback with at least one of the receivers feeding back to its own transmitter have the identical sum-capacity. For the Gaussian interference channel, the results of the linear deterministic model follow, where capacity is replaced with approximate capacity.", "We characterize the capacity region to within 2 bits/s/Hz and the symmetric capacity to within 1 bit/s/Hz for the two-user Gaussian interference channel (IC) with feedback. We develop achievable schemes and derive a new outer bound to arrive at this conclusion. One consequence of the result is that feedback provides a multiplicative gain at high signal-to-noise ratio: the gain becomes arbitrarily large for certain channel parameters. This finding is in contrast to point-to-point and multiple-access channels, where feedback provides no gain and only bounded additive gain, respectively. The result makes use of a linear deterministic model to provide insights into the Gaussian channel. This deterministic model is a special case of the El Gamal-Costa deterministic model and, as a side-generalization, we establish the exact feedback capacity region of this general class of deterministic ICs." ] }
1402.0796
1610215495
One of the most tedious tasks in the application of machine learning is model selection, i.e. hyperparameter selection. Fortunately, recent progress has been made in the automation of this process, through the use of sequential model-based optimization (SMBO) methods. This can be used to optimize the cross-validation performance of a learning algorithm over the values of its hyperparameters. However, it is well known that ensembles of learned models almost consistently outperform a single model, even if properly selected. In this paper, we thus propose an extension of SMBO methods that automatically constructs such ensembles. This method builds on a recently proposed ensemble construction paradigm known as agnostic Bayesian learning. In experiments on 22 regression and 39 classification data sets, we confirm the success of this proposed approach, which is able to outperform model selection with SMBO.
On the other hand, traditional ensemble methods such as , , and @cite_3 require a predefined set of models and are not straightforward to adapt to an infinite set of models.
{ "cite_N": [ "@cite_3" ], "mid": [ "1678356000" ], "abstract": [ "Function estimation/approximation is viewed from the perspective of numerical optimization in function space, rather than parameter space. A connection is made between stagewise additive expansions and steepest-descent minimization. A general gradient-descent boosting paradigm is developed for additive expansions based on any fitting criterion. Specific algorithms are presented for least-squares, least absolute deviation, and Huber-M loss functions for regression, and multiclass logistic likelihood for classification. Special enhancements are derived for the particular case where the individual additive components are regression trees, and tools for interpreting such TreeBoost models are presented. Gradient boosting of regression trees produces competitive, highly robust, interpretable procedures for both regression and classification, especially appropriate for mining less than clean data. Connections between this approach and the boosting methods of Freund and Schapire and of Friedman, Hastie and Tibshirani are discussed." ] }
1402.0728
2950772259
We assume that recommender systems are more successful when they are based on a thorough understanding of how people process information. In the current paper we test this assumption in the context of social tagging systems. Cognitive research on how people assign tags has shown that they draw on two interconnected levels of knowledge in their memory: on a conceptual level of semantic fields or topics, and on a lexical level that turns patterns on the semantic level into words. Another strand of tagging research reveals a strong impact of time-dependent forgetting on users' tag choices, such that recently used tags have a higher probability of being reused than "older" tags. In this paper, we align both strands by implementing a computational theory of human memory that integrates the two-level conception and the process of forgetting in the form of a tag recommender and test it in three large-scale social tagging datasets (drawn from BibSonomy, CiteULike and Flickr). As expected, our results reveal a selective effect of time: forgetting is much more pronounced on the lexical level of tags. Second, an extensive evaluation based on this observation shows that a tag recommender interconnecting both levels and integrating time-dependent forgetting on the lexical level results in highly accurate predictions and outperforms other well-established algorithms, such as Collaborative Filtering, Pairwise Interaction Tensor Factorization, FolkRank and two alternative time-dependent approaches. We conclude that tag recommenders can benefit from going beyond the manifest level of word co-occurrences, and from including forgetting processes on the lexical level.
In contrast to this study, previous research on tag recommender systems has taken a more pragmatic stance, typically ignoring cognitive-psychological models that can help explain how people tag (as shown in this work). To date, two approaches have been established -- folksonomy-based and content-based tag recommender approaches @cite_28 . In our work we focus on folksonomy-based approaches.
{ "cite_N": [ "@cite_28" ], "mid": [ "2274024856" ], "abstract": [ "" ] }
1402.0728
2950772259
We assume that recommender systems are more successful when they are based on a thorough understanding of how people process information. In the current paper we test this assumption in the context of social tagging systems. Cognitive research on how people assign tags has shown that they draw on two interconnected levels of knowledge in their memory: on a conceptual level of semantic fields or topics, and on a lexical level that turns patterns on the semantic level into words. Another strand of tagging research reveals a strong impact of time-dependent forgetting on users' tag choices, such that recently used tags have a higher probability of being reused than "older" tags. In this paper, we align both strands by implementing a computational theory of human memory that integrates the two-level conception and the process of forgetting in the form of a tag recommender and test it in three large-scale social tagging datasets (drawn from BibSonomy, CiteULike and Flickr). As expected, our results reveal a selective effect of time: forgetting is much more pronounced on the lexical level of tags. Second, an extensive evaluation based on this observation shows that a tag recommender interconnecting both levels and integrating time-dependent forgetting on the lexical level results in highly accurate predictions and outperforms other well-established algorithms, such as Collaborative Filtering, Pairwise Interaction Tensor Factorization, FolkRank and two alternative time-dependent approaches. We conclude that tag recommenders can benefit from going beyond the manifest level of word co-occurrences, and from including forgetting processes on the lexical level.
Probably the most prominent work in this context is that of @cite_3 , who introduced an algorithm called FolkRank (FR) that has established itself as the most prominent benchmarking tag recommender approach over the past few years. Subsequent and other popular works in this context are the studies of Jäschke et al. @cite_15 and Hamouda & Wanas @cite_20 , who introduced a set of Collaborative Filtering (CF) approaches for the problem of recommending tags to the user in a personalized manner. More recent and to some extent also well-known works are, e.g., the studies of @cite_24 , @cite_14 , @cite_26 , @cite_11 and @cite_25 , which introduce a factorization model, a semantic model (based on LDA), a link prediction model or a time-based model to recommend tags to users (see Section ).
{ "cite_N": [ "@cite_14", "@cite_26", "@cite_3", "@cite_24", "@cite_15", "@cite_25", "@cite_20", "@cite_11" ], "mid": [ "2031237011", "", "", "2089349245", "1549874165", "2152674497", "1977173018", "2028373970" ], "abstract": [ "Tagging systems have become major infrastructures on the Web. They allow users to create tags that annotate and categorize content and share them with other users, very helpful in particular for searching multimedia content. However, as tagging is not constrained by a controlled vocabulary and annotation guidelines, tags tend to be noisy and sparse. Especially new resources annotated by only a few users have often rather idiosyncratic tags that do not reflect a common perspective useful for search. In this paper we introduce an approach based on Latent Dirichlet Allocation (LDA) for recommending tags of resources in order to improve search. Resources annotated by many users and thus equipped with a fairly stable and complete tag set are used to elicit latent topics to which new resources with only a few tags are mapped. Based on this, other tags belonging to a topic can be recommended for the new resource. Our evaluation shows that the approach achieves significantly better precision and recall than the use of association rules, suggested in previous work, and also recommends more specific tags. Moreover, extending resources with these recommended tags significantly improves search for new resources.", "", "", "Tagging plays an important role in many recent websites. Recommender systems can help to suggest a user the tags he might want to use for tagging a specific item. Factorization models based on the Tucker Decomposition (TD) model have been shown to provide high quality tag recommendations outperforming other approaches like PageRank, FolkRank, collaborative filtering, etc. The problem with TD models is the cubic core tensor resulting in a cubic runtime in the factorization dimension for prediction and learning. 
In this paper, we present the factorization model PITF (Pairwise Interaction Tensor Factorization) which is a special case of the TD model with linear runtime both for learning and prediction. PITF explicitly models the pairwise interactions between users, items and tags. The model is learned with an adaptation of the Bayesian personalized ranking (BPR) criterion which originally has been introduced for item recommendation. Empirically, we show on real world datasets that this model outperforms TD largely in runtime and can even achieve better prediction quality. Besides our lab experiments, PITF has also won the ECML PKDD Discovery Challenge 2009 for graph-based tag recommendation.", "Collaborative tagging systems allow users to assign keywords--so called "tags"--to resources. Tags are used for navigation, finding resources and serendipitous browsing and thus provide an immediate benefit for users. These systems usually include tag recommendation mechanisms easing the process of finding good tags for a resource, but also consolidating the tag vocabulary across users. In practice, however, only very basic recommendation strategies are applied. In this paper we evaluate and compare two recommendation algorithms on large-scale real life datasets: an adaptation of user-based collaborative filtering and a graph-based recommender built on top of FolkRank. We show that both provide better results than non-personalized baseline methods. Especially the graph-based recommender outperforms existing methods considerably.", "The emergence of social tagging systems enables users to organize and share resources they are interested in. In order to ease the human-computer interaction with such systems, extensive research has been done on how to recommend personalized tags for resources. 
Users' preferences towards different tags are usually regarded as invariant over time, neglecting the switch of users' short-term interests. In this paper, we examine the temporal factor in users' tagging behaviors by investigating the occurrence patterns of tags and then incorporate this into a novel method for ranking tags. To assess a tag for a user-resource pair, we first consider the user's general interest in it, then we calculate its recurrence probability based on the temporal usage pattern, and finally we consider its tag relevance to the content of the post. Experiments conducted on real datasets from Bibsonomy and Delicious demonstrate that our method outperforms other temporal models and state-of-the-art tag prediction methods.", "More and more content on the Web is generated by users. To organize this information and make it accessible via current search technology, tagging systems have gained tremendous popularity. Especially for multimedia content they allow users to annotate resources with keywords (tags), which opens the door for classic text-based information retrieval. To support the user in choosing the right keywords, tag recommendation algorithms have emerged. In this setting, not only the content is decisive for recommending relevant tags but also the user's preferences. In this paper we introduce an approach to personalized tag recommendation that combines a probabilistic model of tags from the resource with tags from the user. As models we investigate simple language models as well as Latent Dirichlet Allocation. Extensive experiments on a real world dataset crawled from a big tagging system show that personalization improves tag recommendation, and our approach significantly outperforms state-of-the-art approaches.", "In social bookmarking systems, existing methods in tag prediction have shown that the performance of prediction can be significantly improved by modeling users' preferences. 
However, these preferences are usually treated as constant over time, neglecting the temporal factor within users' behaviors. In this paper, we study the problem of session-like behavior in social tagging systems and demonstrate that the predictive performance can be improved by considering sessions. Experiments, conducted on three public datasets, show that our session-based method can outperform baselines and two state-of-the-art algorithms significantly." ] }
1402.0728
2950772259
We assume that recommender systems are more successful when they are based on a thorough understanding of how people process information. In the current paper we test this assumption in the context of social tagging systems. Cognitive research on how people assign tags has shown that they draw on two interconnected levels of knowledge in their memory: on a conceptual level of semantic fields or topics, and on a lexical level that turns patterns on the semantic level into words. Another strand of tagging research reveals a strong impact of time-dependent forgetting on users' tag choices, such that recently used tags have a higher probability of being reused than "older" tags. In this paper, we align both strands by implementing a computational theory of human memory that integrates the two-level conception and the process of forgetting in the form of a tag recommender and test it in three large-scale social tagging datasets (drawn from BibSonomy, CiteULike and Flickr). As expected, our results reveal a selective effect of time: forgetting is much more pronounced on the lexical level of tags. Second, an extensive evaluation based on this observation shows that a tag recommender interconnecting both levels and integrating time-dependent forgetting on the lexical level results in highly accurate predictions and outperforms other well-established algorithms, such as Collaborative Filtering, Pairwise Interaction Tensor Factorization, FolkRank and two alternative time-dependent approaches. We conclude that tag recommenders can benefit from going beyond the manifest level of word co-occurrences, and from including forgetting processes on the lexical level.
Although the latter mentioned approaches perform more or less well in accurately predicting users' tags, all of them ignore well-established and long-standing research from cognitive psychology on how humans process information. To bridge this gap, we have recently introduced two simple and psychologically plausible methods @cite_23 @cite_13 (= 3L and BLL+C) that are able (with limitations) to explain memory processes in social tagging systems. Based on these studies and new observations made in the current work, we finally present a novel time-based tag recommender algorithm (= 3LT @math ) that significantly outperforms the state-of-the-art.
{ "cite_N": [ "@cite_13", "@cite_23" ], "mid": [ "2949180239", "2077699614" ], "abstract": [ "In this paper, we introduce a tag recommendation algorithm that mimics the way humans draw on items in their long-term memory. This approach uses the frequency and recency of previous tag assignments to estimate the probability of reusing a particular tag. Using three real-world folksonomies gathered from bookmarks in BibSonomy, CiteULike and Flickr, we show how adding a time-dependent component outperforms conventional \"most popular tags\" approaches and another existing and very effective but less theory-driven, time-dependent recommendation mechanism. By combining our approach with a simple resource-specific frequency analysis, our algorithm outperforms other well-established algorithms, such as FolkRank, Pairwise Interaction Tensor Factorization and Collaborative Filtering. We conclude that our approach provides an accurate and computationally efficient model of a user's temporal tagging behavior. We show how effective principles for information retrieval can be designed and implemented if human memory processes are taken into account.", "When interacting with social tagging systems, humans exercise complex processes of categorization that have been the topic of much research in cognitive science. In this paper we present a recommender approach for social tags derived from ALCOVE, a model of human category learning. The basic architecture is a simple three-layers connectionist model. The input layer encodes patterns of semantic features of a user-specific resource, such as latent topics elicited through Latent Dirichlet Allocation (LDA) or available external categories. The hidden layer categorizes the resource by matching the encoded pattern against already learned exemplar patterns. The latter are composed of unique feature patterns and associated tag distributions. 
Finally, the output layer samples tags from the associated tag distributions to verbalize the preceding categorization process. We have evaluated this approach on a real-world folksonomy gathered from Wikipedia bookmarks in Delicious. In the experiment our approach outperformed LDA, a well-established algorithm. We attribute this to the fact that our approach processes semantic information (either latent topics or external categories) across the three different layers. With this paper, we demonstrate that a theoretically guided design of algorithms not only holds potential for improving existing recommendation mechanisms, but it also allows us to derive more generalizable insights about how human information interaction on the Web is determined by both semantic and verbal processes." ] }
1402.0601
2951340236
The paper considers the complexity of verifying that a finite state system satisfies a number of definitions of information flow security. The systems model considered is one in which agents operate synchronously with awareness of the global clock. This enables timing based attacks to be captured, whereas previous work on this topic has dealt primarily with asynchronous systems. Versions of the notions of nondeducibility on inputs, nondeducibility on strategies, and an unwinding based notion are formulated for this model. All three notions are shown to be decidable, and their computational complexity is characterised.
A number of works have defined notions of security for synchronous or timed systems, but fewer complexity results are known. Köpf and Basin @cite_23 define a notion similar to @math and show it is PTIME decidable. Similar definitions are also used in the literature on language-based security @cite_15 @cite_12 .
{ "cite_N": [ "@cite_15", "@cite_12", "@cite_23" ], "mid": [ "1997775274", "1518533182", "" ], "abstract": [ "One aspect of security in mobile code is privacy: private (or secret) data should not be leaked to unauthorised agents. Most of the work on secure information flow has until recently only been concerned with detecting direct and indirect flows. Secret information can however be leaked to the attacker also through covert channels. It is very reasonable to assume that the attacker, even as an external observer, can monitor the timing (including termination) behaviour of the program. Thus to claim a program secure, the security analysis must take also these into account. In this work we present a surprisingly simple solution to the problem of detecting timing leakages to external observers. Our system consists of a type system in which well-typed programs do not leak secret information directly, indirectly or through timing, and a transformation for removing timing leakages. For any program that is well typed according to Volpano and Smith [VS97a], our transformation generates a program that is also free of timing leaks.", "This paper presents a type system which guarantees that well-typed programs in a procedural programming language satisfy a noninterference security property. With all program inputs and outputs classified at various security levels, the property basically states that a program output, classified at some level, can never change as a result of modifying only inputs classified at higher levels. Intuitively, this means the program does not “leak” sensitive data. The property is similar to a notion introduced years ago by Goguen and Meseguer to model security in multi-level computer systems [7]. We also give an algorithm for inferring and simplifying principal types, which document the security requirements of programs.", "" ] }
1401.8180
1974701307
Some distinguished types of voters, such as vetoes, passers or nulls, as well as some others, play a significant role in voting systems because they are either the most powerful or the least powerful voters in the game, independently of the measure used to evaluate power. In this paper we are concerned with the design of voting systems with at least one type of these extreme voters and with few types of equivalent voters. With this purpose in mind, we enumerate these special classes of games and find out that their number always follows a Fibonacci sequence with smooth polynomial variations. As a consequence, we find several families of games with the same asymptotic exponential behavior except for a multiplicative factor, which is the golden number or its square. From a more general point of view, our studies are related to the design of voting structures with a predetermined importance ranking.
The number @math of complete simple games with @math voters belonging to exactly two types of voters was recently enumerated in @cite_22 and later, with a simpler proof, in @cite_27 : where @math are the Fibonacci numbers, a well-known sequence of integers defined by the recurrence relation @math , @math , and @math for all @math .
{ "cite_N": [ "@cite_27", "@cite_22" ], "mid": [ "1993559206", "2158578607" ], "abstract": [ "We state an integer linear programming formulation for the unique characterization of complete simple games, i.e. a special subclass of monotone Boolean functions. In order to apply the parametric Barvinok algorithm to obtain enumeration formulas for these discrete objects we provide a tailored decomposition of the integer programming formulation into a finite list of suitably chosen sub-cases. As for the original enumeration problem of Dedekind on Boolean functions we have to introduce some parameters to be able to derive exact formulas for small parameters. Recently, have proven an enumeration formula for complete simple games with two types of voters. We will provide a shorter proof and a new enumeration formula for complete simple games with two minimal winning vectors.", "We investigate voting systems with two classes of voters, for which there is a hierarchy giving each member of the stronger class more influence or importance than each member of the weaker class. We deduce one important counting fact for voting systems that allows us to determine how many of them there are for a given number of voters. In fact, the number of these systems follows a Fibonacci sequence with a smooth polynomial variation on the number of voters. On the other hand, we classify by means of some parameters which of these systems are weighted. This result allows us to state an asymptotic conjecture which is opposed to what occurs for symmetric games." ] }
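The Fibonacci recurrence invoked in the related-work passage above can be sketched in a few lines. Note that the concrete initial conditions are elided (as @math placeholders) in the text, so the standard convention F_1 = F_2 = 1 used below is an assumption; the cited enumeration may use shifted indices or different starting values.

```python
def fibonacci(n: int) -> int:
    """Return the n-th Fibonacci number (1-indexed).

    Assumes the standard convention F_1 = F_2 = 1, with
    F_n = F_{n-1} + F_{n-2} for n > 2; the exact initial
    conditions used in the cited enumeration are not given
    in the text above.
    """
    a, b = 1, 1  # F_1, F_2
    for _ in range(n - 1):
        a, b = b, a + b  # shift the window one step along the sequence
    return a
```

Under this convention the sequence runs 1, 1, 2, 3, 5, 8, ..., and the golden-ratio factor mentioned in the abstract arises from the asymptotic growth F_n ~ φ^n / √5.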
1401.8180
1974701307
Some distinguished types of voters, such as vetoes, passers or nulls, as well as some others, play a significant role in voting systems because they are either the most powerful or the least powerful voters in the game, independently of the measure used to evaluate power. In this paper we are concerned with the design of voting systems with at least one type of these extreme voters and with few types of equivalent voters. With this purpose in mind, we enumerate these special classes of games and find out that their number always follows a Fibonacci sequence with smooth polynomial variations. As a consequence, we find several families of games with the same asymptotic exponential behavior except for a multiplicative factor, which is the golden number or its square. From a more general point of view, our studies are related to the design of voting structures with a predetermined importance ranking.
The number of complete simple games with one shift-minimal winning coalition, see e.g. @cite_28 @cite_23 , was determined in @cite_4 : @math , where @math denotes the number of complete simple games with @math voters, @math equivalent types of voters, and @math shift-minimal winning coalitions. For complete simple games with two shift-minimal winning coalitions, a more complicated enumeration formula was determined in @cite_27 . For given values of the parameters @math and @math it is possible to compute an exact enumeration formula for @math based on the parametric Barvinok algorithm and a tailored decomposition of a certain linear programming formulation for complete simple games, see @cite_27 . We remark that the exact numbers of simple games are known up to @math voters, and the exact numbers of complete simple games and weighted voting games are known up to @math voters.
{ "cite_N": [ "@cite_28", "@cite_27", "@cite_4", "@cite_23" ], "mid": [ "2062270858", "1993559206", "15821840", "2010577317" ], "abstract": [ "Abstract Completeness is a necessary condition for a simple game to be representable as a weighted voting system. This paper deals with the class of complete simple games and centers on their structure. Using an extension of Isbell's desirability relation to coalitions, different from the extension normally used, we associate with any complete simple game a lattice of coalition models based upon the types of indifferent players. We establish the basic properties of a vector with natural components and a matrix with non-negative integer entries, both closely related to the lattice, which are also shown to be characteristic invariants of the game, in the sense that they determine it uniquely up to isomorphisms.", "We state an integer linear programming formulation for the unique characterization of complete simple games, i.e. a special subclass of monotone Boolean functions. In order to apply the parametric Barvinok algorithm to obtain enumeration formulas for these discrete objects we provide a tailored decomposition of the integer programming formulation into a finite list of suitably chosen sub-cases. As for the original enumeration problem of Dedekind on Boolean functions we have to introduce some parameters to be able to derive exact formulas for small parameters. Recently, have proven an enumeration formula for complete simple games with two types of voters. We will provide a shorter proof and a new enumeration formula for complete simple games with two minimal winning vectors.", "Some real-world examples of simple games, like the procedure to amend the Canadian Constitution, are complete simple games with minimum. Using characteristic invariants for this class of games, we study different types of solution concepts. 
For an arbitrary number of players we get the nucleolus by means of a determinate compatible system of equations, characterize the maximality of the kernel and give a method to calculate semivalues. Several applications are found at the end of the paper.", "In this paper we give structural characterizations of disjunctive and conjunctive hierarchical simple games by characterizing them as complete games with a unique shift-maximal losing coalition, and a unique shift-minimal winning coalition, respectively. We prove canonical representation theorems for both types of hierarchical games and establish duality between them. We characterize the disjunctive and conjunctive hierarchical games that are weighted majority games. This paper was inspired by (2008) and Farras and Padro (2010) characterizations of ideal weighted threshold access structures of secret sharing schemes." ] }
1401.8180
1974701307
Some distinguished types of voters, as vetoes, passers or nulls, as well as some others, play a significant role in voting systems because they are either the most powerful or the least powerful voters in the game independently of the measure used to evaluate power. In this paper we are concerned with the design of voting systems with at least one type of these extreme voters and with few types of equivalent voters. With this purpose in mind we enumerate these special classes of games and find out that its number always follows a Fibonacci sequence with smooth polynomial variations. As a consequence we find several families of games with the same asymptotic exponential behavior except for a multiplicative factor which is the golden number or its square. From a more general point of view, our studies are related with the design of voting structures with a predetermined importance ranking.
The structures under study, complete simple games, are of interest in several fields apart from voting, although in this paper we adopt the standard voting background. Fields in which these structures arise include circuits, clusters, threshold logic, cryptography, reliability, and neural networks, among others; see e.g. @cite_25 for an overview. Recently, simple games have been studied using binary decision diagrams; see e.g. @cite_7 @cite_15 .
{ "cite_N": [ "@cite_15", "@cite_25", "@cite_7" ], "mid": [ "2015247315", "", "2050378181" ], "abstract": [ "A simple game is a pair consisting of a finite set N of players and a set of winning coalitions. (Vector-) weighted majority games ((V) WMG) are a special case of simple games, in which an integer (vector) weight can be assigned to each player and there is a quota which a coalition has to achieve in order to win. Binary decision diagrams (BDDs) are used as compact representations for Boolean functions and sets of subsets. This paper shows, how a quasi-reduced and ordered BDD (QOBDD) of the winning coalitions of a (V) WMG can be build, how one can compute the minimal winning coalitions and how one can easily compute the Banzhaf, Shapley-Shubik, Holler-Packel and Deegan-Packel indices of the players. E.g. in case of weighted majority games it is shown that the Banzhaf and Holler-Packel indices of all players can be computed in expected time and in general, the Banzhaf indices can be computed in time linear in the size of the QOBDD representation of the winning coalitions. Other running times are proven as well. The algorithms were tested on some real world games, e.g. the International Monetary Fund and the EU Treaty of Nice.", "", "Simple games are a powerful tool to analyze decision - making and coalition formation in social and political life. In this paper, we present relation-algebraic models of simple games and develop relational specifications for solving some basic problems of them. In particular, we test certain fundamental properties of simple games and compute specific players and coalitions. We also apply relation algebra to determine power indices. This leads to relation-algebraic specifications, which can be evaluated with the help of the BDD-based tool RelView after a simple translation into the tool's programming language. 
In order to demonstrate the visualization facilities of RelView, we consider an example of the Catalonian Parliament after the 2003 election." ] }
1401.8180
1974701307
Some distinguished types of voters, as vetoes, passers or nulls, as well as some others, play a significant role in voting systems because they are either the most powerful or the least powerful voters in the game independently of the measure used to evaluate power. In this paper we are concerned with the design of voting systems with at least one type of these extreme voters and with few types of equivalent voters. With this purpose in mind we enumerate these special classes of games and find out that its number always follows a Fibonacci sequence with smooth polynomial variations. As a consequence we find several families of games with the same asymptotic exponential behavior except for a multiplicative factor which is the golden number or its square. From a more general point of view, our studies are related with the design of voting structures with a predetermined importance ranking.
Special types of voters in simple games were also considered in @cite_0 . Complexity results for identifying some of the proposed distinguished types of voters can be found in @cite_24 .
{ "cite_N": [ "@cite_0", "@cite_24" ], "mid": [ "2008042549", "1605139543" ], "abstract": [ "A player, in a proper and monotonic simple game, is dominant if he holds a “strict majority” within a winning coalition. A (non-dictatorial) simple game is dominated if it contains exactly one dominant player. We investigate several possibilities of coalition formation in dominated simple games, under the assumption that the dominant player is given a mandate to form a coalition. The relationship between the various hypotheses on coalition formation in dominated games is investigated in the first seven sections. In the last section we classify real-life data on European parliaments and town councils in Israel.", "Simple coalitional games are a fundamental class of cooperative games and voting games which are used to model coalition formation, resource allocation and decision making in computer science, artificial intelligence and multiagent systems. Although simple coalitional games are well studied in the domain of game theory and social choice, their algorithmic and computational complexity aspects have received less attention till recently. The computational aspects of simple coalitional games are of increased importance as these games are used by computer scientists to model distributed settings. This thesis fits in the wider setting of the interplay between economics and computer science which has led to the development of algorithmic game theory and computational social choice. A unified view of the computational aspects of simple coalitional games is presented here for the first time. Certain complexity results also apply to other coalitional games such as skill games and matching games. The following issues are given special consideration: influence of players, limit and complexity of manipulations in the coalitional games and complexity of resource allocation on networks. The complexity of comparison of influence between players in simple games is characterized. 
The simple games considered are represented by winning coalitions, minimal winning coalitions, weighted voting games or multiple weighted voting games. A comprehensive classification of weighted voting games which can be solved in polynomial time is presented. An efficient algorithm which uses generating functions and interpolation to compute an integer weight vector for target power indices is proposed. Voting theory, especially the Penrose Square Root Law, is used to investigate the fairness of a real life voting model. Computational complexity of manipulation in social choice protocols can determine whether manipulation is computationally feasible or not. The computational complexity and bounds of manipulation are considered from various angles including control, false-name manipulation and bribery. Moreover, the computational complexity of computing various cooperative game solutions of simple games in dierent representations is studied. Certain structural results regarding least core payos extend to the general monotone cooperative game. The thesis also studies a coalitional game called the spanning connectivity game. It is proved that whereas computing the Banzhaf values and Shapley-Shubik indices of such games is #P-complete, there is a polynomial time combinatorial algorithm to compute the nucleolus. The results have interesting significance for optimal strategies for the wiretapping game which is a noncooperative game defined on a network." ] }
1401.8142
2062349084
We present the Integrated Size and Price Optimization Problem (ISPO) for a fashion discounter with many branches. Based on a two-stage stochastic programming model with recourse, we develop an exact algorithm and a production-compliant heuristic that produces small optimality gaps. In a field study we show that a distribution of supply over branches and sizes based on ISPO solutions is significantly better than a one-stage optimization of the distribution ignoring the possibility of optimal pricing.
The linking of inventory and dynamic pricing decisions has been addressed in @cite_0 @cite_10 @cite_4 @cite_13 . More recent approaches incorporate robustness @cite_9 or game-theoretic aspects, such as competition and equilibria @cite_6 . Common to these results is an optimal-control approach via fluid approximation and/or a dynamic-programming approach. The real-world settings of companies usually involve additional side constraints (in our case: the restriction on the number of used lot-types) and costs (in our case: lot-type handling and opening costs) that would violate important assumptions of optimal control and would require very large state spaces in dynamic programming.
{ "cite_N": [ "@cite_13", "@cite_4", "@cite_9", "@cite_6", "@cite_0", "@cite_10" ], "mid": [ "2123265794", "2069855826", "2066631061", "1971964132", "97069210", "1537013696" ], "abstract": [ "We consider a problem of dynamically pricing a single product sold by a monopolist over a short time period. If demand characteristics change throughout the period, it becomes attractive for the company to adjust price continuously to respond to such changes (i.e., price-discriminate intertemporally). However, in practice there is typically a limit on the number of times the price can be adjusted due to the high costs associated with frequent price changes. If that is the case, instead of a continuous pricing rule the company might want to establish a piece-wise constant pricing policy in order to limit the number of price adjustments. Such a pricing policy, which involves optimal choice of prices and timing of price changes, is the focus of this paper. We analyze the pricing problem with a limited number of price changes in a dynamic, deterministic environment in which demand depends on the current price and time, and there is a capacity inventory constraint that may be set optimally ahead of the selling season. The arrival rate can evolve in time arbitrarily, allowing us to model situations in which prices decrease, increase, or neither. We consider several plausible scenarios where pricing and or timing of price changes are endogenized. Various notions of complementarity (single-crossing property, supermodularity and total positivity) are explored to derive structural results: conditions sufficient for the uniqueness of the solution and the monotonicity of prices throughout the sales period. Furthermore, we characterize the impact of the capacity constraint on the optimal prices and the timing of price changes and provide several other comparative statics results. 
Additional insights are obtained directly from the solutions of various special cases.", "This paper addresses the simultaneous determination of pricing and inventory replenishment strategies in the face of demand uncertainty. More specifically, we analyze the following single item, periodic review model. Demands in consecutive periods are independent, but their distributions depend on the item's price in accordance with general stochastic demand functions. The price charged in any given period can be specified dynamically as a function of the state of the system. A replenishment order may be placed at the beginning of some or all of the periods. Stockouts are fully backlogged. We address both finite and infinite horizon models, with the objective of maximizing total expected discounted profit or its time average value, assuming that prices can either be adjusted arbitrarily (upward or downward) or that they can only be decreased. We characterize the structure of an optimal combined pricing and inventory strategy for all of the above types of models. We also develop an efficient value iteration method to compute these optimal strategies. Finally, we report on an extensive numerical study that characterizes various qualitative properties of the optimal strategies and corresponding optimal profit values.", "In this paper, we present a robust optimization formulation for dealing with demand uncertainty in a dynamic pricing and inventory control problem for a make-to-stock manufacturing system. We consider a multi-product capacitated, dynamic setting. We introduce a demand-based fluid model where the demand is a linear function of the price, the inventory cost is linear, the production cost is an increasing strictly convex function of the production rate and all coefficients are time-dependent. A key part of the model is that no backorders are allowed. 
We show that the robust formulation is of the same order of complexity as the nominal problem and demonstrate how to adapt the nominal (deterministic) solution algorithm to the robust problem.", "In this paper, we study a make-to-stock manufacturing system where two firms compete through dynamic pricing and inventory control. Our goal is to address competition (in particular a duopoly setting) together with the presence of demand uncertainty. We consider a dynamic setting where multiple products share production capacity. We introduce a demand-based fluid model where the demand is a linear function of the price of the supplier and of her competitor, the inventory and production costs are quadratic, and all coefficients are time dependent. A key part of the model is that no backorders are allowed and the strategy of a supplier depends on her competitor's strategy. First, we reformulate the robust problem as a fluid model of similar form to the deterministic one and show existence of a Nash equilibrium in continuous time. We then discuss issues of uniqueness and address how to compute a particular Nash equilibrium, i.e., the normalized Nash equilibrium.", "A periodical multi-product pricing and inventory control problem with applications to production planning and airline revenue management is studied. The objective function of the single-period model is shown to be convex for certain types of demand distributions, thus tractable for large instances. A heuristic is proposed to solve the more complex multi-period problem, which is an interesting combination of linear and dynamic programming. 
Numerical experiments and theoretical bounds on the optimal expected revenue suggest that the extent to which a dynamic policy based on a stochastic model will outperform a simple static policy based on a deterministic model depends on the level of demand variability as measured by the coefficient of variation.", "Recent years have seen scores of retail and manufacturing companies exploring innovative pricing strategies in an effort to improve their operations and ultimately the bottom line. Firms are employing such varied tools as dynamic pricing over time, target pricing to different classes of customers, or pricing to learn about customer demand. The benefits can be significant, including not only potential increases in profit, but also improvements such as reduction in demand or production variability, resulting in more efficient supply chains." ] }
1401.8142
2062349084
We present the Integrated Size and Price Optimization Problem (ISPO) for a fashion discounter with many branches. Based on a two-stage stochastic programming model with recourse, we develop an exact algorithm and a production-compliant heuristic that produces small optimality gaps. In a field study we show that a distribution of supply over branches and sizes based on ISPO solutions is significantly better than a one-stage optimization of the distribution ignoring the possibility of optimal pricing.
Dynamic pricing is a well-studied problem in the revenue management literature (see, e.g., @cite_16 @cite_20 @cite_8 @cite_7 @cite_18 ). Again, complicated operational side constraints are usually neglected in favor of a more principled study of isolated aspects. Again, some work has been done from a game-theoretic point of view, e.g., on strategic customers (see @cite_3 ).
{ "cite_N": [ "@cite_18", "@cite_7", "@cite_8", "@cite_3", "@cite_16", "@cite_20" ], "mid": [ "2130822217", "", "2136768128", "2123421883", "2100951624", "" ], "abstract": [ "We consider a dynamic pricing model for selling a given stock of a perishable product over a finite time horizon. Customers, whose reservation price distribution changes over time, arrive according to a nonhomogeneous Poisson process. We show that at any given time, the optimal price decreases with inventory. We also identify a sufficient condition under which the optimal price decreases over time for a given inventory level. This sufficient condition requires that the willingness of a customer to pay a premium for the product does not increase over time. In addition to shedding managerial insight, these structural properties enable efficient computation of the optimal policy.Numerical studies are conducted to show the revenue impact of dynamic price policies. Price changes are set to compensate for statistical fluctuations of demand and to respond to shifts of the reservation price. For the former, our examples show that using optimal dynamic optimal policies achieves 2.4--7.3 revenue improvement over the optimal single price policy. For the latter, the revenue increase can be as high as 100 . These results explain why yield management has become so essential to fashion retailing and travel service industries.", "", "A firm has inventories of a set of components that are used to produce a set of products. There is a finite horizon over which the firm can sell its products. Demand for each product is a stochastic point process with an intensity that is a function of the vector of prices for the products and the time at which these prices are offered. The problem is to price the finished products so as to maximize total expected revenue over the finite sales horizon. An upper bound on the optimal expected revenue is established by analyzing a deterministic version of the problem. 
The solution to the deterministic problem suggests two heuristics for the stochastic problem that are shown to be asymptotically optimal as the expected sales volume tends to infinity. Several applications of the model to network yield management are given. Numerical examples illustrate both the range of problems that can be modeled under this framework and the effectiveness of the proposed heuristics. The results provide several fundamental insights into the performance of yield management systems.", "We propose a game-theoretical model of a retailer who sells a limited inventory of a product over a finite selling season by using one of two inventory display formats: display all (DA) and display one (DO). Under DA, the retailer displays all available units so that each arriving customer has perfect information about the actual inventory level. Under DO, the retailer displays only one unit at a time so that each customer knows about product availability but not the actual inventory level. Recent research suggests that when faced with strategic consumers, the retailer could increase expected profits by making an upfront commitment to a price path. We focus on such pricing strategies in this paper, and study the potential benefit of DO compared to DA, and its effectiveness in mitigating the adverse impact of strategic consumer behavior. We find support for our hypothesis that the DO format could potentially create an increased sense of shortage risk, and hence it is better than the DA format. However, although potentially beneficial, a move from DA to DO is typically very far from eliminating the adverse impact of strategic consumer behavior. We observe that, generally, it is not important for a retailer to modify the level of inventory when moving from a DA to a DO format; a change in the display format, along with an appropriate price modification, is typically sufficient. 
Interestingly, across all scenarios in which a change in inventory is significantly beneficial, we observed that only one of the following two actions takes place: either the premium price is increased along with a reduction in inventory, or inventory is increased along with premium price reduction. We find that the marginal benefit of DO can vary dramatically as a function of the per-unit cost to the retailer. In particular, when the retailer's per-unit cost is relatively high, but not too high to make sales unprofitable or to justify exclusive sales to high-valuation customers only, the benefits of DO appear to be at their highest level, and could reach up to 20 increase in profit. Finally, we demonstrate that by moving from DA to DO, while keeping the price path unchanged, the volatility of the retailer's profit decreases.", "In this paper, we examine the research and results of dynamic pricing policies and their relation to revenue management. The survey is based on a generic revenue management problem in which a perishable and nonrenewable set of resources satisfy stochastic price sensitive demand processes over a finite period of time. In this class of problems, the owner (or the seller) of these resources uses them to produce and offer a menu of final products to the end customers. Within this context, we formulate the stochastic control problem of capacity that the seller faces: How to dynamically set the menu and the quantity of products and their corresponding prices to maximize the total revenue over the selling horizon.", "" ] }
1401.7909
2950350902
Twitter has captured the interest of the scientific community not only for its massive user base and content, but also for its openness in sharing its data. Twitter shares a free 1 sample of its tweets through the "Streaming API", a service that returns a sample of tweets according to a set of parameters set by the researcher. Recently, research has pointed to evidence of bias in the data returned through the Streaming API, raising concern in the integrity of this data service for use in research scenarios. While these results are important, the methodologies proposed in previous work rely on the restrictive and expensive Firehose to find the bias in the Streaming API data. In this work we tackle the problem of finding sample bias without the need for "gold standard" Firehose data. Namely, we focus on finding time periods in the Streaming API data where the trend of a hashtag is significantly different from its trend in the true activity on Twitter. We propose a solution that focuses on using an open data source to find bias in the Streaming API. Finally, we assess the utility of the data source in sparse data situations and for users issuing the same query from different regions.
Twitter's Streaming API has been used throughout the domains of social media and network analysis to generate understanding of how users behave on these platforms. It has been used to collect data for topic modeling @cite_4 @cite_8 , network analysis @cite_12 , and statistical analysis of content @cite_0 , among others. Researchers' reliance upon this data source is significant, and these examples are only the tip of the iceberg.
{ "cite_N": [ "@cite_0", "@cite_4", "@cite_12", "@cite_8" ], "mid": [ "2018165284", "2063904635", "2060009247", "2010273307" ], "abstract": [ "We present TwitterMonitor, a system that performs trend detection over the Twitter stream. The system identifies emerging topics (i.e. 'trends') on Twitter in real time and provides meaningful analytics that synthesize an accurate description of each topic. Users interact with the system by ordering the identified trends using different criteria and submitting their own description for each trend. We discuss the motivation for trend detection over social media streams and the challenges that lie therein. We then describe our approach to trend detection, as well as the architecture of TwitterMonitor. Finally, we lay out our demonstration scenario.", "Social networks such as Facebook, LinkedIn, and Twitter have been a crucial source of information for a wide spectrum of users. In Twitter, popular information that is deemed important by the community propagates through the network. Studying the characteristics of content in the messages becomes important for a number of tasks, such as breaking news detection, personalized message recommendation, friends recommendation, sentiment analysis and others. While many researchers wish to use standard text mining tools to understand messages on Twitter, the restricted length of those messages prevents them from being employed to their full potential. We address the problem of using standard topic models in micro-blogging environments by studying how the models can be trained on the dataset. We propose several schemes to train a standard topic model and compare their quality and effectiveness through a set of carefully designed experiments from both qualitative and quantitative perspectives. 
We show that by training a topic model on aggregated messages we can obtain a higher quality of learned model which results in significantly better performance in two real-world classification problems. We also discuss how the state-of-the-art Author-Topic model fails to model hierarchical relationships between entities in Social Media.", "In this work we developed a surveillance architecture to detect diseases-related postings in social networks using Twitter as an example for a high-traffic social network. Our real-time architecture uses Twitter streaming API to crawl Twitter messages as they are posted. Data mining techniques have been used to index, extract and classify postings. Finally, we evaluate the performance of the classifier with a dataset of public health postings and also evaluate the run-time performance of whole system with respect to latency and throughput.", "Human-generated textual data streams from services such as Twitter increasingly become geo-referenced. The spatial resolution of their coverage improves quickly, making them a promising instrument for sensing various aspects of evolution and dynamics of social systems. This work explores spacetime structures of the topical content of short textual messages in a stream available from Twitter in Ireland. It uses a streaming Latent Dirichlet Allocation topic model trained with an incremental variational Bayes method. The posterior probabilities of the discovered topics are post-processed with a spatial kernel density and subjected to comparative analysis. The identified prevailing topics are often found to be spatially contiguous. We apply Markov-modulated non-homogeneous Poisson processes to quantify a proportion of novelty in the observed abnormal patterns. A combined use of these techniques allows for real-time analysis of the temporal evolution and spatial variability of population's response to various stimuli such as large scale sportive, political or cultural events." ] }
1401.7909
2950350902
Twitter has captured the interest of the scientific community not only for its massive user base and content, but also for its openness in sharing its data. Twitter shares a free 1 sample of its tweets through the "Streaming API", a service that returns a sample of tweets according to a set of parameters set by the researcher. Recently, research has pointed to evidence of bias in the data returned through the Streaming API, raising concern in the integrity of this data service for use in research scenarios. While these results are important, the methodologies proposed in previous work rely on the restrictive and expensive Firehose to find the bias in the Streaming API data. In this work we tackle the problem of finding sample bias without the need for "gold standard" Firehose data. Namely, we focus on finding time periods in the Streaming API data where the trend of a hashtag is significantly different from its trend in the true activity on Twitter. We propose a solution that focuses on using an open data source to find bias in the Streaming API. Finally, we assess the utility of the data source in sparse data situations and for users issuing the same query from different regions.
The bias we focus on in this work is sample bias from Twitter's APIs. The work in @cite_14 compared four commonly studied facets of the Streaming API and Firehose data, looking for evidence of bias in each facet, and obtained widely different results across the facets. First, the authors studied the statistical differences between the two datasets, using correlation to compare the top @math hashtags in each. They find some bias in the occurrence of the top hashtags for low values of @math .
{ "cite_N": [ "@cite_14" ], "mid": [ "1845748792" ], "abstract": [ "Twitter is a social media giant famous for the exchange of short, 140-character messages called \"tweets\". In the scientific community, the microblogging site is known for openness in sharing its data. It provides a glance into its millions of users and billions of tweets through a \"Streaming API\" which provides a sample of all tweets matching some parameters preset by the API user. The API service has been used by many researchers, companies, and governmental institutions that want to extract knowledge in accordance with a diverse array of questions pertaining to social media. The essential drawback of the Twitter API is the lack of documentation concerning what and how much data users get. This leads researchers to question whether the sampled data is a valid representation of the overall activity on Twitter. In this work we embark on answering this question by comparing data collected using Twitter's sampled API service with data collected using the full, albeit costly, Firehose stream that includes every single published tweet. We compare both datasets using common statistical metrics as well as metrics that allow us to compare topics, networks, and locations of tweets. The results of our work will help researchers and practitioners understand the implications of using the Streaming API." ] }
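The hashtag facet of such a comparison — checking how well the top @math hashtags of a sampled collection track those of the full one — can be sketched with a few lines of stdlib Python. This is a toy illustration under invented counts, not the authors' code; `streaming` and `firehose` are hypothetical `Counter` objects standing in for the two collections:

```python
from collections import Counter

def top_n_overlap(counts_a, counts_b, n):
    """Jaccard overlap of the top-n hashtag sets of two collections."""
    top_a = {h for h, _ in counts_a.most_common(n)}
    top_b = {h for h, _ in counts_b.most_common(n)}
    return len(top_a & top_b) / len(top_a | top_b)

def spearman(counts_a, counts_b, n):
    """Spearman rank correlation over the union of both top-n lists.
    Hashtags absent from one collection count as 0 (lowest rank)."""
    tags = sorted({h for h, _ in counts_a.most_common(n)}
                  | {h for h, _ in counts_b.most_common(n)})
    def ranks(counts):
        order = sorted(tags, key=lambda h: -counts.get(h, 0))
        return {h: i for i, h in enumerate(order)}
    ra, rb = ranks(counts_a), ranks(counts_b)
    m = len(tags)
    d2 = sum((ra[h] - rb[h]) ** 2 for h in tags)
    return 1 - 6 * d2 / (m * (m * m - 1))

# invented counts standing in for Streaming API vs. Firehose data
streaming = Counter({"#news": 50, "#sports": 40, "#music": 10, "#tech": 5})
firehose  = Counter({"#news": 500, "#music": 450, "#sports": 400, "#tech": 60})

print(top_n_overlap(streaming, firehose, 3))          # same top-3 set -> 1.0
print(round(spearman(streaming, firehose, 3), 3))     # ranks differ  -> 0.5
```

A high set overlap with a low rank correlation, as in this toy case, is exactly the kind of discrepancy that only shows up for small @math .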
1401.7909
2950350902
Twitter has captured the interest of the scientific community not only for its massive user base and content, but also for its openness in sharing its data. Twitter shares a free 1 sample of its tweets through the "Streaming API", a service that returns a sample of tweets according to a set of parameters set by the researcher. Recently, research has pointed to evidence of bias in the data returned through the Streaming API, raising concern in the integrity of this data service for use in research scenarios. While these results are important, the methodologies proposed in previous work rely on the restrictive and expensive Firehose to find the bias in the Streaming API data. In this work we tackle the problem of finding sample bias without the need for "gold standard" Firehose data. Namely, we focus on finding time periods in the Streaming API data where the trend of a hashtag is significantly different from its trend in the true activity on Twitter. We propose a solution that focuses on using an open data source to find bias in the Streaming API. Finally, we assess the utility of the data source in sparse data situations and for users issuing the same query from different regions.
The authors also compared topical facets of the text by extracting topics with LDA @cite_16 , where they found similar evidence of bias: the topics extracted through LDA from the Streaming API are significantly different from those extracted from the gold-standard Firehose data. The other facet the authors compared was the network structure: they extracted the User @math User retweet network from both sources and compared centrality measures across the two networks. They find that, on average, the Streaming API is able to find the most central users in the Firehose 50
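The topical comparison described above can be illustrated with a small, self-contained sketch: given two topic distributions (e.g., as inferred by LDA from the sampled stream and from the Firehose), the Jensen-Shannon divergence quantifies how far the sample drifts from the gold standard. The distributions below are hypothetical, not taken from the paper.

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence (natural log); assumes q[i] > 0 wherever p[i] > 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js_divergence(p, q):
    """Symmetric Jensen-Shannon divergence between two topic distributions."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical topic proportions from the Streaming API sample vs. the Firehose.
sample_topics = [0.40, 0.30, 0.20, 0.10]
firehose_topics = [0.25, 0.25, 0.25, 0.25]

print(round(js_divergence(sample_topics, sample_topics), 6))  # identical -> 0.0
print(js_divergence(sample_topics, firehose_topics) > 0)      # biased sample -> positive
```

With natural logarithms the Jensen-Shannon divergence is bounded by ln 2, so its value gives a normalized sense of how dissimilar the two topic mixtures are.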
{ "cite_N": [ "@cite_16" ], "mid": [ "1880262756" ], "abstract": [ "We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model." ] }
1401.8038
2952756014
With the rapidly growing demand for cloud services, the need for efficient methods to trade computing resources increases. The commonly used fixed-price model is not always the best approach for trading cloud resources because of its inflexible and static nature. Dynamic trading systems, which make use of market mechanisms, show promise for more efficient resource allocation and pricing in the cloud. However, most existing mechanisms ignore the seller's costs of providing the resources. To address this, we propose a single-sided market mechanism for trading virtual machine instances in the cloud, in which the cloud provider can express reservation prices for the traded cloud services. We investigate the theoretical properties of the proposed mechanism and prove that it is truthful, i.e., buyers have no incentive to lie about their true valuation of the resources. We perform extensive experiments to investigate the impact of the reserve price on the market outcome. Our experiments show that the proposed mechanism yields near-optimal allocations and has a low execution time.
@cite_2 discuss a truthful market mechanism based on greedy heuristics, which have become a fairly popular class of approximate allocation mechanisms for combinatorial domains. Zaman and Grosu @cite_7 and @cite_17 study combinatorial auction mechanisms for VM allocation in the cloud with a single resource provider. They propose greedy mechanisms as well as a mechanism based on linear programming relaxation and randomized rounding, and prove that the greedy mechanisms are truthful while the linear programming mechanism is truthful in expectation. Their work solves a problem similar to ours, but they assume that there is no resource-associated cost (and no resource reservation price), which is hardly realistic. The authors of @cite_16 propose a knowledge-based continuous double auction mechanism to determine the prices of future trades. They design an approximation mechanism based on a greedy heuristic and analyze strategic behavior in the market. Their solution is not suitable for combinatorial requests and does not guarantee truthfulness.
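The greedy winner-determination step that these mechanisms share can be sketched as follows. This shows only the allocation rule (admitting requests in order of bid density); the critical-value payments that make the full mechanism truthful are omitted, and all names and numbers are hypothetical.

```python
def greedy_allocate(requests, capacity):
    """Greedy winner determination: sort requests by bid per requested unit
    (bid density) and admit them while capacity remains.

    requests: list of (user, bundle_size, bid) tuples.
    capacity: total resource units the provider can allocate.
    Returns the set of winning users.
    """
    winners, used = set(), 0
    for user, size, bid in sorted(requests, key=lambda r: r[2] / r[1], reverse=True):
        if used + size <= capacity:
            winners.add(user)
            used += size
    return winners

# Hypothetical single-resource example: 10 VM instances available.
reqs = [("a", 4, 20.0), ("b", 5, 15.0), ("c", 3, 12.0), ("d", 6, 9.0)]
print(greedy_allocate(reqs, 10))  # densities: a=5.0, c=4.0, b=3.0, d=1.5
```

Here "a" (density 5.0) and "c" (density 4.0) fit into the 10 units; "b" and "d" are rejected for lack of remaining capacity.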
{ "cite_N": [ "@cite_16", "@cite_17", "@cite_7", "@cite_2" ], "mid": [ "2030569003", "2047062435", "2156036430", "" ], "abstract": [ "Grid technologies and the related concepts of utility computing and cloud computing enable the dynamic sourcing of computer resources and services, thus allowing enterprises to cut down on hardware and software expenses and to focus on key competencies and processes. Resources are shared across administrative boundaries, e.g. between enterprises and or business units. In this dynamic and inter-organizational setting, scheduling and pricing become key challenges. Market mechanisms show promise for enhancing resource allocation and pricing in grids. Current mechanisms, however, are not adequately able to handle large-scale settings with strategic users and providers who try to benefit from manipulating the mechanism. In this paper, a market-based heuristic for clearing large-scale grid settings is developed. The proposed heuristic and pricing schemes find an interesting match between scalability and strategic behavior.", "A major challenging problem for cloud providers is designing efficient mechanisms for virtual machine (VM) provisioning and allocation. Such mechanisms enable the cloud providers to effectively utilize their available resources and obtain higher profits. Recently, cloud providers have introduced auction-based models for VM provisioning and allocation which allow users to submit bids for their requested VMs. We formulate the dynamic VM provisioning and allocation problem for the auction-based model as an integer program considering multiple types of resources. We then design truthful greedy and optimal mechanisms for the problem such that the cloud provider provisions VMs based on the requests of the winning users and determines their payments. 
We show that the proposed mechanisms are truthful, that is, the users do not have incentives to manipulate the system by lying about their requested bundles of VM instances and their valuations. We perform extensive experiments using real workload traces in order to investigate the performance of the proposed mechanisms. Our proposed mechanisms achieve promising results in terms of revenue for the cloud provider.", "The current cloud computing platforms allocate virtual machine instances to their users through fixed-price allocation mechanisms. We argue that combinatorial auction-based allocation mechanisms are especially efficient over the fixed-price mechanisms since the virtual machine instances are assigned to users having the highest valuation. We formulate the problem of virtual machine allocation in clouds as a combinatorial auction problem and propose two mechanisms to solve it. We perform extensive simulation experiments to compare the two proposed combinatorial auction-based mechanisms with the currently used fixed-price allocation mechanism. Our experiments reveal that the combinatorial auction-based mechanisms can significantly improve the allocation efficiency while generating higher revenue for the cloud providers.", "" ] }
1401.8038
2952756014
With the rapidly growing demand for cloud services, the need for efficient methods to trade computing resources increases. The commonly used fixed-price model is not always the best approach for trading cloud resources because of its inflexible and static nature. Dynamic trading systems, which make use of market mechanisms, show promise for more efficient resource allocation and pricing in the cloud. However, most existing mechanisms ignore the seller's costs of providing the resources. To address this, we propose a single-sided market mechanism for trading virtual machine instances in the cloud, in which the cloud provider can express reservation prices for the traded cloud services. We investigate the theoretical properties of the proposed mechanism and prove that it is truthful, i.e., buyers have no incentive to lie about their true valuation of the resources. We perform extensive experiments to investigate the impact of the reserve price on the market outcome. Our experiments show that the proposed mechanism yields near-optimal allocations and has a low execution time.
@cite_6 consider a periodical combinatorial auction with a single seller. They propose determining resource prices based on a limited English combinatorial auction model, and optimally allocating resources across different timeframes using a genetic algorithm. Their proposed mechanism does not guarantee truthfulness. @cite_4 describe a marketplace for trading cloud resources based on an eBay-like transaction model that supports different services with different levels of job priority. Their marketplace contains an auction system that encourages truthful bidding. Unlike their work, we consider a combinatorial market setting with a single seller and without the requirement to prioritize requests.
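To illustrate how a seller's reservation price enters a truthful mechanism in the simplest possible setting, here is a single-item second-price (Vickrey) auction with a reserve. This is a textbook sketch, not the combinatorial mechanism proposed in the paper; all names are hypothetical.

```python
def second_price_with_reserve(bids, reserve):
    """Single-item sealed-bid auction with a seller reservation price.
    The highest bidder wins and pays max(second-highest bid, reserve);
    there is no sale if every bid is below the reserve.

    bids: dict mapping bidder -> bid value.
    Returns (winner, price), or (None, None) if the item goes unsold.
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, top = ranked[0]
    if top < reserve:
        return None, None
    runner_up = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, max(runner_up, reserve)

print(second_price_with_reserve({"x": 10.0, "y": 7.0}, 8.0))  # ('x', 8.0)
print(second_price_with_reserve({"x": 10.0, "y": 9.0}, 8.0))  # ('x', 9.0)
print(second_price_with_reserve({"x": 5.0}, 8.0))             # (None, None)
```

Because the winner's payment never depends on the winner's own bid, bidding one's true valuation remains a dominant strategy even with the reserve in place, which is the property the combinatorial setting generalizes.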
{ "cite_N": [ "@cite_4", "@cite_6" ], "mid": [ "2018983065", "2070975014" ], "abstract": [ "This paper presents Cloud Bay, an online resource trading and leasing platform for multi-party resource sharing. Following a market-oriented design principle, Cloud Bay provides an abstraction of a shared virtual resource space across multiple administration domains, and features enhanced functionalities for scalable and automatic resource management and efficient service provisioning. Cloud Bay distinguishes itself from existing research and contributes in a number of aspects. First, it leverages scalable network virtualization and self-configurable virtual appliances to facilitate resource federation and parallel application deployment. Second, Cloud Bay adopts an eBay-style transaction model that supports differentiated services with different levels of job priorities. For cost-sensitive users, Cloud Bay implements an efficient matchmaking algorithm based on auction theory and enables opportunistic resource access through preemptive service scheduling. The proposed Cloud Bay platform stands between HPC service sellers and buyers, and offers a comprehensive solution for resource advertising and stitching, transaction management, and application-to-infrastructure mapping. In this paper, we present the design details of Cloud Bay, and discuss lessons and challenges encountered in the implementation process. The proof-of-concept prototype of Cloud Bay is justified through experiments across multiple sites and simulations.", "On account of the resource characteristics under cloud computing environment and the flexibility and availability of applying economic mechanism to resource allocation, a resource allocation model based on the limited English combinatorial auction under cloud computing environment is advanced. An improved periodical auction model is constructed, and then ribbon capacity is adopted to describe the special storage capacity for special users. 
Based on the above, the resource trading price between resource buyers and resource providers is determined based on the limited English combinatorial auction model. In the end, the optimal resource allocation solution is pursued based on genetic algorithm. Simulation results have shown that the proposed model is both feasible and effective, which can maximize the seller total trading amounts as well as reduce the executing time of winner determination." ] }
1401.7304
1712219659
We introduce the Destructive Object Handling (DOH) problem, which models aspects of many real-world allocation problems, such as shipping explosive munitions, scheduling processes in a cluster with fragile nodes, re-using passwords across multiple websites, and quarantining patients during a disease outbreak. In these problems, objects must be assigned to handlers, but each object has a probability of destroying itself and all the other objects allocated to the same handler. The goal is to maximize the expected value of the objects handled successfully. We show that finding the optimal allocation is @math - @math , even if all the handlers are identical. We present an FPTAS when the number of handlers is constant. We note in passing that the same technique also yields a first FPTAS for the weapons-target allocation problem manne_wta with a constant number of targets. We study the structure of DOH problems and find that they have a sort of phase transition -- in some instances it is better to spread risk evenly among the handlers, in others, one handler should be used as a "sacrificial lamb". We show that the problem is solvable in polynomial time if the destruction probabilities depend only on the handler to which an object is assigned; if all the handlers are identical and the objects all have the same value; or if each handler can be assigned at most one object. Finally, we empirically evaluate several heuristics based on a combination of greedy and genetic algorithms. The proposed heuristics return fairly high-quality solutions to very large problem instances (up to 250 objects and 100 handlers) in tens of seconds.
The DOH problem has many applications in the domain of hazardous-material processing and routing @cite_16 . These problems generally deal with the selection of minimum-risk processing locations @cite_15 and transportation routes in networks @cite_8 , @cite_5 -- i.e., finding minimum-cost facility locations and routes that minimize human and material loss in the event of a malfunction incident.
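A minimal sketch of the DOH objective described above: a handler delivers its objects only if none of them self-destructs, and the goal is to maximize total expected value. The function names are hypothetical, and the brute-force search is feasible only for tiny instances (the paper's heuristics scale far beyond it).

```python
from itertools import product

def expected_value(values, p_destroy, assignment, n_handlers):
    """Expected value of an assignment in the DOH setting: a handler's
    objects are delivered only if none of them is destroyed.

    values[i], p_destroy[i]: value and destruction probability of object i.
    assignment[i]: handler index assigned to object i.
    """
    total = 0.0
    for h in range(n_handlers):
        objs = [i for i, a in enumerate(assignment) if a == h]
        survive = 1.0
        for i in objs:
            survive *= 1.0 - p_destroy[i]
        total += survive * sum(values[i] for i in objs)
    return total

def brute_force(values, p_destroy, n_handlers):
    """Exhaustive search over all handler assignments (tiny instances only)."""
    best = max(product(range(n_handlers), repeat=len(values)),
               key=lambda a: expected_value(values, p_destroy, a, n_handlers))
    return best, expected_value(values, p_destroy, best, n_handlers)

# Two equally risky objects, two handlers: splitting the risk (5 + 5 = 10)
# beats pooling them on one handler (0.25 * 20 = 5).
best, val = brute_force([10.0, 10.0], [0.5, 0.5], 2)
print(best, val)
```

This tiny instance sits on the "spread the risk" side of the phase transition the abstract mentions; with one near-worthless but very risky object, the optimum flips to isolating it on a "sacrificial lamb" handler.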
{ "cite_N": [ "@cite_5", "@cite_15", "@cite_16", "@cite_8" ], "mid": [ "2033728116", "2126770502", "2046225382", "2156920570" ], "abstract": [ "Transportation of hazardous materials (hazmats) is a decision problem that has been attracted much attention due to the risk factor involved. A considerable amount of models have been developed that employ single or multiple objective shortest path algorithms minimising the risks for a given origin-destination pair. However in many real life applications (i.e. transportation of gas cylinders), transportation of hazmats calls for the determination of a set of routes used by a fleet of trucks to serve a set of customers, rather than determination of a single optimal route as shortest path algorithms produce. In this paper, we focus on population exposure risk mitigation via production of truck-routes by solving a variant of the Vehicle Routing Problem. For this purpose we employ a single parameter metaheuristic algorithm. A case study of this approach is also demonstrated.", "Undesirable consequences of dangerous goods incidents can be mitigated by quick arrival of specialized response teams at the accident site. We present a novel methodology to determine the optimal design of a specialized team network so as to maximize its ability to respond to such incidents in a region. We show that this problem can be represented via a maximal arc-covering model. We discuss two formulations for the maximal arc-covering problem, a known one and a new one. Through computational experiments, we establish that the known formulation has excessive computational requirements for large-scale problems, whereas the alternative model constitutes a basis for an efficient heuristic. The methodology is applied to assess the emergency response capability to transport incidents, that involve gasoline, in Quebec and Ontario. 
We point out the possibility of a significant improvement via relocation of the existing specialized teams, which are currently stationed at the shipment origins.", "The transport of hazardous materials is an important strategic and tactical decision problem. Risks associated with this activity make transport planning difficult. Although most existing analytical approaches for hazardous materials transport account for risk, there is no agreement among researchers on how to model the associated risks. This paper provides an overview of the prevailing models, and addresses the question \"Does it matter how we quantify transport risk?\" Our empirical analysis on the U.S. road network suggests that different risk models usually select different \"optimal\" paths for a hazmat shipment between a given origin-destination pair. Furthermore, the optimal path for one model could perform very poorly under another model. This suggests that researchers and practitioners must pay considerable attention to the modeling of risks in hazardous materials transport.", "In this paper, we consider the problem of network design for hazardous material transportation where the government designates a network, and the carriers choose the routes on the network. We model the problem as a bilevel network flow formulation and analyze the bilevel design problem by comparing it to three other decision scenarios. The bilevel model is difficult to solve and may be ill-posed. We propose a heuristic solution method that always finds a stable solution. The heuristic exploits the network flow structure at both levels to overcome the difficulty and instability of the bilevel integer programming model. Testing on real data shows that the linearization of the bilevel model fails to find stable solutions and that the heuristic finds lower risk networks in less time. 
Further testing on random instances shows that the heuristically designed networks achieve significant risk reduction over single-level models. The risk is very close to the least risk possible. However, this reduction in risk comes with a significant increase in cost. We extend the bilevel model to account for the cost risk trade-off by including cost in the first-level objective. The biobjective-bilevel model is a rich decision-support tool that allows for the generation of many good solutions to the design problem." ] }
1401.7709
2133291805
We tackle the problem of inferring node labels in a partially labeled graph where each node in the graph has multiple label types and each label type has a large number of possible labels. Our primary example, and the focus of this paper, is the joint inference of label types such as hometown, current city, and employers, for users connected by a social network. Standard label propagation fails to consider the properties of the label types and the interactions between them. Our proposed method, called EdgeExplain, explicitly models these, while still enabling scalable inference under a distributed message-passing architecture. On a billion-node subset of the Facebook social network, EdgeExplain significantly outperforms label propagation for several label types, with lifts of up to 120% for recall@1 and 60% for recall@3.
Many graph-based approaches can be viewed as estimating a function over the nodes of the graph, with the function being close to the observed labels, and smooth (similar) at adjacent nodes. Label propagation @cite_24 @cite_0 uses a quadratic function, but other penalties are also possible @cite_23 @cite_21 @cite_8 . Other approaches modify the random walk interpretation of label propagation @cite_13 @cite_31 . In order to handle a large number of distinct label values, the label assignments can be summarized using count-min sketches @cite_2 . None of the approaches consider interactions between multiple label types, and hence fail to capture the edge formation process in graphs considered here.
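The baseline these methods build on — label propagation with clamped seed labels — can be sketched as a simple majority-vote iteration over a partially labeled graph. This is a toy version that ignores label types, edge weights, and the quadratic-penalty formulation; all names and data are hypothetical.

```python
def label_propagation(adj, seed_labels, iters=20):
    """Iterative label propagation: each unlabeled node repeatedly adopts
    the majority label among its neighbors' current labels.

    adj: dict node -> list of neighbor nodes.
    seed_labels: dict node -> observed label (clamped throughout).
    """
    labels = dict(seed_labels)
    for _ in range(iters):
        updated = dict(labels)
        for node, nbrs in adj.items():
            if node in seed_labels:      # observed labels stay fixed
                continue
            votes = {}
            for n in nbrs:
                if n in labels:
                    votes[labels[n]] = votes.get(labels[n], 0) + 1
            if votes:
                # sorted() makes tie-breaking deterministic
                updated[node] = max(sorted(votes), key=votes.get)
        labels = updated
    return labels

# Tiny hypothetical friendship graph: users 1 and 2 live in "SF", user 4 in "NY".
adj = {1: [3], 2: [3], 3: [1, 2, 4], 4: [3]}
print(label_propagation(adj, {1: "SF", 2: "SF", 4: "NY"}))  # node 3 -> "SF"
```

Node 3 receives two "SF" votes and one "NY" vote, so plain propagation labels it "SF" — illustrating how a single majority rule per label type ignores the cross-type interactions that EdgeExplain models.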
{ "cite_N": [ "@cite_8", "@cite_21", "@cite_24", "@cite_0", "@cite_23", "@cite_2", "@cite_31", "@cite_13" ], "mid": [ "2401715402", "2126523478", "", "1630959083", "2154455818", "2106829288", "1552297528", "2056021151" ], "abstract": [ "", "We consider the problem of labeling a partially labeled graph. This setting may arise in a number of situations from survey sampling to information retrieval to pattern recognition in manifold settings. It is also, especially, of potential practical importance when data is abundant, but labeling is expensive or requires human assistance. Our approach develops a framework for regularization on such graphs parallel to Tikhonov regularization on continuous spaces. The algorithms are very simple and involve solving a single, usually sparse, system of linear equations. Using the notion of algorithmic stability, we derive bounds on the generalization error and relate it to the structural invariants of the graph.", "", "", "We consider the general problem of learning from labeled and unlabeled data, which is often called semi-supervised learning or transductive inference. A principled approach to semi-supervised learning is to design a classifying function which is sufficiently smooth with respect to the intrinsic structure collectively revealed by known labeled and unlabeled points. We present a simple algorithm to obtain such a smooth solution. Our method yields encouraging experimental results on a number of classification problems and demonstrates effective use of unlabeled data.", "Graph-based Semi-supervised learning (SSL) algorithms have been successfully used in a large number of applications. These methods classify initially unlabeled nodes by propagating label information over the structure of graph starting from seed nodes. Graph-based SSL algorithms usually scale linearly with the number of distinct labels (m), and require O(m) space on each node. 
Unfortunately, there exist many applications of practical significance with very large m over large graphs, demanding better space and time complexity. In this paper, we propose MAD-Sketch, a novel graph-based SSL algorithm which compactly stores label distribution on each node using Count-min Sketch, a randomized data structure. We present theoretical analysis showing that under mild conditions, MAD-Sketch can reduce space complexity at each node from O(m) to O(logm), and achieve similar savings in time complexity as well. We support our analysis through experiments on multiple real world datasets. We observe that MAD-Sketch achieves similar performance as existing state-of-the-art graph-based SSL algorithms, while requiring smaller memory footprint and at the same time achieving up to 10x speedup. We find that MAD-Sketch is able to scale to datasets with one million labels, which is beyond the scope of existing graph-based SSL algorithms.", "We propose a new graph-based label propagation algorithm for transductive learning. Each example is associated with a vertex in an undirected graph and a weighted edge between two vertices represents similarity between the two corresponding example. We build on Adsorption, a recently proposed algorithm and analyze its properties. We then state our learning algorithm as a convex optimization problem over multi-label assignments and derive an efficient algorithm to solve this problem. We state the conditions under which our algorithm is guaranteed to converge. We provide experimental evidence on various real-world datasets demonstrating the effectiveness of our algorithm over other algorithms for such problems. We also show that our algorithm can be extended to incorporate additional prior information, and demonstrate it with classifying data where the labels are not mutually exclusive.", "The rapid growth of the number of videos in YouTube provides enormous potential for users to find content of interest to them. 
Unfortunately, given the difficulty of searching videos, the size of the video repository also makes the discovery of new content a daunting task. In this paper, we present a novel method based upon the analysis of the entire user-video graph to provide personalized video suggestions for users. The resulting algorithm, termed Adsorption, provides a simple method to efficiently propagate preference information through a variety of graphs. We extensively test the results of the recommendations on a three month snapshot of live data from YouTube." ] }
1401.7709
2133291805
We tackle the problem of inferring node labels in a partially labeled graph where each node in the graph has multiple label types and each label type has a large number of possible labels. Our primary example, and the focus of this paper, is the joint inference of label types such as hometown, current city, and employers, for users connected by a social network. Standard label propagation fails to consider the properties of the label types and the interactions between them. Our proposed method, called EdgeExplain, explicitly models these, while still enabling scalable inference under a distributed message-passing architecture. On a billion-node subset of the Facebook social network, EdgeExplain significantly outperforms label propagation for several label types, with lifts of up to 120% for recall@1 and 60% for recall@3.
Graph structure has been modeled using latent variables @cite_9 @cite_7 @cite_22 , but with an emphasis on link prediction. However, our goal is to make predictions about each individual user, and such latent features can be arbitrary combinations of user attributes, rather than concrete label types we wish to predict. Other models simultaneously explain the connections between documents as well as their word distributions @cite_15 @cite_30 @cite_20 . While we do not consider the problem of modeling text data, our model permits us to incorporate node attributes, such as group memberships. Finally, the number of distinct label values in our application is very large (on the order of millions), and we suspect that the latent variables would have to have a large dimension to explain the edges in our graph well.
{ "cite_N": [ "@cite_30", "@cite_22", "@cite_7", "@cite_9", "@cite_15", "@cite_20" ], "mid": [ "2124672527", "", "2158535911", "2066459332", "2165636119", "2053354515" ], "abstract": [ "We develop the relational topic model (RTM), a hierarchical model of both network structure and node attributes. We focus on document networks, where the attributes of each document are its words, that is, discrete observations taken from a fixed vocabulary. For each pair of documents, the RTM models their link as a binary random variable that is conditioned on their contents. The model can be used to summarize a network of documents, predict links between them, and predict words within them. We derive efficient inference and estimation algorithms based on variational methods that take advantage of sparsity and scale with the number of links. We evaluate the predictive performance of the RTM for large networks of scientific abstracts, web documents, and geographically tagged news.", "", "As the availability and importance of relational data—such as the friendships summarized on a social networking website—increases, it becomes increasingly important to have good models for such data. The kinds of latent structure that have been considered for use in predicting links in such networks have been relatively limited. In particular, the machine learning community has focused on latent class models, adapting Bayesian nonparametric methods to jointly infer how many latent classes there are while learning which entities belong to each class. We pursue a similar approach with a richer kind of latent variable—latent features—using a Bayesian nonparametric approach to simultaneously infer the number of features at the same time we learn which entities have each feature. Our model combines these inferred features with known covariates in order to perform link prediction. 
We demonstrate that the greater expressiveness of this approach allows us to improve performance on three datasets.", "Network models are widely used to represent relational information among interacting units. In studies of social networks, recent emphasis has been placed on random graph models where the nodes usually represent individual social actors and the edges represent the presence of a specified relation between actors. We develop a class of models where the probability of a relation between actors depends on the positions of individuals in an unobserved “social space.” We make inference for the social space within maximum likelihood and Bayesian frameworks, and propose Markov chain Monte Carlo procedures for making inference on latent positions and the effects of observed covariates. We present analyses of three standard datasets from the social networks literature, and compare the method to an alternative stochastic blockmodeling approach. In addition to improving on model fit for these datasets, our method provides a visual and interpretable model-based spatial representation of social relationships and improv...", "In this work, we address the problem of joint modeling of text and citations in the topic modeling framework. We present two different models called the Pairwise-Link-LDA and the Link-PLSA-LDA models. The Pairwise-Link-LDA model combines the ideas of LDA [4] and Mixed Membership Block Stochastic Models [1] and allows modeling arbitrary link structure. However, the model is computationally expensive, since it involves modeling the presence or absence of a citation (link) between every pair of documents. The second model solves this problem by assuming that the link structure is a bipartite graph. As the name indicates, Link-PLSA-LDA model combines the LDA and PLSA models into a single graphical model. 
Our experiments on a subset of Citeseer data show that both these models are able to predict unseen data better than the baseline model of Erosheva and Lafferty [8], by capturing the notion of topical similarity between the contents of the cited and citing documents. Our experiments on two different data sets on the link prediction task show that the Link-PLSA-LDA model performs the best on the citation prediction task, while also remaining highly scalable. In addition, we also present some interesting visualizations generated by each of the models.", "Hierarchical taxonomies provide a multi-level view of large document collections, allowing users to rapidly drill down to fine-grained distinctions in topics of interest. We show that automatically induced taxonomies can be made more robust by combining text with relational links. The underlying mechanism is a Bayesian generative model in which a latent hierarchical structure explains the observed data --- thus, finding hierarchical groups of documents with similar word distributions and dense network connections. As a nonparametric Bayesian model, our approach does not require pre-specification of the branching factor at each non-terminal, but finds the appropriate level of detail directly from the data. Unlike many prior latent space models of network structure, the complexity of our approach does not grow quadratically in the number of documents, enabling application to networks with more than ten thousand nodes. Experimental results on hypertext and citation network corpora demonstrate the advantages of our hierarchical, multimodal approach." ] }
1401.7146
1486190496
Measurement shows that 85% of TCP flows in the internet are short-lived flows that stay most of their operation in the TCP startup phase. However, many previous studies indicate that the traditional TCP Slow Start algorithm does not perform well, especially in long fat networks. Two obvious problems are known to impact the Slow Start performance, which are the blind initial setting of the Slow Start threshold and the aggressive increase of the probing rate during the startup phase regardless of the buffer sizes along the path. Current efforts focusing on tuning the Slow Start threshold and or probing rate during the startup phase have not been considered very effective, which has prompted an investigation with a different approach. In this paper, we present a novel TCP startup method, called threshold-less slow start or SSthreshless Start, which does not need the Slow Start threshold to operate. Instead, SSthreshless Start uses the backlog status at bottleneck buffer to adaptively adjust probing rate which allows better seizing of the available bandwidth. Comparing to the traditional and other major modified startup methods, our simulation results show that SSthreshless Start achieves significant performance improvement during the startup phase. Moreover, SSthreshless Start scales well with a wide range of buffer size, propagation delay and network bandwidth. Besides, it shows excellent friendliness when operating simultaneously with the currently popular TCP NewReno connections.
It has been shown that router assistance for TCP rate control is effective in achieving high utilization of network bandwidth @cite_44 . Because conditions are measured directly at the routers, this approach provides accurate information on bandwidth availability and utilization, and the role of the rate-probing algorithm can be significantly reduced. Quick-Start @cite_31 and XCP @cite_40 are typical examples of this approach. In Quick-Start, a TCP sender advertises a desired sending rate during the three-way handshake and lets the network (each hop along the path) approve, reject, or reduce the requested rate. This way, a sender can quickly tune to an appropriate rate without the time-consuming probing procedure. XCP, by comparison, provides more fine-grained feedback that lets TCP senders decide their sending rates. In summary, while router-assisted approaches can significantly improve network utilization, especially during the startup phase of a TCP connection, they require special operations in routers, which prevents immediate deployment and limits their attractiveness.
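The benefit of router assistance over blind probing can be made concrete with a back-of-the-envelope sketch: classic Slow Start needs logarithmically many round-trip times of doubling to reach the path's bandwidth-delay product, while a Quick-Start-style sender whose rate is approved during the handshake needs none. The numbers below are hypothetical.

```python
def slow_start_rtts(init_cwnd, target_cwnd):
    """Round trips of blind exponential probing (cwnd doubles every RTT)
    until the congestion window reaches target_cwnd segments."""
    rtts, cwnd = 0, init_cwnd
    while cwnd < target_cwnd:
        cwnd *= 2          # classic Slow Start growth
        rtts += 1
    return rtts

def quick_start_rtts():
    """With per-hop approval of the requested rate during the three-way
    handshake (Quick-Start flavor), no probing round trips are needed."""
    return 0

# Hypothetical long fat network: bandwidth-delay product of 1000 segments.
print(slow_start_rtts(10, 1000))  # 7 RTTs of doubling (10 -> 1280)
print(quick_start_rtts())         # 0
```

On a path with a 200 ms round-trip time, those 7 probing rounds cost about 1.4 seconds before the sender reaches full rate — the startup overhead that dominates the short-lived flows the abstract describes.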
{ "cite_N": [ "@cite_44", "@cite_40", "@cite_31" ], "mid": [ "2135087359", "2103701891", "" ], "abstract": [ "Several works have established links between congestion control in communication networks and feedback control theory. In this paper, following this paradigm, the design of an AQM (active queue management) ensuring the stability of the congestion phenomenon at a router is proposed. To this end, a modified fluid flow model of TCP (transmission control protocol) that takes into account all delays of the topology is introduced. Then, appropriate tools from control theory are used to address the stability issue and to cope with the time-varying nature of the multiple delays. More precisely, the design of the AQM is formulated as a structured state feedback for multiple time delay systems through the quadratic separation framework. The objective of this mechanism is to ensure the regulation of the queue size of the congested router as well as flow rates to a prescribed level. Furthermore, the proposed methodology allows to set arbitrarily the QoS (quality of service) of the communications following through the controlled router. Finally, a numerical example and some simulations support the exposed theory.", "Theory and experiments show that as the per-flow product of bandwidth and latency increases, TCP becomes inefficient and prone to instability, regardless of the queuing scheme. This failing becomes increasingly important as the Internet evolves to incorporate very high-bandwidth optical links and more large-delay satellite links. To address this problem, we develop a novel approach to Internet congestion control that outperforms TCP in conventional environments, and remains efficient, fair, scalable, and stable as the bandwidth-delay product increases. This new eXplicit Control Protocol, XCP, generalizes the Explicit Congestion Notification proposal (ECN). In addition, XCP introduces the new concept of decoupling utilization control from fairness control. This allows a more flexible and analytically tractable protocol design and opens new avenues for service differentiation. Using a control theory framework, we model XCP and demonstrate it is stable and efficient regardless of the link capacity, the round trip delay, and the number of sources. Extensive packet-level simulations show that XCP outperforms TCP in both conventional and high bandwidth-delay environments. Further, XCP achieves fair bandwidth allocation, high utilization, small standing queue size, and near-zero packet drops, with both steady and highly varying traffic. Additionally, the new protocol does not maintain any per-flow state in routers and requires few CPU cycles per packet, which makes it implementable in high-speed routers.", "" ] }
1401.6399
2951870329
Sorted lists of integers are commonly used in inverted indexes and database systems. They are often compressed in memory. We can use the SIMD instructions available in common processors to boost the speed of integer compression schemes. Our S4-BP128-D4 scheme uses as little as 0.7 CPU cycles per decoded integer while still providing state-of-the-art compression. However, if the subsequent processing of the integers is slow, the effort spent on optimizing decoding speed can be wasted. To show that it does not have to be so, we (1) vectorize and optimize the intersection of posting lists; (2) introduce the SIMD Galloping algorithm. We exploit the fact that one SIMD instruction can compare 4 pairs of integers at once. We experiment with two TREC text collections, GOV2 and ClueWeb09 (Category B), using logs from the TREC million-query track. We show that using only the SIMD instructions ubiquitous in all modern CPUs, our techniques for conjunctive queries can double the speed of a state-of-the-art approach.
For an exhaustive review of fast 32-bit integer compression techniques, we refer the reader to Lemire and Boytsov @cite_26 . Their main finding is that schemes that compress integers in large ( @math ) blocks with minimal branching are faster than other approaches, especially when using SIMD instructions. They reported using fewer than 1.5 CPU cycles per 32-bit integer on a 2011-era Intel processor. In comparison, @cite_25 also proposed compression schemes optimized for SIMD instructions on CPUs, but they reported using at least 2.2 CPU cycles per 32-bit integer on a 2010 Intel processor.
{ "cite_N": [ "@cite_26", "@cite_25" ], "mid": [ "1791987072", "1985136582" ], "abstract": [ "In many important applications, such as search engines and relational database systems, data are stored in the form of arrays of integers. Encoding and, most importantly, decoding of these arrays consumes considerable CPU time. Therefore, substantial effort has been made to reduce costs associated with compression and decompression. In particular, researchers have exploited the superscalar nature of modern processors and single-instruction, multiple-data (SIMD) instructions. Nevertheless, we introduce a novel vectorized scheme called SIMD-BP128* that improves over previously proposed vectorized approaches. It is nearly twice as fast as the previously fastest schemes on desktop processors (varint-G8IU and PFOR). At the same time, SIMD-BP128* saves up to 2 bits/int. For even better compression, we propose another new vectorized scheme, SIMD-FastPFOR, that has a compression ratio within 10% of a state-of-the-art scheme (Simple-8b) while being two times faster during decoding. Copyright © 2013 John Wiley & Sons, Ltd.", "Powerful SIMD instructions in modern processors offer an opportunity for greater search performance. In this paper, we apply these instructions to decoding search engine posting lists. We start by exploring variable-length integer encoding formats used to represent postings. We define two properties, byte-oriented and byte-preserving, that characterize many formats of interest. Based on their common structure, we define a taxonomy that classifies encodings along three dimensions, representing the way in which data bits are stored and additional bits are used to describe the data. Using this taxonomy, we discover new encoding formats, some of which are particularly amenable to SIMD-based decoding. We present generic SIMD algorithms for decoding these formats. We also extend these algorithms to the most common traditional encoding format. Our experiments demonstrate that SIMD-based decoding algorithms are up to 3 times faster than non-SIMD algorithms." ] }
1401.6399
2951870329
Sorted lists of integers are commonly used in inverted indexes and database systems. They are often compressed in memory. We can use the SIMD instructions available in common processors to boost the speed of integer compression schemes. Our S4-BP128-D4 scheme uses as little as 0.7 CPU cycles per decoded integer while still providing state-of-the-art compression. However, if the subsequent processing of the integers is slow, the effort spent on optimizing decoding speed can be wasted. To show that it does not have to be so, we (1) vectorize and optimize the intersection of posting lists; (2) introduce the SIMD Galloping algorithm. We exploit the fact that one SIMD instruction can compare 4 pairs of integers at once. We experiment with two TREC text collections, GOV2 and ClueWeb09 (Category B), using logs from the TREC million-query track. We show that using only the SIMD instructions ubiquitous in all modern CPUs, our techniques for conjunctive queries can double the speed of a state-of-the-art approach.
@cite_23 also carried out an extensive experimental evaluation. On synthetic data using a uniform distribution, they found that Baeza-Yates' algorithm @cite_3 was faster than SvS with galloping (by about 30%). However, on real data (e.g., TREC GOV2), SvS with galloping was superior to most alternatives by a wide margin (e.g., @math faster).
{ "cite_N": [ "@cite_3", "@cite_23" ], "mid": [ "1556741196", "2094154930" ], "abstract": [ "This paper introduces a simple intersection algorithm for two sorted sequences that is fast on average. It is related to the multiple searching problem and to merging. We present the worst and average case analysis, showing that in the former, the complexity nicely adapts to the smallest list size. In the latter case, it performs fewer comparisons than the total number of elements on both inputs when n = αm (α > 1). Finally, we show its application to fast query processing in Web search engines, where large intersections, or differences, must be performed fast.", "The intersection of large ordered sets is a common problem in the context of the evaluation of boolean queries to a search engine. In this article, we propose several improved algorithms for computing the intersection of sorted arrays, and in particular for searching sorted arrays in the intersection context. We perform an experimental comparison with the algorithms from the previous studies from Demaine, Lopez-Ortiz, and Munro [ALENEX 2001] and from Baeza-Yates and Salinger [SPIRE 2005]; in addition, we implement and test the intersection algorithm from Barbay and Kenyon [SODA 2002] and its randomized variant [SAGA 2003]. We consider both the random data set from Baeza-Yates and Salinger, the Google queries used by , a corpus provided by Google, and a larger corpus from the TREC Terabyte 2006 efficiency query stream, along with its own query log. We measure the performance both in terms of the number of comparisons and searches performed, and in terms of the CPU time on two different architectures. Our results confirm or improve the results from both previous studies in their respective context (comparison model on real data, and CPU measures on random data) and extend them to new contexts. In particular, we show that value-based search algorithms perform well in posting lists in terms of the number of comparisons performed." ] }
1401.6399
2951870329
Sorted lists of integers are commonly used in inverted indexes and database systems. They are often compressed in memory. We can use the SIMD instructions available in common processors to boost the speed of integer compression schemes. Our S4-BP128-D4 scheme uses as little as 0.7 CPU cycles per decoded integer while still providing state-of-the-art compression. However, if the subsequent processing of the integers is slow, the effort spent on optimizing decoding speed can be wasted. To show that it does not have to be so, we (1) vectorize and optimize the intersection of posting lists; (2) introduce the SIMD Galloping algorithm. We exploit the fact that one SIMD instruction can compare 4 pairs of integers at once. We experiment with two TREC text collections, GOV2 and ClueWeb09 (Category B), using logs from the TREC million-query track. We show that using only the SIMD instructions ubiquitous in all modern CPUs, our techniques for conjunctive queries can double the speed of a state-of-the-art approach.
Culpepper and Moffat similarly found that SvS with galloping was the fastest @cite_16 , though their own max algorithm was fast as well. They found that in some specific instances (for queries containing 9 or more terms) a technique similar to galloping (interpolative search) was slightly better (by less than 10%).
{ "cite_N": [ "@cite_16" ], "mid": [ "1984614894" ], "abstract": [ "Conjunctive Boolean queries are a key component of modern information retrieval systems, especially when Web-scale repositories are being searched. A conjunctive query q is equivalent to a |q|-way intersection over ordered sets of integers, where each set represents the documents containing one of the terms, and each integer in each set is an ordinal document identifier. As is the case with many computing applications, there is tension between the way in which the data is represented, and the ways in which it is to be manipulated. In particular, the sets representing index data for typical document collections are highly compressible, but are processed using random access techniques, meaning that methods for carrying out set intersections must be alert to issues to do with access patterns and data representation. Our purpose in this article is to explore these trade-offs, by investigating intersection techniques that make use of both uncompressed “integer” representations, as well as compressed arrangements. We also propose a simple hybrid method that provides both compact storage, and also faster intersection computations for conjunctive querying than is possible even with uncompressed representations." ] }
1401.6399
2951870329
Sorted lists of integers are commonly used in inverted indexes and database systems. They are often compressed in memory. We can use the SIMD instructions available in common processors to boost the speed of integer compression schemes. Our S4-BP128-D4 scheme uses as little as 0.7 CPU cycles per decoded integer while still providing state-of-the-art compression. However, if the subsequent processing of the integers is slow, the effort spent on optimizing decoding speed can be wasted. To show that it does not have to be so, we (1) vectorize and optimize the intersection of posting lists; (2) introduce the SIMD Galloping algorithm. We exploit the fact that one SIMD instruction can compare 4 pairs of integers at once. We experiment with two TREC text collections, GOV2 and ClueWeb09 (Category B), using logs from the TREC million-query track. We show that using only the SIMD instructions ubiquitous in all modern CPUs, our techniques for conjunctive queries can double the speed of a state-of-the-art approach.
Kane and Tompa improved on Culpepper and Moffat's approach by adding auxiliary data structures to skip over large blocks of compressed values (256 integers) during the computation of the intersection @cite_20 . Their good results contrast with Culpepper and Moffat's finding that skipping is counterproductive when using bitmaps @cite_16 .
{ "cite_N": [ "@cite_16", "@cite_20" ], "mid": [ "1984614894", "2084965869" ], "abstract": [ "Conjunctive Boolean queries are a key component of modern information retrieval systems, especially when Web-scale repositories are being searched. A conjunctive query q is equivalent to a |q|-way intersection over ordered sets of integers, where each set represents the documents containing one of the terms, and each integer in each set is an ordinal document identifier. As is the case with many computing applications, there is tension between the way in which the data is represented, and the ways in which it is to be manipulated. In particular, the sets representing index data for typical document collections are highly compressible, but are processed using random access techniques, meaning that methods for carrying out set intersections must be alert to issues to do with access patterns and data representation. Our purpose in this article is to explore these trade-offs, by investigating intersection techniques that make use of both uncompressed “integer” representations, as well as compressed arrangements. We also propose a simple hybrid method that provides both compact storage, and also faster intersection computations for conjunctive querying than is possible even with uncompressed representations.", "This paper examines the space-time performance of in-memory conjunctive list intersection algorithms, as used in search engines, where integers represent document identifiers. We demonstrate that the combination of bitvectors, large skips, delta compressed lists and URL ordering produces superior results to using skips or bitvectors alone. We define semi-bitvectors, a new partial bitvector data structure that stores the front of the list using a bitvector and the remainder using skips and delta compression. To make it particularly effective, we propose that documents be ordered so as to skew the postings lists to have dense regions at the front. This can be accomplished by grouping documents by their size in a descending manner and then reordering within each group using URL ordering. In each list, the division point between bitvector and delta compression can occur at any group boundary. We explore the performance of semi-bitvectors using the GOV2 dataset for various numbers of groups, resulting in significant space-time improvements over existing approaches. Semi-bitvectors do not directly support ranking. Indeed, bitvectors are not believed to be useful for ranking based search systems, because frequencies and offsets cannot be included in their structure. To refute this belief, we propose several approaches to improve the performance of ranking-based search systems using bitvectors, and leave their verification for future work. These proposals suggest that bitvectors, and more particularly semi-bitvectors, warrant closer examination by the research community." ] }