| aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1404.2266
|
2003170148
|
The performance of cluster computing depends on how concurrent jobs share multiple data center resource types like CPU, RAM and disk storage. Recent research has discussed efficiency and fairness requirements and identified a number of desirable scheduling objectives including so-called dominant resource fairness (DRF). We argue here that proportional fairness (PF), long recognized as a desirable objective in sharing network bandwidth between ongoing flows, is preferable to DRF. The superiority of PF is manifest under the realistic modelling assumption that the population of jobs in progress is a stochastic process. In random traffic the strategy-proof property of DRF proves unimportant while PF is shown by analysis and simulation to offer a significantly better efficiency-fairness tradeoff.
|
A number of authors have extended the DRF concept introduced by Ghodsi et al. @cite_4 . Parkes et al. @cite_7 generalize DRF, notably to account for per-job sharing weights. Psomas and Schwartz @cite_8 take account of the fact that task resources must fit into a single machine, bringing bin packing considerations to the allocation problem. Wang et al. @cite_3 further generalize this approach by accounting for heterogeneous server capacity limits. Bhattacharya et al. @cite_14 add the notion of hierarchical scheduling to DRF: cluster resource allocations also account for shares attributed to the departments that own particular jobs. It is noteworthy that DRF is now implemented in the Hadoop Next Generation Fair Scheduler (http://hadoop.apache.org/docs/r2.3.0/hadoop-yarn/hadoop-yarn-site/FairScheduler.html). Lastly, we note the proposal to use DRF in a different application setting, namely sharing middlebox resources in software routers @cite_0 @cite_18 .
|
{
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_4",
"@cite_7",
"@cite_8",
"@cite_3",
"@cite_0"
],
"mid": [
"",
"2110560130",
"1890643295",
"2155053108",
"",
"2032723264",
"2113551315"
],
"abstract": [
"",
"There has been a recent industrial effort to develop multi-resource hierarchical schedulers. However, the existing implementations have some shortcomings in that they might leave resources unallocated or starve certain jobs. This is because the multi-resource setting introduces new challenges for hierarchical scheduling policies. We provide an algorithm, which we implement in Hadoop, that generalizes the most commonly used multi-resource scheduler, DRF [1], to support hierarchies. Our evaluation shows that our proposed algorithm, H-DRF, avoids the starvation and resource inefficiencies of the existing open-source schedulers and outperforms slot scheduling.",
"We consider the problem of fair resource allocation in a system containing different resource types, where each user may have different demands for each resource. To address this problem, we propose Dominant Resource Fairness (DRF), a generalization of max-min fairness to multiple resource types. We show that DRF, unlike other possible policies, satisfies several highly desirable properties. First, DRF incentivizes users to share resources, by ensuring that no user is better off if resources are equally partitioned among them. Second, DRF is strategy-proof, as a user cannot increase her allocation by lying about her requirements. Third, DRF is envy-free, as no user would want to trade her allocation with that of another user. Finally, DRF allocations are Pareto efficient, as it is not possible to improve the allocation of a user without decreasing the allocation of another user. We have implemented DRF in the Mesos cluster resource manager, and show that it leads to better throughput and fairness than the slot-based fair sharing schemes in current cluster schedulers.",
"We study the problem of allocating multiple resources to agents with heterogeneous demands. Technological advances such as cloud computing and data centers provide a new impetus for investigating this problem under the assumption that agents demand the resources in fixed proportions, known in economics as Leontief preferences. In a recent paper, Ghodsi et al. [2011] introduced the dominant resource fairness (DRF) mechanism, which was shown to possess highly desirable theoretical properties under Leontief preferences. We extend their results in three directions. First, we show that DRF generalizes to more expressive settings, and leverage a new technical framework to formally extend its guarantees. Second, we study the relation between social welfare and properties such as truthfulness; DRF performs poorly in terms of social welfare, but we show that this is an unavoidable shortcoming that is shared by every mechanism that satisfies one of three basic properties. Third, and most importantly, we study a realistic setting that involves indivisibilities. We chart the boundaries of the possible in this setting, contributing a new relaxed notion of fairness and providing both possibility and impossibility results.",
"",
"We study the multi-resource allocation problem in cloud computing systems where the resource pool is constructed from a large number of heterogeneous servers, representing different points in the configuration space of resources such as processing, memory, and storage. We design a multi-resource allocation mechanism, called DRFH, that generalizes the notion of Dominant Resource Fairness (DRF) from a single server to multiple heterogeneous servers. DRFH provides a number of highly desirable properties. With DRFH, no user prefers the allocation of another user; no one can improve its allocation without decreasing that of the others; and more importantly, no user has an incentive to lie about its resource demand. As a direct application, we design a simple heuristic that implements DRFH in real-world systems. Large-scale simulations driven by Google cluster traces show that DRFH significantly outperforms the traditional slot-based scheduler, leading to much higher resource utilization with substantially shorter job completion times.",
"Middleboxes are ubiquitous in today's networks and perform a variety of important functions, including IDS, VPN, firewalling, and WAN optimization. These functions differ vastly in their requirements for hardware resources (e.g., CPU cycles and memory bandwidth). Thus, depending on the functions they go through, different flows can consume different amounts of a middlebox's resources. While there is much literature on weighted fair sharing of link bandwidth to isolate flows, it is unclear how to schedule multiple resources in a middlebox to achieve similar guarantees. In this paper, we analyze several natural packet scheduling algorithms for multiple resources and show that they have undesirable properties. We propose a new algorithm, Dominant Resource Fair Queuing (DRFQ), that retains the attractive properties that fair sharing provides for one resource. In doing so, we generalize the concept of virtual time in classical fair queuing to multi-resource settings. The resulting algorithm is also applicable in other contexts where several resources need to be multiplexed in the time domain."
]
}
|
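To make the DRF mechanism of @cite_4 concrete, here is a minimal sketch of its progressive-filling allocation, using the canonical two-user example from that paper (9 CPUs and 18 GB of memory; user A demands 1 CPU and 4 GB per task, user B demands 3 CPUs and 1 GB). The function name and the one-task-at-a-time granting loop are illustrative simplifications, not the production Mesos scheduler.

```python
# Sketch of Dominant Resource Fairness (DRF) via progressive filling:
# repeatedly grant one task to the user with the smallest dominant share.
def drf(capacity, demands):
    n, m = len(demands), len(capacity)
    tasks = [0] * n
    used = [0.0] * m
    # dominant share contributed by one task of each user
    dom = [max(d[r] / capacity[r] for r in range(m)) for d in demands]
    while True:
        # user currently holding the smallest dominant share
        i = min(range(n), key=lambda u: tasks[u] * dom[u])
        # stop when granting one more task would exceed some capacity
        if any(used[r] + demands[i][r] > capacity[r] for r in range(m)):
            break
        for r in range(m):
            used[r] += demands[i][r]
        tasks[i] += 1
    return tasks

print(drf([9, 18], [[1, 4], [3, 1]]))  # → [3, 2]
```

In the canonical example both users end up with a dominant share of 2/3 (A: 12 of 18 GB, B: 6 of 9 CPUs), which is the equalization DRF aims for.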
1404.2266
|
2003170148
|
The performance of cluster computing depends on how concurrent jobs share multiple data center resource types like CPU, RAM and disk storage. Recent research has discussed efficiency and fairness requirements and identified a number of desirable scheduling objectives including so-called dominant resource fairness (DRF). We argue here that proportional fairness (PF), long recognized as a desirable objective in sharing network bandwidth between ongoing flows, is preferable to DRF. The superiority of PF is manifest under the realistic modelling assumption that the population of jobs in progress is a stochastic process. In random traffic the strategy-proof property of DRF proves unimportant while PF is shown by analysis and simulation to offer a significantly better efficiency-fairness tradeoff.
|
There is an abundant literature on sharing network bandwidth. PF and, more generally, the idea that bandwidth sharing between a fixed set of flows should maximize a sum of per-flow utilities were notions introduced by Kelly et al. in 1998 @cite_11 . In that work, and in subsequent contributions by many authors on network utility maximization, the population of flows in progress is assumed fixed. We believe this to be inappropriate, just as the assumption of a fixed population of jobs is inappropriate when evaluating cluster sharing. It makes little sense to measure the utility of a flow as a function of its rate (e.g., the log function for PF) once one recognizes that the population of flows in progress, and consequently the rate of any ongoing flow, changes rapidly and significantly throughout the lifetime of that flow.
|
{
"cite_N": [
"@cite_11"
],
"mid": [
"2159715570"
],
"abstract": [
"This paper analyses the stability and fairness of two classes of rate control algorithm for communication networks. The algorithms provide natural generalisations to large-scale networks of simple additive increase multiplicative decrease schemes, and are shown to be stable about a system optimum characterised by a proportional fairness criterion. Stability is established by showing that, with an appropriate formulation of the overall optimisation problem, the network's implicit objective function provides a Lyapunov function for the dynamical system defined by the rate control algorithm. The network's optimisation problem may be cast in primal or dual form: this leads naturally to two classes of algorithm, which may be interpreted in terms of either congestion indication feedback signals or explicit rates based on shadow prices. Both classes of algorithm may be generalised to include routing control, and provide natural implementations of proportionally fair pricing."
]
}
|
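As a small illustration of proportional fairness in the sense of Kelly et al., the following sketch computes the PF allocation on the classic two-link linear network: one long flow crosses both unit-capacity links and one short flow uses each link. PF maximizes the sum of log rates subject to the capacity constraints; the reduction to a single variable by symmetry and the ternary-search solver are illustrative choices, not taken from the cited work.

```python
import math

def pf_linear_network():
    # By symmetry both short flows get rate x and the long flow 1 - x,
    # so PF reduces to maximizing f(x) = log(1 - x) + 2*log(x) on (0, 1).
    f = lambda x: math.log(1 - x) + 2 * math.log(x)
    lo, hi = 1e-9, 1.0 - 1e-9
    for _ in range(200):  # ternary search on the concave objective
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if f(m1) < f(m2):
            lo = m1
        else:
            hi = m2
    x = (lo + hi) / 2
    return 1 - x, x  # (rate of long flow, rate of each short flow)

long_rate, short_rate = pf_linear_network()
print(round(long_rate, 3), round(short_rate, 3))  # → 0.333 0.667
```

The long flow, which consumes resources on both links, receives 1/3 while each short flow receives 2/3 — the penalty on multi-resource flows that distinguishes PF from max-min fairness.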
1404.2266
|
2003170148
|
The performance of cluster computing depends on how concurrent jobs share multiple data center resource types like CPU, RAM and disk storage. Recent research has discussed efficiency and fairness requirements and identified a number of desirable scheduling objectives including so-called dominant resource fairness (DRF). We argue here that proportional fairness (PF), long recognized as a desirable objective in sharing network bandwidth between ongoing flows, is preferable to DRF. The superiority of PF is manifest under the realistic modelling assumption that the population of jobs in progress is a stochastic process. In random traffic the strategy-proof property of DRF proves unimportant while PF is shown by analysis and simulation to offer a significantly better efficiency-fairness tradeoff.
|
The study of bandwidth sharing under a dynamic traffic model also began in the late 1990s @cite_21 . The most significant results derived over the ensuing years are summarized in the paper by Bonald et al. @cite_10 . It turns out that PF brings significantly better flow completion time performance than max-min fairness for a network with a mix of wired and wireless links (e.g., see Fig. 17.14 in @cite_12 ). Flows in such networks require unequal resource shares, like jobs in the compute cluster, so these bandwidth sharing results anticipate the excellent performance of PF demonstrated here.
|
{
"cite_N": [
"@cite_21",
"@cite_10",
"@cite_12"
],
"mid": [
"1529132671",
"2084765981",
"83369850"
],
"abstract": [
"We consider the performance of a network like the Internet handling so-called elastic traffic where the rate of flows adjusts to fill available bandwidth. Realized throughput depends both on the way bandwidth is shared and on the random nature of traffic. We assume traffic consists of point to point transfers of individual documents of finite size arriving according to a Poisson process. Notable results are that weighted sharing has limited impact on perceived quality of service and that discrimination in favour of short documents leads to considerably better performance than fair sharing. In a linear network, max-min fairness is preferable to proportional fairness under random traffic while the converse is true under the assumption of a static configuration of persistent flows. Admission control is advocated as a necessary means to maintain goodput in case of traffic overload.",
"We compare the performance of three usual allocations, namely max-min fairness, proportional fairness and balanced fairness, in a communication network whose resources are shared by a random number of data flows. The model consists of a network of processor-sharing queues. The vector of service rates, which is constrained by some compact, convex capacity set representing the network resources, is a function of the number of customers in each queue. This function determines the way network resources are allocated. We show that this model is representative of a rich class of wired and wireless networks. We give in this general framework the stability condition of max-min fairness, proportional fairness and balanced fairness and compare their performance on a number of toy networks.",
"In packet-switched networks, resources are typically shared by a dynamic set of data flows. This dynamic resource sharing can be represented by a queueing network with state-dependent service rates. For a specific resource allocation, which we refer to as balanced fairness, the corresponding queueing network is a Whittle network and has an explicit stationary distribution. We give some key properties satisfied by balanced fairness and compare the resulting throughput performance to those obtained under the max-min fair and proportional fair allocations."
]
}
|
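For comparison with PF on the same two-link linear network, max-min fairness can be sketched via progressive filling: all rates rise together, and a flow's rate freezes as soon as a link on its route saturates. The step-based discretization below is an illustrative approximation of that procedure, not an exact algorithm.

```python
# Max-min fairness by progressive filling. Each flow's route is a set of
# link indices; capacity[l] is the capacity of link l.
def max_min(routes, capacity, step=1e-4):
    rates = [0.0] * len(routes)
    frozen = [False] * len(routes)
    while not all(frozen):
        for i in range(len(routes)):
            if not frozen[i]:
                rates[i] += step  # all active flows grow together
        for l, c in enumerate(capacity):
            load = sum(rates[i] for i, r in enumerate(routes) if l in r)
            if load >= c - 1e-9:  # link saturated: freeze its flows
                for i, r in enumerate(routes):
                    if l in r:
                        frozen[i] = True
    return rates

# flow 0 uses links {0, 1}; flows 1 and 2 use one link each
rates = max_min([{0, 1}, {0}, {1}], [1.0, 1.0])
print([round(r, 2) for r in rates])  # → [0.5, 0.5, 0.5]
```

Here max-min gives every flow 1/2 (total throughput 3/2), whereas PF gives the two-link flow 1/3 and each short flow 2/3 (total 5/3) — a toy instance of the efficiency-fairness tradeoff discussed in the surrounding text.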
1404.2348
|
2951508632
|
In secondary spectrum trading markets, auctions are widely used by spectrum holders (SHs) to redistribute their unused channels to secondary wireless service providers (WSPs). As sellers, the SHs design proper auction schemes to stimulate more participants and maximize the revenue from the auction. As buyers, the WSPs determine the bidding strategies in the auction to better serve their end users. In this paper, we consider a three-layered spectrum trading market consisting of the SH, the WSPs and the end users. We jointly study the strategies of the three parties. The SH determines the auction scheme and spectrum supplies to optimize its revenue. The WSPs have flexible bidding strategies in terms of both demands and valuations considering the strategies of the end users. We design FlexAuc, a novel auction mechanism for this market to enable dynamic supplies and demands in the auction. We prove theoretically that FlexAuc not only maximizes the social welfare but also preserves other nice properties such as truthfulness and computational tractability.
|
Most state-of-the-art mechanisms assume that buyers can claim at most one channel, and therefore cannot satisfy flexible demands from buyers. The single-seller multi-buyer auction with homogeneous channels has been studied extensively. VERITAS @cite_13 allows users to buy channels based on their demands and the spectrum owner to maximize revenue with spectrum reuse. In @cite_3 the authors propose a VCG auction to maximize the expected revenue of the seller and a suboptimal auction to reduce the complexity for practical purposes. Double auction mechanisms have been studied for the multi-seller multi-buyer case. The McAfee mechanism @cite_0 was proposed for trading homogeneous items in a double auction. Much follow-up work has been done since then @cite_14 @cite_18 @cite_16 @cite_10 .
|
{
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_10",
"@cite_3",
"@cite_0",
"@cite_16",
"@cite_13"
],
"mid": [
"2021314496",
"2129791350",
"2100999505",
"2087734489",
"2138848780",
"2023260531",
"2171090088"
],
"abstract": [
"With the growing deployment of wireless communication technologies, radio spectrum is becoming a scarce resource. Thus, mechanisms to efficiently allocate the available spectrum are of interest. In this paper, we model the radio spectrum allocation problem as a sealed-bid reserve auction, and propose SMALL, which is a Strategy-proof Mechanism for radio spectrum ALLocation. Furthermore, we extend SMALL to adapt to multiradio spectrum buyers, which can bid for more than one radio. We evaluate SMALL with simulations. Simulation results show that SMALL has good performance in median to large scale spectrum auctions.",
"We design truthful double spectrum auctions where multiple parties can trade spectrum based on their individual needs. Open, market-based spectrum trading motivates existing spectrum owners (as sellers) to lease their selected idle spectrum to new spectrum users, and provides new users (as buyers) the spectrum they desperately need. The most significant challenge is how to make the auction economic-robust (truthful in particular) while enabling spectrum reuse to improve spectrum utilization. Unfortunately, existing designs either do not consider spectrum reuse or become untruthful when applied to double spectrum auctions. We address this challenge by proposing TRUST, a general framework for truthful double spectrum auctions. TRUST takes as input any reusability-driven spectrum allocation algorithm, and applies a novel winner determination and pricing mechanism to achieve truthfulness and other economic properties while significantly improving spectrum utilization. To our best knowledge, TRUST is the first solution for truthful double spectrum auctions that enable spectrum reuse. Our results show that economic factors introduce a tradeoff between spectrum efficiency and economic robustness. TRUST makes an important contribution on enabling spectrum reuse to minimize such tradeoff.",
"Auction is widely applied in wireless communication for spectrum allocation. Most of prior works have assumed that all spectrums are identical. In reality, however, spectrums provided by different owners have distinctive characteristics in both spatial and frequency domains. Spectrum availability also varies in different geo-locations. Furthermore, frequency diversity may cause non-identical conflict relationships among spectrum buyers since different frequencies have distinct communication ranges. Under such a scenario, existing spectrum auction schemes cannot provide truthfulness or efficiency. In this paper, we propose a Truthful double Auction mechanism for HEterogeneous Spectrum, called TAHES, which allows buyers to explicitly express their personalized preferences for heterogeneous spectrums and also addresses the problem of interference graph variation. We prove that TAHES has nice economic properties including truthfulness, individual rationality and budget balance. Results from extensive simulation studies demonstrate the truthfulness, effectiveness and efficiency of TAHES.",
"Spectrum is a critical yet scarce resource and it has been shown that dynamic spectrum access can significantly improve spectrum utilization. To achieve this, it is important to incentivize the primary license holders to open up their under-utilized spectrum for sharing. In this paper we present a secondary spectrum market where a primary license holder can sell access to its unused or under-used spectrum resources in the form of certain fine-grained spectrum-space-time unit. Secondary wireless service providers can purchase such contracts to deploy new service, enhance their existing service, or deploy ad hoc service to meet flash crowds demand. Within the context of this market, we investigate how to use auction mechanisms to allocate and price spectrum resources so that the primary license holder's revenue is maximized. We begin by classifying a number of alternative auction formats in terms of spectrum demand. We then study a specific auction format where secondary wireless service providers have demands for fixed locations (cells). We propose an optimal auction based on the concept of virtual valuation. Assuming the knowledge of valuation distributions, the optimal auction uses the Vickrey-Clarke-Groves (VCG) mechanism to maximize the expected revenue while enforcing truthfulness. To reduce the computational complexity, we further design a truthful suboptimal auction with polynomial time complexity. It uses a monotone allocation and critical value payment to enforce truthfulness. Simulation results show that this suboptimal auction can generate stable expected revenue.",
"A double auction mechanism that provides dominant strategies for both buyers and sellers is analyzed. This mechanism satisfies the 1/n convergence to efficiency of the buyer's bid double auction. In addition, the mechanism always produces full information first best prices; the inefficiency arises because the least valuable profitable trade may be prohibited by the mechanism. The mechanism has an oral implementation utilizing bid and asked prices.",
"On one hand, cooperative communication has been gaining more and more popularity since it has great potential to increase the capacity of wireless networks. On the other hand, the applications of cooperative communication technology are rarely seen in reality, even in some scenarios where the demands for bandwidth-hungry applications have pushed the system designers to develop innovative network solutions. A main obstacle lying between the potential capability of channel capacity improvement and the wide adoption of cooperative communication is the lack of incentives for the participating wireless nodes to serve as relay nodes. Hence, in this paper, we design TASC, an auction scheme for the cooperative communications, where wireless node can trade relay services. TASC makes an important contribution of maintaining truthfulness while fulfilling other design objectives. We show analytically that TASC is truthful and has polynomial time complexity. Extensive experiments show that TASC can achieve multiple economic properties without significant performance degradation compared with pure relay assignment algorithms.",
"Market-driven dynamic spectrum auctions can drastically improve the spectrum availability for wireless networks struggling to obtain additional spectrum. However, they face significant challenges due to the fear of market manipulation. A truthful or strategy-proof spectrum auction eliminates the fear by enforcing players to bid their true valuations of the spectrum. Hence bidders can avoid the expensive overhead of strategizing over others and the auctioneer can maximize its revenue by assigning spectrum to bidders who value it the most. Conventional truthful designs, however, either fail or become computationally intractable when applied to spectrum auctions. In this paper, we propose VERITAS, a truthful and computationally-efficient spectrum auction to support an eBay-like dynamic spectrum market. VERITAS makes an important contribution of maintaining truthfulness while maximizing spectrum utilization. We show analytically that VERITAS is truthful, efficient, and has a polynomial complexity of O(n^3 k) when n bidders compete for k spectrum bands. Simulation results show that VERITAS outperforms the extensions of conventional truthful designs by up to 200% in spectrum utilization. Finally, VERITAS supports diverse bidding formats and enables the auctioneer to reconfigure allocations for multiple market objectives."
]
}
|
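The McAfee double auction for homogeneous items ( @cite_0 ) admits a compact sketch: sort bids and asks, find the number k of efficient trades, and either clear all k at a uniform candidate price or sacrifice the marginal trade so that truthful bidding stays a dominant strategy. The bid and ask values below are illustrative.

```python
# Sketch of the McAfee double auction mechanism.
def mcafee(bids, asks):
    b = sorted(bids, reverse=True)  # buyer bids, high to low
    s = sorted(asks)                # seller asks, low to high
    # k = largest number of trades with b[k-1] >= s[k-1]
    k = 0
    while k < min(len(b), len(s)) and b[k] >= s[k]:
        k += 1
    if k == 0:
        return 0, None  # no profitable trade
    if k < min(len(b), len(s)):
        p = (b[k] + s[k]) / 2  # candidate uniform clearing price
        if s[k - 1] <= p <= b[k - 1]:
            return k, p  # all k trades clear at price p
    # otherwise drop the marginal trade: k-1 trades, buyers pay b[k-1],
    # sellers receive s[k-1]; the price gap is the mechanism's surplus
    return k - 1, (b[k - 1], s[k - 1])

print(mcafee([10, 8, 6, 2], [1, 3, 5, 7]))  # → (2, (6, 5))
```

In the example, three trades would be efficient, but the candidate price 4.5 falls outside [s_3, b_3] = [5, 6], so the mechanism executes only two trades with buyers paying 6 and sellers receiving 5 — the trade-off between efficiency and truthfulness noted in the abstract above.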
1404.1675
|
1989741244
|
In this paper, we investigate the joint optimal sensing and distributed Medium Access Control (MAC) protocol design problem for cognitive radio (CR) networks. We consider both scenarios with single and multiple channels. For each scenario, we design a synchronized MAC protocol for dynamic spectrum sharing among multiple secondary users (SUs), which incorporates spectrum sensing for protecting active primary users (PUs). We perform saturation throughput analysis for the corresponding proposed MAC protocols that explicitly capture the spectrum-sensing performance. Then, we find their optimal configuration by formulating throughput maximization problems subject to detection probability constraints for PUs. In particular, the optimal solution of the optimization problem returns the required sensing time for PUs' protection and optimal contention window to maximize the total throughput of the secondary network. Finally, numerical results are presented to illustrate developed theoretical findings in this paper and significant performance gains of the optimal sensing and protocol configuration.
|
Various research problems and solution approaches have been considered for the dynamic spectrum sharing problem in the literature. In @cite_12 , @cite_17 , a dynamic power allocation problem for cognitive radio networks was investigated, considering fairness among secondary users and interference constraints for primary users. For the case where only mean channel gains averaged over short-term fading can be estimated, the authors proposed more relaxed protection constraints in terms of interference violation probabilities for the underlying fair power allocation problem. In @cite_19 , information-theoretic limits of cognitive radio channels were derived. A game-theoretic approach to dynamic spectrum sharing was considered in @cite_1 , @cite_11 .
|
{
"cite_N": [
"@cite_1",
"@cite_17",
"@cite_19",
"@cite_12",
"@cite_11"
],
"mid": [
"2111744010",
"2108789935",
"2102617152",
"2152484151",
"2143535483"
],
"abstract": [
"Cognitive radios (CRs) have a great potential to improve spectrum utilization by enabling users to access the spectrum dynamically without disturbing licensed primary radios (PRs). A key challenge in operating these radios as a network is how to implement an efficient medium access control (MAC) mechanism that can adaptively and efficiently allocate transmission powers and spectrum among CRs according to the surrounding environment. Most existing works address this issue via suboptimal heuristic approaches or centralized solutions. In this paper, we propose a novel joint power/channel allocation scheme that improves the performance through a distributed pricing approach. In this scheme, the spectrum allocation problem is modeled as a noncooperative game, with each CR pair acting as a player. A price-based iterative water-filling (PIWF) algorithm is proposed, which enables CR users to reach a good Nash equilibrium (NE). This PIWF algorithm can be implemented distributively with CRs repeatedly negotiating their best transmission powers and spectrum. Simulation results show that the social optimality of the NE solution is dramatically improved through pricing. Depending on the different orders according to which CRs take actions, we study sequential and parallel versions of the PIWF algorithm. We show that the parallel version converges faster than the sequential version. We then propose a corresponding MAC protocol to implement our resource management schemes. The proposed MAC allows multiple CR pairs to be first involved in an admission phase, then iteratively negotiate their transmission powers and spectrum via control-packet exchanges. Following the negotiation phase, CRs proceed concurrently with their data transmissions. Simulations are used to study the performance of our protocol and demonstrate its effectiveness in terms of improving the overall network throughput and reducing the average power consumption.",
"A resource allocation framework is presented for spectrum underlay in cognitive wireless networks. We consider both interference constraints for primary users and quality of service (QoS) constraints for secondary users. Specifically, interference from secondary users to primary users is constrained to be below a tolerable limit. Also, signal to interference plus noise ratio (SINR) of each secondary user is maintained higher than a desired level for QoS insurance. We propose admission control algorithms to be used during high network load conditions which are performed jointly with power control so that QoS requirements of all admitted secondary users are satisfied while keeping the interference to primary users below the tolerable limit. If all secondary users can be supported at minimum rates, we allow them to increase their transmission rates and share the spectrum in a fair manner. We formulate the joint power/rate allocation with proportional and max-min fairness criteria as optimization problems. We show how to transform these optimization problems into a convex form so that their globally optimal solutions can be obtained. Numerical results show that the proposed admission control algorithms achieve performance very close to that of the optimal solution. Also, impacts of different system and QoS parameters on the network performance are investigated for the admission control, and rate/power allocation algorithms under different fairness criteria.",
"Cognitive radio promises a low-cost, highly flexible alternative to the classic single-frequency band, single-protocol wireless device. By sensing and adapting to its environment, such a device is able to fill voids in the wireless spectrum and can dramatically increase spectral efficiency. In this paper, the cognitive radio channel is defined as a two-sender, two-receiver interference channel in which sender 2 obtains the encoded message sender 1 plans to transmit. We consider two cases: in the genie-aided cognitive radio channel, sender 2 is noncausally presented the data to be transmitted by sender 1 while in the causal cognitive radio channel, the data is obtained causally. The cognitive radio at sender 2 may then choose to transmit simultaneously over the same channel, as opposed to waiting for an idle channel as is traditional for a cognitive radio. Our main result is the development of an achievable region which combines Gel'fand-Pinkser coding with an achievable region construction for the interference channel. In the additive Gaussian noise case, this resembles dirty-paper coding, a technique used in the computation of the capacity of the Gaussian multiple-input multiple-output (MIMO) broadcast channel. Numerical evaluation of the region in the Gaussian noise case is performed, and compared to an inner bound, the interference channel, and an outer bound, a modified Gaussian MIMO broadcast channel. Results are also extended to the case in which the message is causally obtained.",
"We investigate the dynamic spectrum sharing problem among primary and secondary users in a cognitive radio network. We consider the scenario where primary users exhibit on-off behavior and secondary users are able to dynamically measure/estimate sum interference from primary users at their receiving ends. For such a scenario, we solve the problem of fair spectrum sharing among secondary users subject to their QoS constraints (in terms of minimum SINR and transmission rate) and interference constraints for primary users. Since tracking channel gains instantaneously for dynamic spectrum allocation may be very difficult in practice, we consider the case where only mean channel gains averaged over short-term fading are available. Under such scenarios, we derive outage probabilities for secondary users and interference constraint violation probabilities for primary users. Based on the analysis, we develop a complete framework to perform joint admission control and rate/power allocation for secondary users such that both QoS and interference constraints are only violated within desired limits. Throughput performance of primary and secondary networks is investigated via extensive numerical analysis considering different levels of implementation complexity due to channel estimation.",
"We address the problem of spectrum pricing in a cognitive radio network where multiple primary service providers compete with each other to offer spectrum access opportunities to the secondary users. By using an equilibrium pricing scheme, each of the primary service providers aims to maximize its profit under quality of service (QoS) constraint for primary users. We formulate this situation as an oligopoly market consisting of a few firms and a consumer. The QoS degradation of the primary services is considered as the cost in offering spectrum access to the secondary users. For the secondary users, we adopt a utility function to obtain the demand function. With a Bertrand game model, we analyze the impacts of several system parameters such as spectrum substitutability and channel quality on the Nash equilibrium (i.e., equilibrium pricing adopted by the primary services). We present distributed algorithms to obtain the solution for this dynamic game. The stability of the proposed dynamic game algorithms in terms of convergence to the Nash equilibrium is studied. However, the Nash equilibrium is not efficient in the sense that the total profit of the primary service providers is not maximized. An optimal solution to gain the highest total profit can be obtained. A collusion can be established among the primary services so that they gain higher profit than that for the Nash equilibrium. However, since one or more of the primary service providers may deviate from the optimal solution, a punishment mechanism may be applied to the deviating primary service provider. A repeated game among primary service providers is formulated to show that the collusion can be maintained if all of the primary service providers are aware of this punishment mechanism, and therefore, properly weight their profits to be obtained in the future."
]
}
|
1404.1675
|
1989741244
|
In this paper, we investigate the joint optimal sensing and distributed Medium Access Control (MAC) protocol design problem for cognitive radio (CR) networks. We consider both scenarios with single and multiple channels. For each scenario, we design a synchronized MAC protocol for dynamic spectrum sharing among multiple secondary users (SUs), which incorporates spectrum sensing for protecting active primary users (PUs). We perform saturation throughput analysis for the corresponding proposed MAC protocols that explicitly capture the spectrum-sensing performance. Then, we find their optimal configuration by formulating throughput maximization problems subject to detection probability constraints for PUs. In particular, the optimal solution of the optimization problem returns the required sensing time for PUs' protection and optimal contention window to maximize the total throughput of the secondary network. Finally, numerical results are presented to illustrate developed theoretical findings in this paper and significant performance gains of the optimal sensing and protocol configuration.
|
There is a rich literature on spectrum sensing for cognitive radio networks (e.g., see @cite_14 and references therein). Classical sensing schemes based on, for example, energy detection techniques or advanced cooperative sensing strategies @cite_2 where multiple secondary users collaborate with one another to improve the sensing performance have been investigated in the literature. There are a large number of papers considering MAC protocol design and analysis for cognitive radio networks @cite_18 - @cite_10 (see @cite_18 for a survey of recent works on this topic). However, these existing works either assumed perfect spectrum sensing or did not explicitly model the sensing imperfection in their design and analysis. In @cite_6 , optimization of the sensing and throughput tradeoff under a detection probability constraint was investigated. It was shown that the detection constraint is met with equality at optimality. However, this optimization tradeoff was only investigated for a simple scenario with one pair of secondary users. Extension of this sensing and throughput tradeoff to wireless fading channels was considered in @cite_7 .
|
{
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_7",
"@cite_6",
"@cite_2",
"@cite_10"
],
"mid": [
"2131997203",
"2101840010",
"2143687524",
"2084436032",
"2107441100",
"2121528570"
],
"abstract": [
"In cognitive radio (CR) networks, identifying the available spectrum resource through spectrum sensing, deciding on the optimal sensing and transmission times, and coordinating with the other users for spectrum access are the important functions of the medium access control (MAC) protocols. In this survey, the characteristic features, advantages, and the limiting factors of the existing CR MAC protocols are thoroughly investigated for both infrastructure-based and ad hoc networks. First, an overview of the spectrum sensing is given, as it ensures that the channel access does not result in interference to the licensed users of the spectrum. Next, a detailed classification of the MAC protocols is presented while considering the infrastructure support, integration of spectrum sensing functionalities, the need for time synchronization, and the number of radio transceivers. The main challenges and future research directions are presented, while highlighting the close coupling of the MAC protocol design with the other layers of the protocol stack.",
"The spectrum sensing problem has gained new aspects with cognitive radio and opportunistic spectrum access concepts. It is one of the most challenging issues in cognitive radio systems. In this paper, a survey of spectrum sensing methodologies for cognitive radio is presented. Various aspects of spectrum sensing problem are studied from a cognitive radio perspective and multi-dimensional spectrum sensing concept is introduced. Challenges associated with spectrum sensing are given and enabling spectrum sensing methods are reviewed. The paper explains the cooperative sensing concept and its various forms. External sensing algorithms and other alternative sensing methods are discussed. Furthermore, statistical modeling of network traffic and utilization of these models for prediction of primary user behavior is studied. Finally, sensing features of some current wireless standards are given.",
"In cognitive radio networks, a cognitive source node requires two essential phases to complete a cognitive transmission process: the phase of spectrum sensing with a certain time duration (also referred to as spectrum sensing overhead) to detect a spectrum hole and the phase of data transmission through the detected spectrum hole. In this paper, we focus on the outage probability analysis of cognitive transmissions by considering the two phases jointly to examine the impact of spectrum sensing overhead on system performance. A closed-form expression of an overall outage probability that accounts for both the probability of no spectrum hole detected and the probability of a channel outage is derived for cognitive transmissions over Rayleigh fading channels. We further conduct an asymptotic outage analysis in high signal-to-noise ratio regions and obtain an optimal spectrum sensing overhead solution to minimize the asymptotic outage probability. Besides, numerical results show that a minimized overall outage probability can be achieved through a tradeoff in determining the time durations for the spectrum hole detection and data transmission phases. In this paper, we also investigate the use of cognitive relay to improve the outage performance of cognitive transmissions. We show that a significant improvement is achieved by the proposed cognitive relay scheme in terms of the overall outage probability.",
"In a cognitive radio network, the secondary users are allowed to utilize the frequency bands of primary users when these bands are not currently being used. To support this spectrum reuse functionality, the secondary users are required to sense the radio frequency environment, and once the primary users are found to be active, the secondary users are required to vacate the channel within a certain amount of time. Therefore, spectrum sensing is of significant importance in cognitive radio networks. There are two parameters associated with spectrum sensing: probability of detection and probability of false alarm. The higher the probability of detection, the better the primary users are protected. However, from the secondary users' perspective, the lower the probability of false alarm, the more chances the channel can be reused when it is available, thus the higher the achievable throughput for the secondary network. In this paper, we study the problem of designing the sensing duration to maximize the achievable throughput for the secondary network under the constraint that the primary users are sufficiently protected. We formulate the sensing-throughput tradeoff problem mathematically, and use energy detection sensing scheme to prove that the formulated problem indeed has one optimal sensing time which yields the highest throughput for the secondary network. Cooperative sensing using multiple mini-slots or multiple secondary users are also studied using the methodology proposed in this paper. Computer simulations have shown that for a 6 MHz channel, when the frame duration is 100 ms, and the signal-to-noise ratio of primary user at the secondary receiver is -20 dB, the optimal sensing time achieving the highest throughput while maintaining 90% detection probability is 14.2 ms. This optimal sensing time decreases when distributed spectrum sensing is applied.",
"One of the main requirements of cognitive radio systems is the ability to reliably detect the presence of licensed primary transmissions. Previous works on the problem of detection for cognitive radio have suggested the necessity of user cooperation in order to be able to detect at the low signal-to-noise ratios experienced in practical situations. We consider a system of cognitive radio users who cooperate with each other in trying to detect licensed transmissions. Assuming that the cooperating nodes use identical energy detectors, we model the received signals as correlated log-normal random variables and study the problem of fusing the decisions made by the individual nodes. We design a linear-quadratic (LQ) fusion strategy based on a deflection criterion for this problem, which takes into account the correlation between the nodes. Using simulations we show that when the observations at the sensors are correlated, the LQ detector significantly outperforms the counting rule, which is the fusion rule that is obtained by ignoring the correlation.",
"We propose the cross-layer based opportunistic multi-channel medium access control (MAC) protocols, which integrate the spectrum sensing at physical (PHY) layer with the packet scheduling at MAC layer, for the wireless ad hoc networks. Specifically, the MAC protocols enable the secondary users to identify and utilize the leftover frequency spectrum in a way that constrains the level of interference to the primary users. In our proposed protocols, each secondary user is equipped with two transceivers. One transceiver is tuned to the dedicated control channel, while the other is designed specifically as a cognitive radio that can periodically sense and dynamically use the identified un-used channels. To obtain the channel state accurately, we propose two collaborative channel spectrum-sensing policies, namely, the random sensing policy and the negotiation-based sensing policy, to help the MAC protocols detect the availability of leftover channels. Under the random sensing policy, each secondary user just randomly selects one of the channels for sensing. On the other hand, under the negotiation-based sensing policy, different secondary users attempt to select the distinct channels to sense by overhearing the control packets over the control channel. We develop the Markov chain model and the M/GY/1-based queueing model to characterize the performance of our proposed multi-channel MAC protocols under the two types of channel-sensing policies for the saturation network and the non-saturation network scenarios, respectively. In the non-saturation network case, we quantitatively identify the tradeoff between the aggregate traffic throughput and the packet transmission delay, which can provide the insightful guidelines to improve the delay-QoS provisionings over cognitive radio wireless networks."
]
}
|
1404.1674
|
2150431092
|
In this paper, we consider the channel assignment problem for cognitive radio networks with hardware-constrained secondary users (SUs). In particular, we assume that SUs exploit spectrum holes on a set of channels where each SU can use at most one available channel for communication. We present the optimal brute-force search algorithm to solve the corresponding nonlinear integer optimization problem and analyze its complexity. Because the optimal solution has exponential complexity with the numbers of channels and SUs, we develop two low-complexity channel assignment algorithms that can efficiently utilize the spectrum holes. In the first algorithm, SUs are assigned distinct sets of channels. We show that this algorithm achieves the maximum throughput limit if the number of channels is sufficiently large. In addition, we propose an overlapping channel assignment algorithm that can improve the throughput performance compared with its nonoverlapping channel assignment counterpart. Moreover, we design a distributed medium access control (MAC) protocol for access contention resolution and integrate it into the overlapping channel assignment algorithm. We then analyze the saturation throughput and the complexity of the proposed channel assignment algorithms. We also present several potential extensions, including the development of greedy channel assignment algorithms under the max-min fairness criterion and throughput analysis, considering sensing errors. Finally, numerical results are presented to validate the developed theoretical results and illustrate the performance gains due to the proposed channel assignment algorithms.
|
Developing efficient spectrum sensing and access mechanisms for cognitive radio networks has been a very active research topic in the last several years @cite_9 , @cite_29 - @cite_15 . A great survey of recent works on MAC protocol design and analysis is given in @cite_29 . In @cite_9 , it was shown that by optimizing the sensing time, a significant throughput gain can be achieved for an SU. In @cite_3 , we extended the result in @cite_9 to the multi-user setting where we designed, analyzed, and optimized a MAC protocol to achieve an optimal tradeoff between sensing time and contention overhead. In fact, we assumed that each SU can use all available channels simultaneously in @cite_3 . Therefore, the channel assignment problem and the exploitation of multi-user diversity do not exist in this setting, which is the topic of our current paper. Another related effort along this line was conducted in @cite_4 where sensing-period optimization and optimal channel-sequencing algorithms were proposed to efficiently discover spectrum holes and to minimize the exploration delay.
|
{
"cite_N": [
"@cite_4",
"@cite_9",
"@cite_29",
"@cite_3",
"@cite_15"
],
"mid": [
"2118314698",
"2084436032",
"2131997203",
"1989741244",
"2103667483"
],
"abstract": [
"Sensing monitoring of spectrum-availability has been identified as a key requirement for dynamic spectrum allocation in cognitive radio networks (CRNs). An important issue associated with MAC-layer sensing in CRNs is how often to sense the availability of licensed channels and in which order to sense those channels. To resolve this issue, we address (1) how to maximize the discovery of spectrum opportunities by sensing-period adaptation and (2) how to minimize the delay in finding an available channel. Specifically, we develop a sensing-period optimization mechanism and an optimal channel-sequencing algorithm, as well as an environment- adaptive channel-usage pattern estimation method. Our simulation results demonstrate the efficacy of the proposed schemes and its significant performance improvement over nonoptimal schemes. The sensing-period optimization discovers more than 98 percent of the analytical maximum of discoverable spectrum-opportunities, regardless of the number of channels sensed. For the scenarios tested, the proposed scheme is shown to discover up to 22 percent more opportunities than nonoptimal schemes, which may become even greater with a proper choice of initial sensing periods. The idle-channel discovery delay with the optimal channel-sequencing technique ranges from 0.08 to 0.35 seconds under the tested scenarios, which is much faster than nonoptimal schemes. Moreover, our estimation method is shown to track time-varying channel-parameters accurately.",
"In a cognitive radio network, the secondary users are allowed to utilize the frequency bands of primary users when these bands are not currently being used. To support this spectrum reuse functionality, the secondary users are required to sense the radio frequency environment, and once the primary users are found to be active, the secondary users are required to vacate the channel within a certain amount of time. Therefore, spectrum sensing is of significant importance in cognitive radio networks. There are two parameters associated with spectrum sensing: probability of detection and probability of false alarm. The higher the probability of detection, the better the primary users are protected. However, from the secondary users' perspective, the lower the probability of false alarm, the more chances the channel can be reused when it is available, thus the higher the achievable throughput for the secondary network. In this paper, we study the problem of designing the sensing duration to maximize the achievable throughput for the secondary network under the constraint that the primary users are sufficiently protected. We formulate the sensing-throughput tradeoff problem mathematically, and use energy detection sensing scheme to prove that the formulated problem indeed has one optimal sensing time which yields the highest throughput for the secondary network. Cooperative sensing using multiple mini-slots or multiple secondary users are also studied using the methodology proposed in this paper. Computer simulations have shown that for a 6 MHz channel, when the frame duration is 100 ms, and the signal-to-noise ratio of primary user at the secondary receiver is -20 dB, the optimal sensing time achieving the highest throughput while maintaining 90% detection probability is 14.2 ms. This optimal sensing time decreases when distributed spectrum sensing is applied.",
"In cognitive radio (CR) networks, identifying the available spectrum resource through spectrum sensing, deciding on the optimal sensing and transmission times, and coordinating with the other users for spectrum access are the important functions of the medium access control (MAC) protocols. In this survey, the characteristic features, advantages, and the limiting factors of the existing CR MAC protocols are thoroughly investigated for both infrastructure-based and ad hoc networks. First, an overview of the spectrum sensing is given, as it ensures that the channel access does not result in interference to the licensed users of the spectrum. Next, a detailed classification of the MAC protocols is presented while considering the infrastructure support, integration of spectrum sensing functionalities, the need for time synchronization, and the number of radio transceivers. The main challenges and future research directions are presented, while highlighting the close coupling of the MAC protocol design with the other layers of the protocol stack.",
"In this paper, we investigate the joint optimal sensing and distributed Medium Access Control (MAC) protocol design problem for cognitive radio (CR) networks. We consider both scenarios with single and multiple channels. For each scenario, we design a synchronized MAC protocol for dynamic spectrum sharing among multiple secondary users (SUs), which incorporates spectrum sensing for protecting active primary users (PUs). We perform saturation throughput analysis for the corresponding proposed MAC protocols that explicitly capture the spectrum-sensing performance. Then, we find their optimal configuration by formulating throughput maximization problems subject to detection probability constraints for PUs. In particular, the optimal solution of the optimization problem returns the required sensing time for PUs' protection and optimal contention window to maximize the total throughput of the secondary network. Finally, numerical results are presented to illustrate developed theoretical findings in this paper and significant performance gains of the optimal sensing and protocol configuration.",
"As the radio spectrum usage paradigm shifting from the traditional command and control allocation scheme to the open spectrum allocation scheme, wireless ad-hoc networks meet new opportunities and challenges. The open spectrum allocation scheme has potential to provide those networks more capacity, and make them more flexible and reliable. However, the freedom brought by the new spectrum allocation scheme introduces spectrum management and network coordination challenges. Moreover, wireless ad-hoc networks usually rely on a common control channel for operation. Such a control channel may, however, not always available in an open spectrum allocation scheme due to the interference and the need for coexistence with primary users of the spectrum. Instead, common channels most likely exist in a local area.In this paper, we propose a cluster-based framework to form a wireless mesh network in the context of open spectrum sharing. Clusters are constructed by neighbor nodes sharing local common channels, and the network is formed by interconnecting the clusters gradually. We identify issues in such a network and provide mechanisms for neighbor discovery, cluster formation, network formation, and network topology management. The unique feature of this network is its ability to intelligently adapt to the network and radio environment change."
]
}
|
1404.1674
|
2150431092
|
In this paper, we consider the channel assignment problem for cognitive radio networks with hardware-constrained secondary users (SUs). In particular, we assume that SUs exploit spectrum holes on a set of channels where each SU can use at most one available channel for communication. We present the optimal brute-force search algorithm to solve the corresponding nonlinear integer optimization problem and analyze its complexity. Because the optimal solution has exponential complexity with the numbers of channels and SUs, we develop two low-complexity channel assignment algorithms that can efficiently utilize the spectrum holes. In the first algorithm, SUs are assigned distinct sets of channels. We show that this algorithm achieves the maximum throughput limit if the number of channels is sufficiently large. In addition, we propose an overlapping channel assignment algorithm that can improve the throughput performance compared with its nonoverlapping channel assignment counterpart. Moreover, we design a distributed medium access control (MAC) protocol for access contention resolution and integrate it into the overlapping channel assignment algorithm. We then analyze the saturation throughput and the complexity of the proposed channel assignment algorithms. We also present several potential extensions, including the development of greedy channel assignment algorithms under the max-min fairness criterion and throughput analysis, considering sensing errors. Finally, numerical results are presented to validate the developed theoretical results and illustrate the performance gains due to the proposed channel assignment algorithms.
|
In @cite_16 , a control-channel-based MAC protocol was proposed for secondary users to exploit white spaces in the cognitive ad hoc network setting. In particular, the authors of this paper developed both random and negotiation-based spectrum sensing schemes and performed throughput analysis for both saturation and non-saturation scenarios. Several other synchronous cognitive MAC protocols rely on a control channel for spectrum negotiation and access, including those in @cite_31 , @cite_14 , @cite_35 , @cite_0 , @cite_2 . A synchronous MAC protocol that does not use a control channel was proposed and studied in @cite_33 . In @cite_22 , a MAC layer framework was developed to dynamically reconfigure MAC and physical layer protocols. Here, by monitoring current network metrics, the proposed framework can achieve good performance by selecting the best MAC protocol and its corresponding configuration.
|
{
"cite_N": [
"@cite_35",
"@cite_14",
"@cite_33",
"@cite_22",
"@cite_0",
"@cite_2",
"@cite_31",
"@cite_16"
],
"mid": [
"2188462982",
"2113326559",
"2126867611",
"2145772337",
"2152077793",
"2097872599",
"2079855828",
"2121528570"
],
"abstract": [
"We present a MAC protocol for opportunistic spectrum access (OSA-MAC) in cognitive wireless networks. The proposed MAC protocol works in a multi-channel environment which is capable of performing channel sensing to discover spectrum opportunities. For this MAC protocol, two channel selection methods are considered which trade the implementation complexity with throughput improvement. We then analyze the saturation throughput performance of the proposed MAC protocol under scenarios where the probabilities for each channel to be available to different secondary flows are the same or different. We then derive analytically the probability of collision of secondary users with primary users due to sensing errors. This analysis can be used, for example, to determine the requirement of sensing accuracy for secondary users, and to design an admission control method. We present numerical results to demonstrate the throughput performance of the OSA-MAC protocol and applications of the proposed analytical model.",
"Recently, the CR (cognitive radio) technology is gathering more and more attention because it has the capacity to deal with the scarcity of the precious spectrum resource. Within the domain of CR technology, channel management of CR is of utmost importance due to its key role in the performance enhancement of the transmission and the minimum interference to the primary users as well. An 802.11 WLAN based ad-hoc protocol using the cognitive radio has been proposed in this paper. It provides the detection and protection for incumbent systems around the communication pair by separating the spectrum into the data channels and common control channel. By adding the available channel list into the RTS and CTS, the communication pair can know which data sub channels are available (i.e., no incumbent signal). We proposed an ENNI (exchanging of neighbor nodes information) mechanism to deal with the hidden incumbent device problem. The simulation results show that by using our protocol the hidden incumbent device problem (HIDP) can be solved successfully.",
"Cognitive networks enable efficient sharing of the radio spectrum. Multi-hop cognitive network is a cooperative network in which cognitive users take help of their neighbors to forward data to the destination. Control signals used to enable cooperation communicate through a common control channel (CCC). Such usage introduces conditions like channel saturation which degrades the overall performance of the network. Thus, exchanging control information is a major challenge in cognitive radio networks. This paper proposes an alternative MAC protocol for multi-hop cognitive radio networks in which the use of a CCC is avoided. The scheme is applicable in heterogeneous environments where channels have different bandwidths and frequencies of operation. It inherently provides a solution to issues like CCC saturation problem, Denial of Service attacks (DoS) and multi-channel hidden problem. The proposed protocol is shown to provide better connectivity and higher throughput than a CCC based protocol, especially when the network is congested.",
"Software-defined cognitive radio has recently made the jump from a purely research driven endeavor to one that is now being driven commercially. Such radios offer the promise of spectrum agility and re-configurability through flexibility at the MAC and physical layers. In the wireless domain, it has been shown that hybrid-MAC layer algorithms can lead to improved overall performance in varying network conditions. A hybrid MAC that uses CSMA in low-contention periods and switches to TDMA in high-contention periods can outperform CSMA or TDMA individually. However, such hybrid systems do not offer the flexibility and cognition required of dynamic spectrum networks. In this paper, we describe MultiMAC, a framework and experimental platform for evaluating algorithms that dynamically reconfigure MAC and physical layer properties. MultiMAC acts as a mediating MAC layer and dynamically reconfigures and or selects from a collection of alternative MAC layers. As a result of monitoring current network metrics, MultiMAC chooses the MAC layer capable of achieving the best performance while ensuring that incoming frames are decoded using the correct MAC layer algorithm. MultiMAC incorporates decision processes to select the appropriate MAC component based on per-node and per-flow statistics. This engine will allow intelligent reconfiguration of the MAC and physical layers in response to changes in external conditions and or requirements optimizing use of the available spectrum",
"A number of algorithmic and protocol assumptions taken for granted in the design of existing wireless communication technologies need to be revisited in extending their scope to the new cognitive radio (CR) paradigm. The fact that channel availability can rapidly change over time and the need for coordinated quiet periods in order to quickly and robustly detect the presence of incumbents, are just some of the examples of the unique challenges in protocol and algorithm design for CR networks and, in particular, in the medium access control (MAC) layer. With this in mind, in this paper we introduce a novel cognitive MAC (C-MAC) protocol for distributed multi-channel wireless networks. C-MAC operates over multiple channels, and hence is able to effectively deal with, among other things, the dynamics of resource availability due to primary users and mitigate the effects of distributed quiet periods utilized for primary user signal detection. In C-MAC, each channel is logically divided into recurring superframes which, in turn, include a slotted beaconing period (BP) where nodes exchange information and negotiate channel usage. Each node transmits a beacon in a designated beacon slot during the BP, which helps in dealing with hidden nodes, medium reservations, and mobility. For coordination amongst nodes in different channels, a rendezvous channel (RC) is employed that is decided dynamically and in a totally distributed fashion. Among other things, the RC is used to support network-wide multicast and broadcast which are often neglected in existing multi-channel MAC protocols. We present promising performance results of C-MAC. We also describe our efforts to implement features of C-MAC in a real CR prototype with Atheros chipset, which currently includes the spectrum sensing module and preliminary features of C-MAC.",
"The MAC protocol of a cognitive radio (CR) device should allow it to access unused or under-utilized spectrum without (or with minimal) interference to primary users dynamically. To fulfill such a goal, we propose a cognitive MAC protocol using statistical channel allocation and call it SCA-MAC in this work. SCA-MAC is a CSMA/CA-based protocol, which exploits statistics of spectrum usage for decision making on channel access. For each transmission, the sender negotiates with the receiver on transmission parameters through the control channel. A model is developed for CR devices to evaluate the successful rate of transmission. A CR device should pass the threshold of the successful transmission rate via negotiation before it can begin a valid transmission on data channels. The operating range and channel aggregation are two control parameters introduced to maintain the MAC performance. To validate our ideas, we conducted theoretical analysis and simulations to show that SCA-MAC does improve the throughput performance and guarantee the interference to incumbents to be bounded by a predetermined acceptable rate. The proposed MAC protocol does not need a centralized controller, as the negotiation between the sender and the receiver is performed using the CSMA/CA-based algorithm.",
"Recently, cognitive radio technology has attracted more and more attention since it is a novel and effective approach to improve the utilization of the precious radio spectrum. We propose MAC protocols for the cognitive radio based wireless networks. Specifically, the cognitive MAC protocols allow secondary users to identify and use the available frequency spectrum in a way that constrains the level of interference to the primary users. In our schemes, each secondary user is equipped with two transceivers. One of the transceivers is tuned to a dedicated control channel, while the other is used as a cognitive radio that can periodically sense and dynamically use an identified available channels. Our proposed schemes integrate the spectrum sensing at the PHY layer and packet scheduling at the MAC layer. Our schemes smoothly coordinate the two transceivers of the secondary users to enable them to collaboratively sense and dynamically utilize the available frequency spectrum.",
"We propose the cross-layer based opportunistic multi-channel medium access control (MAC) protocols, which integrate the spectrum sensing at physical (PHY) layer with the packet scheduling at MAC layer, for the wireless ad hoc networks. Specifically, the MAC protocols enable the secondary users to identify and utilize the leftover frequency spectrum in a way that constrains the level of interference to the primary users. In our proposed protocols, each secondary user is equipped with two transceivers. One transceiver is tuned to the dedicated control channel, while the other is designed specifically as a cognitive radio that can periodically sense and dynamically use the identified un-used channels. To obtain the channel state accurately, we propose two collaborative channel spectrum-sensing policies, namely, the random sensing policy and the negotiation-based sensing policy, to help the MAC protocols detect the availability of leftover channels. Under the random sensing policy, each secondary user just randomly selects one of the channels for sensing. On the other hand, under the negotiation-based sensing policy, different secondary users attempt to select the distinct channels to sense by overhearing the control packets over the control channel. We develop the Markov chain model and the M GY 1-based queueing model to characterize the performance of our proposed multi-channel MAC protocols under the two types of channel-sensing policies for the saturation network and the non-saturation network scenarios, respectively. In the non-saturation network case, we quantitatively identify the tradeoff between the aggregate traffic throughput and the packet transmission delay, which can provide the insightful guidelines to improve the delay-QoS provisionings over cognitive radio wireless networks."
]
}
|
1404.1674
|
2150431092
|
In this paper, we consider the channel assignment problem for cognitive radio networks with hardware-constrained secondary users (SUs). In particular, we assume that SUs exploit spectrum holes on a set of channels where each SU can use at most one available channel for communication. We present the optimal brute-force search algorithm to solve the corresponding nonlinear integer optimization problem and analyze its complexity. Because the optimal solution has exponential complexity with the numbers of channels and SUs, we develop two low-complexity channel assignment algorithms that can efficiently utilize the spectrum holes. In the first algorithm, SUs are assigned distinct sets of channels. We show that this algorithm achieves the maximum throughput limit if the number of channels is sufficiently large. In addition, we propose an overlapping channel assignment algorithm that can improve the throughput performance compared with its nonoverlapping channel assignment counterpart. Moreover, we design a distributed medium access control (MAC) protocol for access contention resolution and integrate it into the overlapping channel assignment algorithm. We then analyze the saturation throughput and the complexity of the proposed channel assignment algorithms. We also present several potential extensions, including the development of greedy channel assignment algorithms under the max-min fairness criterion and throughput analysis, considering sensing errors. Finally, numerical results are presented to validate the developed theoretical results and illustrate the performance gains due to the proposed channel assignment algorithms.
|
In @cite_13 , a power-controlled MAC protocol was developed to efficiently exploit spectrum access opportunities while satisfactorily protecting PUs by respecting interference constraints. Another power control framework was described in @cite_19 , which aims to meet the rate requirements of SUs and interference constraints of PUs. A novel clustering algorithm was devised in @cite_15 for network formation, topology control, and exploitation of spectrum holes in a cognitive mesh network. It was shown that the proposed clustering mechanism can efficiently adapt to the changes in the network and radio transmission environment.
|
{
"cite_N": [
"@cite_19",
"@cite_15",
"@cite_13"
],
"mid": [
"2157051496",
"2103667483",
"2135562732"
],
"abstract": [
"A multi-channel parallel transmission protocol is proposed for the medium access control in cognitive radio networks (CRNs). This protocol contains two key elements: multi-channel assignment and multi-channel contention. For an incoming flow-based connection request, the minimum number of parallel channels are assigned to satisfy the rate and interference mask constraints. For the contention of the assigned channels, our protocol provides an extension of the single-channel RTS- CTS-DATA-ACK handshaking of the IEEE 802.11 scheme. The proposed MAC coherently integrates optimization results into a practical implementation. Through numerical examples, we verify that our protocol provides lower connection blocking probability and higher system throughput for CRNs than its single-channel counterpart.",
"As the radio spectrum usage paradigm shifting from the traditional command and control allocation scheme to the open spectrum allocation scheme, wireless ad-hoc networks meet new opportunities and challenges. The open spectrum allocation scheme has potential to provide those networks more capacity, and make them more flexible and reliable. However, the freedom brought by the new spectrum allocation scheme introduces spectrum management and network coordination challenges. Moreover, wireless ad-hoc networks usually rely on a common control channel for operation. Such a control channel may, however, not always available in an open spectrum allocation scheme due to the interference and the need for coexistence with primary users of the spectrum. Instead, common channels most likely exist in a local area.In this paper, we propose a cluster-based framework to form a wireless mesh network in the context of open spectrum sharing. Clusters are constructed by neighbor nodes sharing local common channels, and the network is formed by interconnecting the clusters gradually. We identify issues in such a network and provide mechanisms for neighbor discovery, cluster formation, network formation, and network topology management. The unique feature of this network is its ability to intelligently adapt to the network and radio environment change.",
"Cognitive radio (CR) is the key enabling technology for an efficient dynamic spectrum access. It aims at exploiting an underutilized licensed spectrum by enabling opportunistic communications for unlicensed users. In this work, we first develop a distributed cognitive radio MAC (COMAC) protocol that enables unlicensed users to dynamically utilize the spectrum while limiting the interference on primary (PR) users. The main novelty in COMAC lies in not assuming a predefined CR-to-PR power mask and not requiring active coordination with PR users. COMAC provides a statistical performance guarantee for PR users by limiting the fraction of the time during which the PR users' reception is negatively affected by CR transmissions. To provide such a guarantee, we develop probabilistic models for the PR-to-PR and the PR-to-CR interference under a Rayleigh fading channel model. From these models, we derive closed-form expressions for the mean and variance of interference. Empirical results show that the distribution of the interference is approximately lognormal. Based on the developed interference models, we derive a closed-form expression for the maximum allowable power for a CR transmission. We extend the min-hop routing to exploit the available channel information for improving the perceived throughput. Our simulation results indicate that COMAC satisfies its target soft guarantees under different traffic loads and arbitrary user deployment scenarios. Results also show that exploiting the available channel information for the routing decisions can improve the end-to-end throughput of the CR network (CRN)."
]
}
|
1404.1089
|
2027242841
|
The Hamilton Jacobi Bellman Equation (HJB) provides the globally optimal solution to large classes of control problems. Unfortunately, this generality comes at a price: the calculation of such solutions is typically intractable for systems with more than moderate state space size due to the curse of dimensionality. This work combines recent results on the structure of the HJB, and its reduction to a linear Partial Differential Equation (PDE), with methods based on low-rank tensor representations, known as separated representations, to address the curse of dimensionality. The result is an algorithm to solve optimal control problems which scales linearly with the number of states in a system, and is applicable to systems that are nonlinear with stochastic forcing in finite-horizon, average-cost, and first-exit settings. The method is demonstrated on inverted pendulum, VTOL aircraft, and quadcopter models, with system dimension two, six, and twelve respectively.
|
Tensor approximations have historically been developed with the goal of approximating high-dimensional data, giving rise to the framework used here under the names CANDECOMP/PARAFAC @cite_26 @cite_32 . However, @cite_6 demonstrated that these approximation techniques were applicable to the linear systems describing discretized PDEs. This technique has been applied in several domains, including computational chemistry and quantum physics, among others @cite_9 . In particular, @cite_5 examines their use in the context of stationary Fokker-Planck equations. There are interesting connections between the fundamental task of these techniques, approximating a tensor with one of lower rank, and convex-relaxation-based methods @cite_42 @cite_36 . Unfortunately, low-rank tensor approximation is NP-hard in general, and an optimal solution is not to be expected @cite_0 .
|
{
"cite_N": [
"@cite_26",
"@cite_36",
"@cite_9",
"@cite_42",
"@cite_32",
"@cite_6",
"@cite_0",
"@cite_5"
],
"mid": [
"2121739212",
"2078677240",
"1989786408",
"2167077875",
"2000215628",
"2073469810",
"2106221905",
"1985692767"
],
"abstract": [
"Simple structure and other common principles of factor rotation do not in general provide strong grounds for attributing explanatory significance to the factors which they select. In contrast, it is shown that an extension of Cattell's principle of rotation to Proportional Profiles (PP) offers a basis for determining explanatory factors for three-way or higher order multi-mode data. Conceptual models are developed for two basic patterns of multi-mode data variation, systemand object-variation, and PP analysis is found to apply in the system-variation case. Although PP was originally formulated as a principle of rotation to be used with classic two-way factor analysis, it is shown to embody a latent three-mode factor model, which is here made explicit and generalized frown two to N \"parallel occasions\". As originally formulated, PP rotation was restricted to orthogonal factors. The generalized PP model is demonstrated to give unique \"correct\" solutions with oblique, non-simple structure, and even non-linear factor structures. A series of tests, conducted with synthetic data of known factor composition, demonstrate the capabilities of linear and non-linear versions of the model, provide data on the minimal necessary conditions of uniqueness, and reveal the properties of the analysis procedures when these minimal conditions are not fulfilled. In addition, a mathematical proof is presented for the uniqueness of the solution given certain conditions on the data. Three-mode PP factor analysis is applied to a three-way set of real data consisting of the fundamental and first three formant frequencies of 11 persons saying 8 vowels. A unique solution is extracted, consisting of three factors which are highly meaningful and consistent with prior knowledge and theory concerning vowel quality. The relationships between the three-mode PP model and Tucker's multi-modal model, McDonald's non-linear model and Carroll and Chang's multi-dimensional scaling model are explored.",
"In this paper we consider sparsity on a tensor level, as given by the n-rank of a tensor. In an important sparse-vector approximation problem (compressed sensing) and the low-rank matrix recovery problem, using a convex relaxation technique proved to be a valuable solution strategy. Here, we will adapt these techniques to the tensor setting. We use the n-rank of a tensor as a sparsity measure and consider the low-n-rank tensor recovery problem, i.e. the problem of finding the tensor of the lowest n-rank that fulfills some linear constraints. We introduce a tractable convex relaxation of the n-rank and propose efficient algorithms to solve the low-n-rank tensor recovery problem numerically. The algorithms are based on the Douglas–Rachford splitting technique and its dual variant, the alternating direction method of multipliers.",
"Abstract In the present paper, we give a survey of the recent results and outline future prospects of the tensor-structured numerical methods in applications to multidimensional problems in scientific computing. The guiding principle of the tensor methods is an approximation of multivariate functions and operators relying on a certain separation of variables. Along with the traditional canonical and Tucker models, we focus on the recent quantics-TT tensor approximation method that allows to represent N-d tensors with log-volume complexity, O ( d log N ). We outline how these methods can be applied in the framework of tensor truncated iteration for the solution of the high-dimensional elliptic parabolic equations and parametric PDEs. Numerical examples demonstrate that the tensor-structured methods have proved their value in application to various computational problems arising in quantum chemistry and in the multi-dimensional parametric FEM BEM modeling—the tool apparently works and gives the promise for future use in challenging high-dimensional applications.",
"In applications throughout science and engineering one is often faced with the challenge of solving an ill-posed inverse problem, where the number of available measurements is smaller than the dimension of the model to be estimated. However in many practical situations of interest, models are constrained structurally so that they only have a few degrees of freedom relative to their ambient dimension. This paper provides a general framework to convert notions of simplicity into convex penalty functions, resulting in convex optimization solutions to linear, underdetermined inverse problems. The class of simple models considered includes those formed as the sum of a few atoms from some (possibly infinite) elementary atomic set; examples include well-studied cases from many technical fields such as sparse vectors (signal processing, statistics) and low-rank matrices (control, statistics), as well as several others including sums of a few permutation matrices (ranked elections, multiobject tracking), low-rank tensors (computer vision, neuroscience), orthogonal matrices (machine learning), and atomic measures (system identification). The convex programming formulation is based on minimizing the norm induced by the convex hull of the atomic set; this norm is referred to as the atomic norm. The facial structure of the atomic norm ball carries a number of favorable properties that are useful for recovering simple models, and an analysis of the underlying convex geometry provides sharp estimates of the number of generic measurements required for exact and robust recovery of models from partial information. These estimates are based on computing the Gaussian widths of tangent cones to the atomic norm ball. When the atomic set has algebraic structure the resulting optimization problems can be solved or approximated via semidefinite programming. 
The quality of these approximations affects the number of measurements required for recovery, and this tradeoff is characterized via some examples. Thus this work extends the catalog of simple models (beyond sparse vectors and low-rank matrices) that can be recovered from limited linear information via tractable convex programming.",
"An individual differences model for multidimensional scaling is outlined in which individuals are assumed differentially to weight the several dimensions of a common “psychological space”. A corresponding method of analyzing similarities data is proposed, involving a generalization of “Eckart-Young analysis” to decomposition of three-way (or higher-way) tables. In the present case this decomposition is applied to a derived three-way table of scalar products between stimuli for individuals. This analysis yields a stimulus by dimensions coordinate matrix and a subjects by dimensions matrix of weights. This method is illustrated with data on auditory stimuli and on perception of nations.",
"Nearly every numerical analysis algorithm has computational complexity that scales exponentially in the underlying physical dimension. The separated representation, introduced previously, allows many operations to be performed with scaling that is formally linear in the dimension. In this paper we further develop this representation by (i) discussing the variety of mechanisms that allow it to be surprisingly efficient; (ii) addressing the issue of conditioning; (iii) presenting algorithms for solving linear systems within this framework; and (iv) demonstrating methods for dealing with antisymmetric functions, as arise in the multiparticle Schrodinger equation in quantum mechanics. Numerical examples are given.",
"Abstract We prove that computing the rank of a three-dimensional tensor over any finite field is NP-complete. Over the rational numbers the problem is NP-hard.",
"This paper addresses the curse of dimensionality in the numerical solution of stationary Fokker-Planck equa- tions. Combined with Chebyshev spectral differentiation, the tensor approach significantly reduces the degrees of freedom of the approximation essentially in exchange for nonlinearity, such that the resulting discretized nonlinear system is solved by alternating least squares. Enforcement of the normality condition via a penalty method avoids the need for exploration of the the null space of the discretized Fokker-Planck operator. The proposed method enables a drastic reduction of degrees of freedom required to maintain accuracy as dimensionality increases. Numerical results are presented to illustrate the effectiveness of the proposed method. I. INTRODUCTION"
]
}
|
1404.0662
|
1823468964
|
I study two privacy-preserving social network graphs to disclose the types of relationships of connecting edges and provide flexible multigrained access control. To create such graphs, my schemes employ the concept of secretaries and types of relationships. They are significantly more efficient than those using expensive cryptographic primitives. I also show how these schemes can be used for multigrained access control with various options. In addition, I describe how resilient these schemes are to inference of the types of connecting edges.
|
Hay @cite_14 defined the @math -candidate anonymity model for social networks, under which, for every structural query over the graph, there exist at least @math nodes that match the query. They then suggested a graph perturbation that modifies the graph by edge deletions followed by edge insertions. This approach could limit an attacker's capability to re-identify identities; however, it could not fully support @math -candidate anonymity.
|
{
"cite_N": [
"@cite_14"
],
"mid": [
"1650675509"
],
"abstract": [
"In today’s chaotic network, data and services are mobile and replicated widely for availability, durability, and locality. Components within this infrastructure interact in rich and complex ways, greatly stressing traditional approaches to name service and routing. This paper explores an alternative to traditional approaches called Tapestry. Tapestry is an overlay location and routing infrastructure that provides location-independent routing of messages directly to the closest copy of an object or service using only point-to-point links and without centralized resources. The routing and directory information within this infrastructure is purely soft state and easily repaired. Tapestry is self-administering, faulttolerant, and resilient under load. This paper presents the architecture and algorithms of Tapestry and explores their advantages through a number of experiments."
]
}
|
1404.0662
|
1823468964
|
I study two privacy-preserving social network graphs to disclose the types of relationships of connecting edges and provide flexible multigrained access control. To create such graphs, my schemes employ the concept of secretaries and types of relationships. They are significantly more efficient than those using expensive cryptographic primitives. I also show how these schemes can be used for multigrained access control with various options. In addition, I describe how resilient these schemes are to inference of the types of connecting edges.
|
Liu and Terzi @cite_10 suggested the @math -degree anonymous graph to reduce the possibility of an attacker structurally inferring a graph from the degree of each node. A graph is @math -degree anonymous if, for every node, there exist @math other nodes with the same degree. They argued that this was an efficient graph-anonymization technique; however, creating such a graph requires probing all nodes and their edges. Moreover, the positions of the created nodes with the same degree should be considered.
|
{
"cite_N": [
"@cite_10"
],
"mid": [
"1998091733"
],
"abstract": [
"The proliferation of network data in various application domains has raised privacy concerns for the individuals involved. Recent studies show that simply removing the identities of the nodes before publishing the graph social network data does not guarantee privacy. The structure of the graph itself, and in its basic form the degree of the nodes, can be revealing the identities of individuals. To address this issue, we study a specific graph-anonymization problem. We call a graph k-degree anonymous if for every node v, there exist at least k-1 other nodes in the graph with the same degree as v. This definition of anonymity prevents the re-identification of individuals by adversaries with a priori knowledge of the degree of certain nodes. We formally define the graph-anonymization problem that, given a graph G, asks for the k-degree anonymous graph that stems from G with the minimum number of graph-modification operations. We devise simple and efficient algorithms for solving this problem. Our algorithms are based on principles related to the realizability of degree sequences. We apply our methods to a large spectrum of synthetic and real datasets and demonstrate their efficiency and practical utility."
]
}
|
1404.0662
|
1823468964
|
I study two privacy-preserving social network graphs to disclose the types of relationships of connecting edges and provide flexible multigrained access control. To create such graphs, my schemes employ the concept of secretaries and types of relationships. They are significantly more efficient than those using expensive cryptographic primitives. I also show how these schemes can be used for multigrained access control with various options. In addition, I describe how resilient these schemes are to inference of the types of connecting edges.
|
Zou and Pei @cite_12 described an algorithm, based on greedy heuristics, for the subgraph created by the immediate @math neighbors of a node. They defined the @math -neighborhood anonymity of a graph. Given a graph @math , for each node @math , the algorithm finds the @math neighbor nodes of @math . For every pair of nodes @math and for each pair @math , the algorithm modifies the neighborhood subgraph of @math and the neighborhood subgraph of @math to make them isomorphic to each other. However, this algorithm is heuristic and, moreover, an expensive approach.
|
{
"cite_N": [
"@cite_12"
],
"mid": [
"2096296626"
],
"abstract": [
"Recently, as more and more social network data has been published in one way or another, preserving privacy in publishing social network data becomes an important concern. With some local knowledge about individuals in a social network, an adversary may attack the privacy of some victims easily. Unfortunately, most of the previous studies on privacy preservation can deal with relational data only, and cannot be applied to social network data. In this paper, we take an initiative towards preserving privacy in social network data. We identify an essential type of privacy attacks: neighborhood attacks. If an adversary has some knowledge about the neighbors of a target victim and the relationship among the neighbors, the victim may be re-identified from a social network even if the victim's identity is preserved using the conventional anonymization techniques. We show that the problem is challenging, and present a practical solution to battle neighborhood attacks. The empirical study indicates that anonymized social networks generated by our method can still be used to answer aggregate network queries with high accuracy."
]
}
|
1404.0662
|
1823468964
|
I study two privacy-preserving social network graphs to disclose the types of relationships of connecting edges and provide flexible multigrained access control. To create such graphs, my schemes employ the concept of secretaries and types of relationships. They are significantly more efficient than those using expensive cryptographic primitives. I also show how these schemes can be used for multigrained access control with various options. In addition, I describe how resilient these schemes are to inference of the types of connecting edges.
|
Zheleva and Getoor @cite_1 addressed the problem of preserving the privacy of sensitive relationships in a social network. The authors introduced five different edge-anonymization algorithms that adopt different strategies for removing or clustering edges between nodes under very specific assumptions.
|
{
"cite_N": [
"@cite_1"
],
"mid": [
"1544573083"
],
"abstract": [
"In this paper, we focus on the problem of preserving the privacy of sensitive relationships in graph data. We refer to the problem of inferring sensitive relationships from anonymized graph data as link reidentification. We propose five different privacy preservation strategies, which vary in terms of the amount of data removed (and hence their utility) and the amount of privacy preserved. We assume the adversary has an accurate predictive model for links, and we show experimentally the success of different link re-identification strategies under varying structural characteristics of the data."
]
}
|
1404.0662
|
1823468964
|
I study two privacy-preserving social network graphs to disclose the types of relationships of connecting edges and provide flexible multigrained access control. To create such graphs, my schemes employ the concept of secretaries and types of relationships. They are significantly more efficient than those using expensive cryptographic primitives. I also show how these schemes can be used for multigrained access control with various options. In addition, I describe how resilient these schemes are to inference of the types of connecting edges.
|
An interesting observation is that most related work here analyzes the disclosure of relationships in a social network @cite_9 , @cite_4 , @cite_0 instead of suggesting new countermeasures. Narayanan and Shmatikov @cite_9 pointed out that anonymization of a graph alone was not sufficient for privacy in a social network graph.
|
{
"cite_N": [
"@cite_0",
"@cite_9",
"@cite_4"
],
"mid": [
"2163596844",
"",
"2063742835"
],
"abstract": [
"Connections in distributed systems, such as social networks, online communities or peer-to-peer networks, form complex graphs. These graphs are of interest to scientists in fields as varied as marketing, epidemiology and psychology. However, knowledge of the graph is typically distributed among a large number of subjects, each of whom knows only a small piece of the graph. Efforts to assemble these pieces often fail because of privacy concerns: subjects refuse to share their local knowledge of the graph. To assuage these privacy concerns, we propose reconstructing the whole graph privately, i.e., in a way that hides the correspondence between the nodes and edges in the graph and the real-life entities and relationships that they represent. We first model the privacy threats posed by the private reconstruction of a distributed graph. Our model takes into account the possibility that malicious nodes may report incorrect information about the graph in order to facilitate later attempts to de-anonymize the reconstructed graph. We then propose protocols to privately assemble the pieces of a graph in ways that mitigate these threats. These protocols severely restrict the ability of adversaries to compromise the privacy of honest subjects.",
"",
"We consider a privacy threat to a social network in which the goal of an attacker is to obtain knowledge of a significant fraction of the links in the network. We formalize the typical social network interface and the information about links that it provides to its users in terms of lookahead. We consider a particular threat where an attacker subverts user accounts to get information about local neighborhoods in the network and pieces them together in order to get a global picture. We analyze, both experimentally and theoretically, the number of user accounts an attacker would need to subvert for a successful attack, as a function of his strategy for choosing users whose accounts to subvert and a function of lookahead provided by the network. We conclude that such an attack is feasible in practice, and thus any social network that wishes to protect the link privacy of its users should take great care in choosing the lookahead of its interface, limiting it to 1 or 2, whenever possible."
]
}
|
1404.0662
|
1823468964
|
I study two privacy-preserving social network graphs to disclose the types of relationships of connecting edges and provide flexible multigrained access control. To create such graphs, my schemes employ the concept of secretaries and types of relationships. They are significantly more efficient than those using expensive cryptographic primitives. I also show how these schemes can be used for multigrained access control with various options. In addition, I describe how resilient these schemes are to inference of the types of connecting edges.
|
Carminati @cite_8 described an access control model in which authorized users, determined based on their relationships, are granted access to another user's social network. They stored encrypted relationship information with relationship keys using public-key cryptography.
|
{
"cite_N": [
"@cite_8"
],
"mid": [
"2095966401"
],
"abstract": [
"Current social networks implement very simple protection mechanisms, according to which a user can slate whether his her personal data, relationships, and resources should be either public or accessible only by him herself (or, at most, by users with whom he she has a direct relationship). This is not enough, in that there is the need of more flexible mechanisms, making a user able to decide which network participants are authorized to access his lier resources and personal information. With this aim, we have proposed an access control model where authorized users are denoted based on the relationships they participate in. Nonetheless, we believe that this is just a first step towards a more comprehensive privacy framework for social networks. Indeed, besides users' resources and personal data, also users' relationships may convey sensitive information. For this reason, in this paper we focus on relationship protection, by proposing a strategy exploiting cryptographic techniques to enfotce a selective dissemination of information concerning relationships across a social network."
]
}
|
1404.0662
|
1823468964
|
I study two privacy-preserving social network graphs to disclose the types of relationships of connecting edges and to provide flexible multigrained access control. To create such graphs, my schemes employ the concept of secretaries and types of relationships. They are significantly more efficient than schemes using expensive cryptographic primitives. I also show how these schemes can be used for multigrained access control with various options. In addition, I describe how resilient these schemes are against inferring the types of connecting edges.
|
Frikken and Srinivas @cite_2 suggested a key management scheme for social networks without a trusted third party. With the derived keys, it protects users' content as well as enforces access control over the social network. A key for each user is derived based on the proximity of the relationship between users. For example, Alice, a friend of Bob, can access Bob's social network with her friend-of-Bob key, and Carol, a friend of a friend of Bob, can access it with her friend-of-friend-of-Bob key. Access is controlled through these relationship keys, each of which corresponds to a different access policy of Bob's. The access control is asynchronous, meaning that it does not require multiple users to be online simultaneously for one user to be granted access to another's content. To protect the social network graph, the scheme encrypts destination nodes as well as the edges in between. One drawback, as the authors mention, is that malicious users could give unauthorized parties access to other users' content by publishing those users' keys.
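The distance-based key hierarchy can be illustrated with a simple hash-chain construction. This is a minimal sketch assuming SHA-256 chaining; it is not Frikken and Srinivas's actual scheme, which additionally provides key indistinguishability and collusion resistance.

```python
import hashlib

def derive_keys(master_secret: bytes, max_distance: int) -> list:
    """Derive one key per social distance by repeated hashing.

    keys[0] is the owner's key, keys[1] the friends' key, keys[2]
    the friends-of-friends' key, and so on.  Anyone holding keys[d]
    can derive keys[d'] for every d' >= d (more distant relationships),
    but cannot invert the hash to reach a closer relationship's key.
    """
    keys = [hashlib.sha256(master_secret).digest()]
    for _ in range(max_distance):
        keys.append(hashlib.sha256(keys[-1]).digest())
    return keys

# Bob hands keys[1] to friends; a friend can derive the
# friend-of-friend key keys[2] but not Bob's own key keys[0].
bob = derive_keys(b"bob-master-secret", max_distance=2)
friend_derived = hashlib.sha256(bob[1]).digest()
assert friend_derived == bob[2]
```

The one-way property of the hash is what enforces the policy: publishing a distance-d key only ever leaks keys for distances d and beyond, which mirrors the collusion drawback noted above.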
|
{
"cite_N": [
"@cite_2"
],
"mid": [
"1986789125"
],
"abstract": [
"In this paper we introduce a novel scheme for key management in social networks that is a first step towards the creation of a private social network. A social network graph (i.e., the graph of friendship relationships) is private and social networks are often used to share content, which may be private, amongst its users. In the status quo, the social networking server has access to both this graph and to all of the content, effectively requiring that it is a trusted third party. The goal of this paper is to produce a mechanism through which users can control how their content is shared with other users, without relying on a trusted third party to manage the social network graph and the users' data. The specific access control model considered here is that users will specify access policies based on distance in the social network; for example some content is visible to friends only, while other content is visible to friends of friends, etc. This access control is enforced via key management. That is for each user, there is a key that only friends should be able to derive, there is a key that both friends of the user and friends of friends can derive, etc. The proposed scheme enjoys the following properties: i) the scheme is asynchronous in that it does not require users to be online at the same time, ii) the scheme provides key indistinguishability (that is if a user is not allowed to derive a key according to the access policy, then that key is indistinguishable from a random value), iii) the scheme is efficient in terms of server storage and key derivation time, and iv) the scheme is collusion resistant."
]
}
|
1404.1124
|
162959154
|
Cloud computing is a newly emerging distributed computing paradigm evolved from Grid computing. Task scheduling is a core research problem of cloud computing, studying how to allocate tasks among physical nodes so that the tasks receive a balanced allocation, each task's execution cost is minimized, or the overall system performance is optimal. Unlike previous models in which the task slices of an independent task are executed sequentially and the target is processing time, we build a model that targets response time, in which the task slices are executed in parallel. We then give its solution with a method based on an improved adjusting entropy function, and design a new task scheduling algorithm. Experimental results show that the response time of our proposed algorithm is much lower than that of the game-theoretic and balanced scheduling algorithms; moreover, compared with the balanced scheduling algorithm, the game-theoretic algorithm is not necessarily superior under parallel execution although its objective function value is better.
|
For balanced task scheduling, @cite_8 @cite_9 @cite_14 proposed models and task scheduling algorithms for distributed systems based on market models and game theory. @cite_6 @cite_5 introduced balanced grid task scheduling models based on non-cooperative games. The QoS-based grid job allocation problem is modeled as a cooperative game, and the structure of the Nash bargaining solution is given, in @cite_24. In @cite_12, Wei and Vasilakos presented a game-theoretic method to schedule dependent cloud computing services under time and cost constraints, in which tasks are divided into subtasks. These works generally take the scheduler or job manager as the player of the game and the total execution time of the tasks as the optimization goal, prove the existence of a Nash equilibrium and give an algorithm for computing it, or model task scheduling as a cooperative game and give the structure of the cooperative solution.
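The non-cooperative view of balanced scheduling can be illustrated with a toy congestion game: each job selfishly picks the machine minimizing its own latency, and best-response dynamics converge to a pure Nash equilibrium. The machine speeds below are hypothetical, and this sketch is not any specific cited algorithm.

```python
def best_response_dynamics(n_jobs, speeds, max_rounds=100):
    """Each job greedily moves to the machine minimizing its own
    latency (load/speed) given everyone else's choice.  In this
    congestion game, best-response dynamics reach a pure Nash
    equilibrium, where no single job can improve by moving."""
    assign = [0] * n_jobs                # start: every job on machine 0
    loads = [0] * len(speeds)
    loads[0] = n_jobs
    for _ in range(max_rounds):
        moved = False
        for j in range(n_jobs):
            cur = assign[j]
            # latency job j would experience on machine m after moving there
            def latency(m):
                return (loads[m] + (0 if m == cur else 1)) / speeds[m]
            best = min(range(len(speeds)), key=latency)
            if latency(best) < latency(cur):
                loads[cur] -= 1
                loads[best] += 1
                assign[j] = best
                moved = True
        if not moved:
            break                        # equilibrium reached
    return assign, loads

# Machine 0 is twice as fast, so at equilibrium it carries twice the jobs.
assign, loads = best_response_dynamics(9, speeds=[2.0, 1.0])
assert loads == [6, 3]
```

At the returned allocation, each machine's latency is 3.0, so no job gains by deviating; this is exactly the equilibrium notion the cited works prove to exist.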
|
{
"cite_N": [
"@cite_14",
"@cite_8",
"@cite_9",
"@cite_6",
"@cite_24",
"@cite_5",
"@cite_12"
],
"mid": [
"2164833497",
"2138978154",
"2132104966",
"2370429505",
"2122144684",
"2129059987",
"2143611771"
],
"abstract": [
"In this paper, we present a game theoretic framework for obtaining a user-optimal load balancing scheme in heterogeneous distributed systems. We formulate the static load balancing problem in heterogeneous distributed systems as a noncooperative game among users. For the proposed noncooperative load balancing game, we present the structure of the Nash equilibrium. Based on this structure we derive a new distributed load balancing algorithm. Finally, the performance of our noncooperative load balancing scheme is compared with that of other existing schemes. The main advantages of our load balancing scheme are the distributed structure, low complexity and optimality of allocation for each user.",
"We applied techniques from game theory to help formulate and analyze solutions to two systems problems: discouraging selfishness in multi-hop wireless networks and enabling cooperation among ISPs in the Internet. It proved difficult to do so. This paper reports on our experiences and explains the issues that we encountered. It describes the ways in which the straightforward use of results from traditional game theory did not fit well with the requirements of our problems. It also identifies an important characteristic of the solutions we did eventually adopt that distinguishes them from those available using game theoretic approaches. We hope that this discussion will help to highlight formulations of game theory which are well-suited for problems involving computer systems.",
"We study the nature of sharing resources in distributed collaborations such as Grids and peer-to-peer systems. By applying the theoretical framework of the multi-person prisoner's dilemma to this resource sharing problem, we show that in the absence of incentive schemes, individual users are apt to hold back resources, leading to decreased system utility. Using both the theoretical framework as well as simulations, we compare and contrast three different incentive schemes aimed at encouraging users to contribute resources. Our results show that soft-incentive schemes are effective in incentivizing autonomous entities to collaborate, leading to increased gains for all participants in the system.",
"At present, grid task scheduling algorithms focus on the 1×n type grid, namely one scheduler and n resources, but neglect the m×n type grid. We built a grid model of the m×n type using an M/M/1 queueing system, and introduced the concept of a task-scheduling Nash equilibrium among multiple schedulers. The optimization objective of each scheduler is the mean completion time per task. The Nash equilibrium is solved by taking advantage of PSO. By simulations, we conclude that the new algorithm is better than the algorithm based on mean scheduling strategies in mean number of finished tasks per unit time, mean network load, and mean load of grid resources.",
"A grid differs from traditional high performance computing systems in the heterogeneity of the computing nodes as well as the communication links that connect the different nodes together. In grids there exist users and service providers. The service providers provide the service for jobs that the users generate. Typically the amount of jobs generated by all the users are more than any single provider can handle alone with any acceptable quality of service (QoS). As such, the service providers need to cooperate and allocate jobs among them so that each is providing an acceptable QoS to their customers. QoS is of particular concerns to service providers as it directly affects customers' satisfaction and loyalty. In this paper, we propose a game theoretic solution to the QoS sensitive, grid job allocation problem. We model the QoS based, grid job allocation problem as a cooperative game and present the structure of the Nash Bargaining Solution. The proposed algorithm is fair to all users and represents a Pareto optimal solution to the QoS objective. One advantage of our scheme is the relatively low overhead and robust performance against inaccuracies in performance prediction information.",
"Load balancing is a very important and complex problem in computational grids. A computational grid differs from traditional high-performance computing systems in the heterogeneity of the computing nodes, as well as the communication links that connect the different nodes together. There is a need to develop algorithms that can capture this complexity yet can be easily implemented and used to solve a wide range of load-balancing scenarios. In this paper, we propose a game-theoretic solution to the grid load-balancing problem. The algorithm developed combines the inherent efficiency of the centralized approach and the fault-tolerant nature of the distributed, decentralized approach. We model the grid load-balancing problem as a noncooperative game, whereby the objective is to reach the Nash equilibrium. Experiments were conducted to show the applicability of the proposed approaches. One advantage of our scheme is the relatively low overhead and robust performance against inaccuracies in performance prediction information.",
"Cloud computing is a natural evolution for data and compute centers with automated systems management, workload balancing, and virtualization technologies. Cloud-based services integrate globally distributed resources into seamless computing platforms. In an open cloud computing framework, scheduling tasks with guaranteeing QoS constrains presents a challenging technical problem. This paper presents a game theoretic method to schedule dependent computational cloud computing services with time and cost constrained. An evolutionary mechanism is designed to fairly and approximately solve the NP-hard scheduling problem."
]
}
|
1404.0400
|
1995396582
|
Representations in the auditory cortex might be based on mechanisms similar to the visual ventral stream; modules for building invariance to transformations and multiple layers for compositionality and selectivity. In this paper we propose the use of such computational modules for extracting invariant and discriminative audio representations. Building on a theory of invariance in hierarchical architectures, we propose a novel, mid-level representation for acoustical signals, using the empirical distributions of projections on a set of templates and their transformations. Under the assumption that, by construction, this dictionary of templates is composed from similar classes, and samples the orbit of variance-inducing signal transformations (such as shift and scale), the resulting signature is theoretically guaranteed to be unique, invariant to transformations and stable to deformations. Modules of projection and pooling can then constitute layers of deep networks, for learning composite representations. We present the main theoretical and computational aspects of a framework for unsupervised learning of invariant audio representations, empirically evaluated on music genre classification.
|
Deep learning and convolutional networks (CNNs) have recently been applied to learning mid- and high-level audio representations, motivated by successes in image and speech recognition. Unsupervised, hierarchical audio representations from Convolutional Deep Belief Networks (CDBNs) have improved music genre classification over MFCC and spectrogram-based features @cite_5. Similarly, Deep Belief Networks (DBNs) were applied to learning music representations in the spectral domain @cite_1, and unsupervised, sparse-coding-based learning was used for audio features @cite_2.
|
{
"cite_N": [
"@cite_5",
"@cite_1",
"@cite_2"
],
"mid": [
"2107789863",
"2295991281",
"2188492526"
],
"abstract": [
"In recent years, deep learning approaches have gained significant interest as a way of building hierarchical representations from unlabeled data. However, to our knowledge, these deep learning approaches have not been extensively studied for auditory data. In this paper, we apply convolutional deep belief networks to audio data and empirically evaluate them on various audio classification tasks. In the case of speech data, we show that the learned features correspond to phones/phonemes. In addition, our feature representations learned from unlabeled audio data show very good performance for multiple audio classification tasks. We hope that this paper will inspire more research on deep learning approaches applied to a wide range of audio recognition tasks.",
"Feature extraction is a crucial part of many MIR tasks. In this work, we present a system that can automatically extract relevant features from audio for a given task. The feature extraction system consists of a Deep Belief Network (DBN) on Discrete Fourier Transforms (DFTs) of the audio. We then use the activations of the trained network as inputs for a non-linear Support Vector Machine (SVM) classifier. In particular, we learned the features to solve the task of genre recognition. The learned features perform significantly better than MFCCs. Moreover, we obtain a classification accuracy of 84.3 on the Tzanetakis dataset, which compares favorably against state-of-the-art genre classifiers using frame-based features. We also applied these same features to the task of auto-tagging. The autotaggers trained with our features performed better than those that were trained with timbral and temporal features.",
"In this work we present a system to automatically learn features from audio in an unsupervised manner. Our method first learns an overcomplete dictionary which can be used to sparsely decompose log-scaled spectrograms. It then trains an efficient encoder which quickly maps new inputs to approximations of their sparse representations using the learned dictionary. This avoids expensive iterative procedures usually required to infer sparse codes. We then use these sparse codes as inputs for a linear Support Vector Machine (SVM). Our system achieves 83.4 accuracy in predicting genres on the GTZAN dataset, which is competitive with current state-of-the-art approaches. Furthermore, the use of a simple linear classifier combined with a fast feature extraction system allows our approach to scale well to large datasets."
]
}
|
1404.0400
|
1995396582
|
Representations in the auditory cortex might be based on mechanisms similar to the visual ventral stream; modules for building invariance to transformations and multiple layers for compositionality and selectivity. In this paper we propose the use of such computational modules for extracting invariant and discriminative audio representations. Building on a theory of invariance in hierarchical architectures, we propose a novel, mid-level representation for acoustical signals, using the empirical distributions of projections on a set of templates and their transformations. Under the assumption that, by construction, this dictionary of templates is composed from similar classes, and samples the orbit of variance-inducing signal transformations (such as shift and scale), the resulting signature is theoretically guaranteed to be unique, invariant to transformations and stable to deformations. Modules of projection and pooling can then constitute layers of deep networks, for learning composite representations. We present the main theoretical and computational aspects of a framework for unsupervised learning of invariant audio representations, empirically evaluated on music genre classification.
|
A mathematical framework that formalizes the computation of invariant and stable representations via cascaded (deep) wavelet transforms has been proposed in @cite_16. In this work, we propose computing an audio representation through biologically plausible modules of projection and pooling, based on a theory of invariance in the ventral stream of the visual cortex @cite_19. The proposed representation can be extended to hierarchical architectures with layers of invariance. An additional advantage is that it can be applied to building invariant representations from arbitrary signals without explicitly modeling the underlying transformations, which can be arbitrarily complex as long as they are smooth.
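The projection-and-pooling module can be sketched numerically: project the signal onto the orbit of each template under a transformation group, then pool by taking the empirical distribution of the projections. The sketch below assumes circular time shifts as the transformation group and histogram pooling; the template set and parameters are illustrative, not the paper's actual dictionary.

```python
import numpy as np

def signature(x, templates, n_bins=8):
    """Empirical distribution (histogram) of the projections of x onto
    the full shift orbit of each template; concatenating the
    per-template histograms gives the invariant signature."""
    x = x / np.linalg.norm(x)
    feats = []
    for t in templates:
        orbit = np.stack([np.roll(t, s) for s in range(len(t))])
        proj = orbit @ x                      # <x, g.t> for every shift g
        hist, _ = np.histogram(proj, bins=n_bins, range=(-1.0, 1.0))
        feats.append(hist / len(t))           # normalize to a distribution
    return np.concatenate(feats)

rng = np.random.default_rng(0)
templates = [t / np.linalg.norm(t) for t in rng.standard_normal((4, 32))]
x = rng.standard_normal(32)

# Shifting the signal permutes the projections over the orbit but leaves
# their empirical distribution, and hence the signature, unchanged.
assert np.allclose(signature(x, templates), signature(np.roll(x, 5), templates))
```

Because the orbit covers the whole (cyclic) group, a shifted input produces the same multiset of projections, which is exactly the invariance property the signature is built on.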
|
{
"cite_N": [
"@cite_19",
"@cite_16"
],
"mid": [
"1650052828",
"1994906459"
],
"abstract": [
"Representations that are invariant to translation, scale and other transformations, can considerably reduce the sample complexity of learning, allowing recognition of new object classes from very few examples, a hallmark of human recognition. Empirical estimates of one-dimensional projections of the distribution induced by a group of affine transformations are proven to represent a unique and invariant signature associated with an image. We show how projections yielding invariant signatures for future images can be learned automatically, and updated continuously, during unsupervised visual experience. A module performing filtering and pooling, like simple and complex cells as proposed by Hubel and Wiesel, can compute such estimates. Under this view, a pooling stage estimates a one-dimensional probability distribution. Invariance from observations through a restricted window is equivalent to a sparsity property w.r.t. a transformation, which yields templates that are a) Gabor for optimal simultaneous invariance to translation and scale or b) very specific for complex, class-dependent transformations such as rotation in depth of faces. Hierarchical architectures consisting of this basic Hubel-Wiesel module inherit its properties of invariance, stability, and discriminability while capturing the compositional organization of the visual world in terms of wholes and parts, and are invariant to complex transformations that may only be locally affine. The theory applies to several existing deep learning convolutional architectures for image and speech recognition. It also suggests that the main computational goal of the ventral stream of visual cortex is to provide a hierarchical representation of new objects/images which is invariant to transformations, stable, and discriminative for recognition, and that this representation may be continuously learned in an unsupervised way during development and natural visual experience.",
"This paper constructs translation-invariant operators on L^2(R^d), which are Lipschitz-continuous to the action of diffeomorphisms. A scattering propagator is a path-ordered product of nonlinear and noncommuting operators, each of which computes the modulus of a wavelet transform. A local integration defines a windowed scattering transform, which is proved to be Lipschitz-continuous to the action of C^2 diffeomorphisms. As the window size increases, it converges to a wavelet scattering transform that is translation invariant. Scattering coefficients also provide representations of stationary processes. Expected values depend upon high-order moments and can discriminate processes having the same power spectrum. Scattering operators are extended on L^2(G), where G is a compact Lie group, and are invariant under the action of G. Combining a scattering on L^2(R^d) and on L^2(SO(d)) defines a translation- and rotation-invariant scattering on L^2(R^d). © 2012 Wiley Periodicals, Inc."
]
}
|
1404.0400
|
1995396582
|
Representations in the auditory cortex might be based on mechanisms similar to the visual ventral stream; modules for building invariance to transformations and multiple layers for compositionality and selectivity. In this paper we propose the use of such computational modules for extracting invariant and discriminative audio representations. Building on a theory of invariance in hierarchical architectures, we propose a novel, mid-level representation for acoustical signals, using the empirical distributions of projections on a set of templates and their transformations. Under the assumption that, by construction, this dictionary of templates is composed from similar classes, and samples the orbit of variance-inducing signal transformations (such as shift and scale), the resulting signature is theoretically guaranteed to be unique, invariant to transformations and stable to deformations. Modules of projection and pooling can then constitute layers of deep networks, for learning composite representations. We present the main theoretical and computational aspects of a framework for unsupervised learning of invariant audio representations, empirically evaluated on music genre classification.
|
Representations of music derived directly from the temporal or spectral domain can be very sensitive to small time and frequency deformations, which affect the signal but not its musical characteristics. To obtain stable representations, pooling (or aggregation) over time/frequency is applied to smooth out such variability. Conventional MFSCs use filters with wider bands at higher frequencies to compensate for the instability to deformations of the high-frequency components of the signal. The scattering transform @cite_17 @cite_4 keeps the low-pass component of cascades of wavelet transforms as a layer-by-layer average over time. Pooling over time or frequency is also crucial for CNNs applied to speech and audio @cite_5 @cite_10.
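The stabilizing effect of temporal pooling can be illustrated on a toy random spectrogram: a one-frame time shift changes the raw frames a lot but barely moves the pooled features. The spectrogram and window size here are hypothetical.

```python
import numpy as np

def mean_pool_time(spec, pool):
    """Average non-overlapping windows of `pool` frames along time."""
    n_freq, n_time = spec.shape
    n_time -= n_time % pool                      # drop any ragged tail
    return spec[:, :n_time].reshape(n_freq, -1, pool).mean(axis=2)

rng = np.random.default_rng(1)
spec = rng.random((40, 100))                     # toy 40-band spectrogram
shifted = np.roll(spec, 1, axis=1)               # small time deformation

raw_err = np.abs(spec - shifted).mean()
pooled_err = np.abs(mean_pool_time(spec, 10)
                    - mean_pool_time(shifted, 10)).mean()
assert pooled_err < raw_err                      # pooling damps the shift
```

Within each 10-frame window the shift only swaps one boundary frame, so the window means change by roughly a tenth of a frame-level difference; this is the same mechanism that makes pooled CNN and scattering features stable to small deformations.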
|
{
"cite_N": [
"@cite_5",
"@cite_10",
"@cite_4",
"@cite_17"
],
"mid": [
"2107789863",
"",
"2093231248",
"2404938947"
],
"abstract": [
"In recent years, deep learning approaches have gained significant interest as a way of building hierarchical representations from unlabeled data. However, to our knowledge, these deep learning approaches have not been extensively studied for auditory data. In this paper, we apply convolutional deep belief networks to audio data and empirically evaluate them on various audio classification tasks. In the case of speech data, we show that the learned features correspond to phones/phonemes. In addition, our feature representations learned from unlabeled audio data show very good performance for multiple audio classification tasks. We hope that this paper will inspire more research on deep learning approaches applied to a wide range of audio recognition tasks.",
"",
"A scattering transform defines a locally translation invariant representation which is stable to time-warping deformation. It extends MFCC representations by computing modulation spectrum coefficients of multiple orders, through cascades of wavelet convolutions and modulus operators. Second-order scattering coefficients characterize transient phenomena such as attacks and amplitude modulation. A frequency transposition invariant representation is obtained by applying a scattering transform along log-frequency. State-of-the-art classification results are obtained for musical genre and phone classification on GTZAN and TIMIT databases, respectively.",
"Mel-frequency cepstral coefficients (MFCCs) are efficient audio descriptors providing spectral energy measurements over short time windows of length 23 ms. These measurements, however, lose non-stationary spectral information such as transients or time-varying structures. It is shown that this information can be recovered as spectral co-occurrence coefficients. Scattering operators compute these coefficients with a cascade of wavelet filter banks and modulus rectifiers. The signal can be reconstructed from scattering coefficients by inverting these wavelet modulus operators. An application to genre classification shows that second-order co-occurrence coefficients improve results obtained by MFCC and Delta-MFCC descriptors."
]
}
|
1404.0425
|
2018710526
|
In this paper, we propose a novel reservation system to study partition information and its transmission over a noise-free Boolean multiaccess channel. The objective of transmission is not to restore the message, but to partition active users into distinct groups so that they can, subsequently, transmit their messages without collision. We first calculate (by mutual information) the amount of information needed for the partitioning without channel effects, and then propose two different coding schemes to obtain achievable transmission rates over the channel. The first one is the brute force method, where the codebook design is based on centralized source coding; the second method uses random coding, where the codebook is generated randomly and optimal Bayesian decoding is employed to reconstruct the partition. Both methods shed light on the internal structure of the partition problem. A novel formulation is proposed for the random coding scheme, in which a sequence of channel operations and interactions induces a hypergraph. The formulation intuitively describes the transmitted information in terms of a strong coloring of this hypergraph. An extended Fibonacci structure is constructed for the simple, but nontrivial, case with two active users. A comparison between these methods and group testing is conducted to demonstrate the potential of our approaches.
|
In addition to conflict resolution problems, there has been extensive work on direct transmission and group testing that considers channel effects. Du and Hwang provide an overview in @cite_36; more specific approaches can be found on superimposed codes for either disjunct or separable purposes @cite_25 @cite_17 @cite_22 @cite_6 @cite_28, on selective families @cite_30, on the broadcasting problem @cite_31, and on other methods @cite_16 @cite_26 @cite_6. It should be noted that, recently, group testing has been reformulated in an information-theoretic framework to study the limits of recovering the IDs of all active nodes over Boolean multiple access channels @cite_27. In this paper we address the transmission of partition information (rather than identification information) over the channel, which is thus different from existing work.
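The Boolean multiaccess setting can be illustrated with a toy noise-free OR channel and exhaustive decoding of the active set. The codebook below is hypothetical and far smaller than any practical superimposed code; it only shows the channel model and the decoding principle.

```python
from itertools import combinations

def or_channel(codewords):
    """Noise-free Boolean MAC: the output is the bitwise OR of the
    codewords of all active users."""
    out = [0] * len(codewords[0])
    for c in codewords:
        out = [a | b for a, b in zip(out, c)]
    return out

def decode(output, codebook, k):
    """Exhaustive decoding: find a k-subset of users whose OR equals
    the observed channel output (optimal for a noise-free channel)."""
    for subset in combinations(range(len(codebook)), k):
        if or_channel([codebook[u] for u in subset]) == output:
            return set(subset)
    return None

# Hypothetical codebook for n = 4 users and t = 6 tests/slots.
codebook = [
    [1, 1, 0, 0, 0, 0],
    [0, 0, 1, 1, 0, 0],
    [0, 0, 0, 0, 1, 1],
    [1, 0, 1, 0, 1, 0],
]
active = {0, 2}
y = or_channel([codebook[u] for u in active])
assert decode(y, codebook, 2) == active
```

Identification, as above, recovers exactly which users are active; the partition problem studied in this paper asks only that the receiver output enough information to split the active users into distinct groups, which requires strictly less than full identification.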
|
{
"cite_N": [
"@cite_30",
"@cite_26",
"@cite_22",
"@cite_36",
"@cite_28",
"@cite_6",
"@cite_27",
"@cite_31",
"@cite_16",
"@cite_25",
"@cite_17"
],
"mid": [
"2066321007",
"",
"",
"1544735579",
"",
"",
"2101627389",
"1965299092",
"2157120295",
"1968160142",
""
],
"abstract": [
"A selection problem is among the basic communication primitives in networks. In this problem at most k participating stations have to broadcast successfully their messages. This problem is especially important in packet radio networks, where simultaneous transmissions of many neighbors result in interference among delivered messages. This work focuses on single-hop radio networks with n stations, also called a multiple access channel, and considers both static and dynamic versions of the selection problem. We construct a family of efficient oblivious deterministic protocols based on selectors, one of them with selection time O(k log(n/k)), and the second explicit construction with selection time O(k polylog n). The first construction matches the lower bound Ω(k log(n/k)) on deterministic oblivious selection, while the second one is the first known explicit construction better than Θ(k^2). In the dynamic case we introduce the model of dynamic requests, called k-streams, which generalizes the static model and the dynamic requests with at most k participants. We prove that each oblivious deterministic protocol has latency Ω(k^2 log k), and on the other hand we prove the existence of an oblivious deterministic protocol with latency O(k^2 log n). In view of the existence of the randomized oblivious protocol with expected latency O(k log(n/k)), this shows that randomization is substantially better than determinism for the dynamic setting. The selection problem can be applied to implement other communication primitives; we demonstrate it on the example of the broadcast problem in multi-hop ad-hoc radio networks. In particular, we design an adaptive deterministic protocol broadcasting in time O(n log D) in every D-hop radio network of n stations.",
"",
"",
"Group testing was first proposed for blood tests, but soon found its way to many industrial applications. Combinatorial group testing studies the combinatorial aspect of the problem and is particularly related to many topics in combinatorics, computer science and operations research. Recently, the idea of combinatorial group testing has been applied to experimental designs, coding, multiaccess computer communication, clone library screening and other fields. This book attempts to cover the theory and applications of combinatorial group testing in one place.",
"",
"",
"The fundamental task of group testing is to recover a small distinguished subset of items from a large population while efficiently reducing the total number of tests (measurements). The key contribution of this paper is in adopting a new information-theoretic perspective on group testing problems. We formulate the group testing problem as a channel coding/decoding problem and derive a single-letter characterization for the total number of tests used to identify the defective set. Although the focus of this paper is primarily on group testing, our main result is generally applicable to other compressive sensing models.",
"Selective families, a weaker variant of superimposed codes [KS64, F92, I97, CR96], have been recently used to design Deterministic Distributed Broadcast (DDB) protocols for unknown radio networks (a radio network is said to be unknown when the nodes know nothing about the network but their own label) [CGGPR00, CGOR00]. We first provide a general almost tight lower bound on the size of selective families. Then, by reverting the selective families - DDB protocols connection, we exploit our lower bound to construct a family of “hard” radio networks (i.e. directed graphs). These networks yield an Ω(n log D) lower bound on the completion time of DDB protocols that is superlinear (in the size n of the network) even for very small maximum eccentricity D of the network, while all the previous lower bounds (e.g. Ω(D log n) [CGGPR00]) are superlinear only when D is almost linear. On the other hand, the previous upper bounds are all superlinear in n independently of the eccentricity D and the maximum in-degree d of the network. We introduce a broadcast technique that exploits selective families in a new way. Then, by combining selective families of almost optimal size with our new broadcast technique, we obtain an O(Dd log^3 n) upper bound that we prove to be almost optimal when d = O(n/D). This exponentially improves over the best known upper bound [CGR00] when D, d = O(polylog n). Furthermore, by comparing our deterministic upper bound with the best known randomized one [BGI87] we obtain a new, rather surprising insight into the real gap between deterministic and randomized protocols. It turns out that this gap is exponential (as discovered in [BGI87]), but only when the network has large maximum in-degree (i.e. d = O(n^a), for some constant a > 0). We then look at the multibroadcast problem on unknown radio networks. A similar connection to that between selective families and (single) broadcast also holds between superimposed codes and multibroadcast.
We in fact combine a variant of our (single) broadcast technique with superimposed codes of almost optimal size available in literature [EFF85, HS87, I97, CHI99]. This yields a multibroadcast protocol having completion time O(Dd2 log3 n). Finally, in order to determine the limits of our multibroadcast technique, we generalize (and improve) the best known lower bound [CR96] on the size of superimposed codes.",
"The tree algorithm, a new basic protocol for accessing a satellite channel by a large number of independent users, is considered. Previous multi-accessing techniques suffered from long delays, low throughput and or congestion instabilities. The ALOHA algorithm, for example, when applied to the Poisson source model, has a maximum throughput of .37 packets slot and is unstable. The tree protocol, under similar conditions, is stable, has a maximum average throughput of .43 and has respectable delay properties. In this protocol, the sources are assigned to the leaves of a tree graph. Message contentions are resolved by systematically moving from node to node through the tree, trying to determine the branches containing the conflicting users. It is shown that the tree protocol is a generalization of TDMA and that an optimum dynamic tree adaptively changes from an essentially random access protocol in light traffic to TDMA in heavy traffic. A major result is that if q . the probability that a user has a packet to transmit, is greater than 1 2 , then TDMA and the optimum tree protocol axe the same. If, on the other hand, q , then the optimum tree protocol is more efficient than TDMA. The maximum average throughput of a system consisting of the optimum tree protocol and a finite number of users is one packet slot. The tree protocol may be applied either to a direct access system or to a demand assignment access system. Consequently, an example is presented comparing the average delay properties of tree and TDMA protocols when applied to direct access and to reservation access systems.",
"A binary superimposed code consists of a set of code words whose digit-by-digit Boolean sums (1 + 1 = 1) enjoy a prescribed level of distinguishability. These codes find their main application in the representation of document attributes within an information retrieval system, but might also be used as a basis for channel assignments to relieve congestion in crowded communications bands. In this paper some basic properties of nonrandom codes of this family are presented, and formulas and bounds relating the principal code parameters are derived. Finally, there are described several such code families based upon (1) q -nary conventional error-correcting codes, (2) combinatorial arrangements, such as block designs and Latin squares, (3) a graphical construction, and (4) the parity-check matrices of standard binary error-correcting codes.",
""
]
}
|
1404.0180
|
2953119206
|
This book chapter introduces the use of Continuous Time Markov Networks (CTMN) to analytically capture the operation of Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) networks. It is of a tutorial nature, and it aims to be an introduction to this topic, providing a clear and easy-to-follow description. To illustrate how CTMN can be used, we introduce a set of representative and cutting-edge scenarios, such as Vehicular Ad-hoc Networks (VANETs), Power Line Communication networks and multiple overlapping Wireless Local Area Networks (WLANs). For each scenario, we describe the specific CTMN, obtain its stationary distribution and compute the throughput achieved by each node in the network. Taking the per-node throughput as a reference, we discuss how the complex interactions between nodes using CSMA/CA have an impact on system performance.
|
The use of CTMN models for the analysis of CSMA/CA networks was originally developed in @cite_8 and further extended in the context of IEEE 802.11 networks in @cite_5 @cite_2 @cite_6 @cite_1 , among others. Although the modeling of the IEEE 802.11 backoff mechanism is less detailed than in the work of Bianchi @cite_12 , it offers greater versatility in modeling a broad range of topologies. Moreover, experimental results in @cite_6 @cite_15 demonstrate that CTMN models, while idealized, provide remarkably accurate throughput estimates for actual IEEE 802.11 systems.
|
{
"cite_N": [
"@cite_8",
"@cite_1",
"@cite_6",
"@cite_2",
"@cite_5",
"@cite_15",
"@cite_12"
],
"mid": [
"2029650413",
"2002627759",
"2032566227",
"2107505399",
"2130986870",
"",
"233825754"
],
"abstract": [
"In this paper, we use a Markov model to develop a product form solution to efficiently analyze the throughput of arbitrary topology multihop packet radio networks that employ a carrier sensing multiple access (CSMA) protocol with perfect capture. We consider both exponential and nonexponential packet length distributions. Our method preserves the dependence between nodes, characteristic of CSMA, and determines the joint probability that nodes are transmitting. The product form analysis provides the basis for an automated algorithm that determines the maximum throughput in networks of size up to 100 radio nodes. Numerical examples for several networks are presented. This model has led to many theoretical and practical extensions. These include determination of conditions for product form analysis to hold, extension to other access protocols, and consideration of acknowledgments.",
"Due to a poor understanding of the interactions among transmitters, wireless multihop networks have commonly been stigmatized as unpredictable in nature. Even elementary questions regarding the throughput limitations of these networks cannot be answered in general. In this paper we investigate the behavior of wireless multihop networks using carrier sense multiple access with collision avoidance (CSMA CA). Our goal is to understand how the transmissions of a particular node affect the medium access, and ultimately the throughput, of other nodes in the network. We introduce a theory which accurately models the behavior of these networks and show that, contrary to popular belief, their performance is easily predictable and can be described by a system of equations. Using the proposed theory, we provide the analytical expressions necessary to fully characterize the capacity region of any wireless CSMA CA multihop network. We show that this region is nonconvex in general and entirely agnostic to the probability distributions of all network parameters, depending only on their expected values.",
"We present a novel modeling approach to derive closed-form throughput expressions for CSMA networks with hidden terminals. The key modeling principle is to break the interdependence of events in a wireless network using conditional expressions that capture the effect of a specific factor each, yet preserve the required dependences when combined together. Different from existing models that use numerical aggregation techniques, our approach is the first to jointly characterize the three main critical factors affecting flow throughput (referred to as hidden terminals, information asymmetry and flow-in-the-middle) within a single analytical expression. We have developed a symbolic implementation of the model, that we use for validation against realistic simulations and experiments with real wireless hardware, observing high model accuracy in the evaluated scenarios. The derived closed-form expressions enable new analytical studies of capacity and protocol performance that would not be possible with prior models. We illustrate this through an application of network utility maximization in complex networks with collisions, hidden terminals, asymmetric interference and flow-in-the-middle instances. Despite that such problematic scenarios make utility maximization a challenging problem, the model-based optimization yields vast fairness gains and an average per-flow throughput gain higher than 500 with respect to 802.11 in the evaluated networks.",
"In multi-hop ad hoc networks, the efficiency of a medium access control protocol under heavy traffic load depends mainly on its ability to schedule a large number of simultaneous non-interfering transmissions. However, as each node has only a local view of the network, it is difficult to globally synchronize transmission times over the whole network. How does the lack of global coordination affect spatial reuse in multi-hop wireless networks? We show that in a de-centralized network the spatial reuse does not benefit from global clock synchronization. On the contrary, we demonstrate that non-slotted protocols using collision avoidance mechanisms can achieve a higher spatial reuse than the corresponding slotted protocols. By means of a simple backoff mechanism, one can thus favor the spontaneous emergence of spatially dense transmission schedules.",
"In this paper, we consider the throughput modelling and fairness provisioning in CSMA CA based ad-hoc networks. The main contributions are: firstly, a throughput model based on Markovian analysis is proposed for the CSMA CA network with a general topology. Simulation investigations are presented to verify its performance. Secondly, fairness issues in CSMA CA networks are discussed based on the throughput model. The origin of unfairness is explained and the trade-off between throughput and fairness is illustrated. Thirdly, throughput approximations based on local topology information are proposed and their performances are investigated. Fourthly, three different fairness metrics are presented and their distributed implementations, based on the throughput approximation, are proposed.",
"",
""
]
}
|
1404.0103
|
1543076593
|
We are concerned with an appropriate mathematical measure of resilience in the face of targeted node attacks for arbitrary degree networks, and subsequently comparing the resilience of different scale-free network models with the proposed measure. We strongly motivate our resilience measure termed vertex attack tolerance (VAT), which is denoted mathematically as @math , where @math is the largest connected component in @math . We attempt a thorough comparison of VAT with several existing resilience notions: conductance, vertex expansion, integrity, toughness, tenacity and scattering number. Our comparisons indicate that for arbitrary degree distributions VAT is the only measure that fully captures both the major bottlenecks of a network and the resulting disconnectivity upon targeted node attacks (both captured in a manner proportional to the size of the attack set). For the case of @math -regular graphs, we prove that @math , where @math is the conductance of the graph @math . Conductance and expansion are well-studied measures of robustness and bottlenecks in the case of regular graphs but fail to capture resilience in the case of highly heterogeneous degree graphs. Regarding the comparison of different scale-free graph models, our experimental results indicate that PLOD graphs with degree distributions identical to BA graphs of the same size exhibit consistently better vertex attack tolerance than the BA-type graphs, although both graph types appear asymptotically resilient for BA generative parameter @math . BA graphs with @math also appear to lack resilience, not only exhibiting very low VAT values, but also great transparency in the identification of the vulnerable node sets, namely the hubs, consistent with well-known previous work.
|
It is important to note, of course, that conductance (in its various normalized and non-normalized forms) is a fundamental property of graphs independent of its applicability to edge-based resilience or resilience in general. Conductance is intimately related to both the mixing times of random walks on graphs @cite_21 and the eigenvalues of the graph's adjacency matrix (or normalized adjacency matrix as befits the situation) @cite_4 . Therefore, one very important application of conductance has involved the analysis of randomized algorithms, particularly for hard counting problems, yielding an important place for the measure in theoretical computer science. Many of the Markov chains considered in such contexts are regular or almost-regular, and it is relatively recently that the heterogeneity of many actual networks has become established across many fields, the relevance of such degree distributions first being established not by theoretical computer scientists but by researchers in other fields.
|
{
"cite_N": [
"@cite_21",
"@cite_4"
],
"mid": [
"2072211488",
"1578099820"
],
"abstract": [
"Abstract The paper studies effective approximate solutions to combinatorial counting and unform generation problems. Using a technique based on the simulation of ergodic Markov chains, it is shown that, for self-reducible structures, almost uniform generation is possible in polynomial time provided only that randomised approximate counting to within some arbitrary polynomial factor is possible in polynomial time. It follows that, for self-reducible structures, polynomial time randomised algorithms for counting to within factors of the form (1 + n − β ) are available either for all β ϵ R or for no β ϵ R . A substantial part of the paper is devoted to investigating the rate of convergence of finite ergodic Markov chains, and a simple but powerful characterisation of rapid convergence for a broad class of chains based on a structural property of the underlying graph is established. Finally, the general techniques of the paper are used to derive an almost uniform generation procedure for labelled graphs with a given degree sequence which is valid over a much wider range of degrees than previous methods: this in turn leads to randomised approximate counting algorithms for these graphs with very good asymptotic behaviour.",
"Eigenvalues and the Laplacian of a graph Isoperimetric problems Diameters and eigenvalues Paths, flows, and routing Eigenvalues and quasi-randomness Expanders and explicit constructions Eigenvalues of symmetrical graphs Eigenvalues of subgraphs with boundary conditions Harnack inequalities Heat kernels Sobolev inequalities Advanced techniques for random walks on graphs Bibliography Index."
]
}
|
1403.8006
|
2402246493
|
With rapidly evolving technology, multicore and manycore processors have emerged as promising architectures to benefit from increasing transistor numbers. The transition towards these parallel architectures makes today an exciting time to investigate challenges in parallel computing. The TILEPro64 is a manycore accelerator, composed of 64 tiles interconnected via multiple 8x8 mesh networks. It contains per-tile caches and supports cache-coherent shared memory by default. In this paper we present a programming technique to take advantage of distributed caching facilities in manycore processors. However, unlike other work in this area, our approach does not use architecture-specific libraries. Instead, we provide the programmer with a novel technique on how to program future Non-Uniform Cache Architecture (NUCA) manycore systems, bearing in mind their caching organisation. We show that our localised programming approach can result in a significant improvement in parallelisation efficiency (speed-up).
|
Data locality in NUCA designs is discussed in @cite_3 . The authors propose an on-line prediction strategy which decides whether to perform a remote access (as in traditional NUCA designs) or to migrate a thread at the instruction level. In @cite_6 , the NUCA characterisation of the TILEPro64 is explored. Based on this characterisation, a home-cache-aware task scheduler is developed to distribute task data on home caches. Although one of the aims of this work is to provide simple interfaces, its memory allocation policy is still a wrapper around the architecture-specific API, which makes the code dependent on the platform. Work similar to ours on sorting, though with different methods and purposes, was performed on the TILEPro64 @cite_0 . The authors target throughput and power efficiency of the radix sort algorithm, employing fine-grained control and various optimisation techniques offered by the Tilera Multicore Components (TMC) API. The idea of the conventional recursive parallel merge sort in OpenMP is borrowed from @cite_7 . In our previous work @cite_4 , we have shown how this algorithm scales on the TILEPro64 compared to the theoretical model.
|
{
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_6",
"@cite_3",
"@cite_0"
],
"mid": [
"",
"286187423",
"2183899676",
"2227841212",
"2019325618"
],
"abstract": [
"",
"While merge sort is well-understood in parallel algorithms theory, relatively little is known of how to implement parallel merge sort with mainstream parallel programming platforms, such as OpenMP and MPI, and run it on mainstream SMP-based systems, such as multi-core computers and multi-core clusters. This is misfortunate because merge sort is not only a fast and stable sort algorithm, but it is also an easy to understand and popular representative of the rich class of divide-and-conquer methods; hence better understanding of merge sort parallelization can contribute to better understanding of divide-and-conquer parallelization in general. In this paper, we investigate three parallel merge-sorts: shared memory merge sort that runs on SMP systems with OpenMP; message-passing merge sort that runs on computer clusters with MPI; and combined hybrid merge sort, with both OpenMP and MPI, that runs on clustered SMPs. We have experimented with our parallel merge sorts on a dedicated Rocks SMP cluster and on a virtual SMP luster in the Amazon Elastic Compute Cloud. In our experiments, shared memory merge sort with OpenMP has achieved best speedup. We believe that we are the first ones to concurrently experiment with - and compare - shared memory, message passing, and hybrid merge sort. Our results can help in the parallelization of specific practical merge sort routines and, even more important, in the practical parallelization of other divide-and-conquer algorithms for mainstream SMP-based systems.",
"Modern manycore processors feature a highly scalable and software-configurable cache hierarchy. For performance, manycore programmers will not only have to efficiently utilize the large number of cores but also understand and configure the cache hierarchy to suit the application. Relief from this manycore programming nightmare can be provided by task-based programming models where programmers parallelize using tasks and an architecture-specific runtime system maps tasks to cores and in addition configures the cache hierarchy. In this paper, we focus on the cache hierarchy of the Tilera TILEPro64 processor which features a software-configurable coherence waypoint called the home cache. We first show the runtime system performance bottleneck of scheduling tasks oblivious to the nature of home caches. We then demonstrate a technique in which the runtime system controls the assignment of home caches to memory blocks and schedules tasks to minimize home cache access penalties. Test results of our technique have shown a significant execution time performance improvement on selected benchmarks leading to the conclusion that by taking processor architecture features into account, task-based programming models can indeed provide continued performance and allow programmers to smoothly transit from the multicore to manycore era.",
"Chip-multiprocessors (CMPs) have become the mainstream chip design in recent years; for scalability reasons, designs with high core counts tend towards tiled CMPs with physically distributed shared caches. This naturally leads to a Non-Uniform Cache Architecture (NUCA) design, where onchip access latencies depend on the physical distances between requesting cores and home cores where the data is cached. Improving data locality is thus key to performance, and several studies have addressed this problem using data replication and data migration. In this paper, we consider another mechanism, hardwarelevel thread migration. This approach, we argue, can better exploit shared data locality for NUCA designs by effectively replacing multiple round-trip remote cache accesses with a smaller number of migrations. High migration costs, however, make it crucial to use thread migrations judiciously; we therefore propose a novel, on-line prediction scheme which decides whether to perform a remote access (as in traditional NUCA designs) or to perform a thread migration at the instruction level. For a set of parallel benchmarks, our thread migration predictor improves the performance by 18 on average and at best by 2.3X over the standard NUCA design that only uses remote accesses.",
"We present an efficient implementation of the radix sort algorithm for the Tilera TILEPro64 processor. The TILEPro64 is one of the first successful commercial manycore processors. It is composed of 64 tiles interconnected through multiple fast Networks-on-chip and features a fully coherent, shared distributed cache. The architecture has a large degree of flexibility, and allows various optimization strategies. We describe how we mapped the algorithm to this architecture. We present an in-depth analysis of the optimizations for each phase of the algorithm with respect to the processor's sustained performance. We discuss the overall throughput reached by our radix sort implementation (up to 132 MK s) and show that it provides comparable or better performance-per-watt with respect to state-of-the art implementations on x86 processors and graphic processing units."
]
}
|
1403.8034
|
2951988735
|
Social media allow for an unprecedented amount of interaction between people online. A fundamental aspect of human social behavior, however, is the tendency of people to associate themselves with like-minded individuals, forming homogeneous social circles both online and offline. In this work, we apply a new model that allows us to distinguish between social ties of varying strength, and to observe evidence of homophily with regard to politics, music, health, residential sector & year in college, within the online and offline social network of 74 college students. We present a multiplex network approach to social tie strength, here applied to mobile communication data - calls, text messages, and co-location - allowing us to dimensionally identify relationships by considering the number of communication channels utilized between students. We find that strong social ties are characterized by maximal use of communication channels, while weak ties by minimal use. We are able to identify 75% of close friendships, 90% of weaker ties, and 90% of Facebook friendships as compared to reported ground truth. We then show that stronger ties exhibit greater profile similarity than weaker ones. Apart from high homogeneity in social circles with respect to political and health aspects, we observe strong homophily driven by music, residential sector and year in college. Despite Facebook friendship being highly dependent on residence and year, exposure to less homogeneous content can be found in the online rather than the offline social circles of students, most notably in political and music aspects.
|
Multiplexity has been explored in a wide range of systems from global air-transportation @cite_22 to massive online multiplayer games @cite_10 . A comprehensive review of multiplex (multilayer) network models can be found in @cite_2 . While multiplexity in most systems creates additional complexity such as layer interdependence, in social networks multiplexity can be used to define the strength of a tie.
|
{
"cite_N": [
"@cite_10",
"@cite_22",
"@cite_2"
],
"mid": [
"2072701859",
"2066132582",
""
],
"abstract": [
"The capacity to collect fingerprints of individuals in online media has revolutionized the way researchers explore human society. Social systems can be seen as a nonlinear superposition of a multitude of complex social networks, where nodes represent individuals and links capture a variety of different social relations. Much emphasis has been put on the network topology of social interactions, however, the multidimensional nature of these interactions has largely been ignored, mostly because of lack of data. Here, for the first time, we analyze a complete, multirelational, large social network of a society consisting of the 300,000 odd players of a massive multiplayer online game. We extract networks of six different types of one-to-one interactions between the players. Three of them carry a positive connotation (friendship, communication, trade), three a negative (enmity, armed aggression, punishment). We first analyze these types of networks as separate entities and find that negative interactions differ from positive interactions by their lower reciprocity, weaker clustering, and fatter-tail degree distribution. We then explore how the interdependence of different network types determines the organization of the social system. In particular, we study correlations and overlap between different types of links and demonstrate the tendency of individuals to play different roles in different networks. As a demonstration of the power of the approach, we present the first empirical large-scale verification of the long-standing structural balance theory, by focusing on the specific multiplex network of friendship and enmity relations.",
"We study the dynamics of the European Air Transport Network by using a multiplex network formalism. We will consider the set of flights of each airline as an interdependent network and we analyze the resilience of the system against random flight failures in the passenger’s rescheduling problem. A comparison between the single-plex approach and the corresponding multiplex one is presented illustrating that the multiplexity strongly affects the robustness of the European Air Network.",
""
]
}
|
1403.7791
|
2952211581
|
In this paper we present the design and implementation of POSH, an Open-Source implementation of the OpenSHMEM standard. We present a model for its communications, and prove some properties on the memory model defined in the OpenSHMEM specification. We present some performance measurements of the communication library featured by POSH and compare them with an existing one-sided communication library. POSH can be downloaded from this http URL .
|
Some implementations also exist for high-performance RDMA networks: the Portals project has been working on specific support for OpenSHMEM in its communication library @cite_8 . Some other implementations are built on top of MPI implementations over RDMA networks, such as @cite_5 for Quadrics networks or @cite_12 over InfiniBand networks.
|
{
"cite_N": [
"@cite_5",
"@cite_12",
"@cite_8"
],
"mid": [
"2138907145",
"",
"2169240096"
],
"abstract": [
"Previous implementations of MPICH using the CraySHMEM interface existed for the Cray T3 series of machines, but these implementations were abandoned after the T3 series was discontinued. However, support for the Cray SHMEM programming interface has continued on other platforms, including commodity clusters built using the Quadrics QsNet network. In this paper, we describe a design for MPI that overcomes some of the limitations of the previous implementations. We compare the performance of the SHMEM MPI implementation with the native implementation for Quadrics QsNet. Results show that our implementation is faster for certain message sizes for some micro-benchmarks.",
"",
"The looming challenges of exascale computing are generating renewed interest in the Partitioned Global Address Space (PGAS) parallel programming model. From a high-performance interconnect perspective, the one-sided communication model in PGAS addresses some of the challenges that could inhibit the viability of traditional two-sided MPI message passing on millions of cores. The OpenSHMEM project is a recent effort at standardizing a lightweight one-sided communication interface to enable the development of highly scalable PGAS applications. The Portals data movement layer is a low-level one-sided interface that has successfully supported several higher-level one-sided and two-sided interfaces, including PGAS network transport layers. This paper describes the design and implementation of OpenSHMEM on the latest generation of the Portals interface, including a description of how Portals has evolved to maximize performance for PGAS-style communication operations."
]
}
|
1403.7591
|
2112856309
|
Concept-based video representation has proven to be effective in complex event detection. However, existing methods either manually design concepts or directly adopt concept libraries not specifically designed for events. In this paper, we propose to build Concept Bank, the largest concept library, consisting of 4,876 concepts specifically designed to cover 631 real-world events. To construct the Concept Bank, we first gather a comprehensive event collection from WikiHow, a collaborative writing project that aims to build the world's largest manual for any possible How-To event. For each event, we then search Flickr and discover relevant concepts from the tags of the returned images. We train a Multiple Kernel Linear SVM for each discovered concept as a concept detector in Concept Bank. We organize the concepts into a five-layer tree structure, in which the higher-level nodes correspond to the event categories while the leaf nodes are the event-specific concepts discovered for each event. Based on such tree ontology, we develop a semantic matching method to select relevant concepts for each textual event query, and then apply the corresponding concept detectors to generate concept-based video representations. We use TRECVID Multimedia Event Detection 2013 and Columbia Consumer Video open-source event definitions and videos as our test sets and show very promising results on two video event detection tasks: event modeling over concept space and zero-shot event retrieval. To the best of our knowledge, this is the largest concept library covering the largest number of real-world events.
|
Video event detection has become an important research area in the computer vision literature. Natarajan @cite_5 investigated the multimodal fusion of low-level video features and the extracted videotext information in video content, and achieved good detection performance. Ye @cite_7 discovered bi-modal audio-visual codewords and leveraged the joint patterns across the audio and visual space to boost event detection performance. Duan @cite_22 incorporated web videos crawled from YouTube to alleviate the shortage of training videos for an event, and developed a cross-domain video event detection model. Tang @cite_23 developed a large-margin framework to exploit the latent temporal structure in successive clips of a long event video. These excellent works focus on modeling events with sophisticated statistical models or fusing multimodal information. However, none of them can reveal the rich semantics in event videos.
|
{
"cite_N": [
"@cite_5",
"@cite_23",
"@cite_22",
"@cite_7"
],
"mid": [
"2141939040",
"2142258645",
"2160039895",
"2083270451"
],
"abstract": [
"Combining multiple low-level visual features is a proven and effective strategy for a range of computer vision tasks. However, limited attention has been paid to combining such features with information from other modalities, such as audio and videotext, for large scale analysis of web videos. In our work, we rigorously analyze and combine a large set of low-level features that capture appearance, color, motion, audio and audio-visual co-occurrence patterns in videos. We also evaluate the utility of high-level (i.e., semantic) visual information obtained from detecting scene, object, and action concepts. Further, we exploit multimodal information by analyzing available spoken and videotext content using state-of-the-art automatic speech recognition (ASR) and videotext recognition systems. We combine these diverse features using a two-step strategy employing multiple kernel learning (MKL) and late score level fusion methods. Based on the TRECVID MED 2011 evaluations for detecting 10 events in a large benchmark set of ∼45000 videos, our system showed the best performance among the 19 international teams.",
"In this paper, we tackle the problem of understanding the temporal structure of complex events in highly varying videos obtained from the Internet. Towards this goal, we utilize a conditional model trained in a max-margin framework that is able to automatically discover discriminative and interesting segments of video, while simultaneously achieving competitive accuracies on difficult detection and recognition tasks. We introduce latent variables over the frames of a video, and allow our algorithm to discover and assign sequences of states that are most discriminative for the event. Our model is based on the variable-duration hidden Markov model, and models durations of states in addition to the transitions between states. The simplicity of our model allows us to perform fast, exact inference using dynamic programming, which is extremely important when we set our sights on being able to process a very large number of videos quickly and efficiently. We show promising results on the Olympic Sports dataset [16] and the 2011 TRECVID Multimedia Event Detection task [18]. We also illustrate and visualize the semantic understanding capabilities of our model.",
"Cross-domain learning methods have shown promising results by leveraging labeled patterns from auxiliary domains to learn a robust classifier for target domain, which has a limited number of labeled samples. To cope with the tremendous change of feature distribution between different domains in video concept detection, we propose a new cross-domain kernel learning method. Our method, referred to as Domain Transfer SVM (DTSVM), simultaneously learns a kernel function and a robust SVM classifier by minimizing both the structural risk functional of SVM and the distribution mismatch of labeled and unlabeled samples between the auxiliary and target domains. Comprehensive experiments on the challenging TRECVID corpus demonstrate that DTSVM outperforms existing cross-domain learning and multiple kernel learning methods.",
"Joint audio-visual patterns often exist in videos and provide strong multi-modal cues for detecting multimedia events. However, conventional methods generally fuse the visual and audio information only at a superficial level, without adequately exploring deep intrinsic joint patterns. In this paper, we propose a joint audio-visual bi-modal representation, called bi-modal words. We first build a bipartite graph to model relation across the quantized words extracted from the visual and audio modalities. Partitioning over the bipartite graph is then applied to construct the bi-modal words that reveal the joint patterns across modalities. Finally, different pooling strategies are employed to re-quantize the visual and audio words into the bi-modal words and form bi-modal Bag-of-Words representations that are fed to subsequent multimedia event classifiers. We experimentally show that the proposed multi-modal feature achieves statistically significant performance gains over methods using individual visual and audio features alone and alternative multi-modal fusion methods. Moreover, we found that average pooling is the most suitable strategy for bi-modal feature generation."
]
}
|
1403.7591
|
2112856309
|
Concept-based video representation has proven to be effective in complex event detection. However, existing methods either manually design concepts or directly adopt concept libraries not specifically designed for events. In this paper, we propose to build Concept Bank, the largest concept library consisting of 4,876 concepts specifically designed to cover 631 real-world events. To construct the Concept Bank, we first gather a comprehensive event collection from WikiHow, a collaborative writing project that aims to build the world's largest manual for any possible How-To event. For each event, we then search Flickr and discover relevant concepts from the tags of the returned images. We train a Multiple Kernel Linear SVM for each discovered concept as a concept detector in Concept Bank. We organize the concepts into a five-layer tree structure, in which the higher-level nodes correspond to the event categories while the leaf nodes are the event-specific concepts discovered for each event. Based on such tree ontology, we develop a semantic matching method to select relevant concepts for each textual event query, and then apply the corresponding concept detectors to generate concept-based video representations. We use TRECVID Multimedia Event Detection 2013 and Columbia Consumer Video open source event definitions and videos as our test sets and show very promising results on two video event detection tasks: event modeling over concept space and zero-shot event retrieval. To the best of our knowledge, this is the largest concept library covering the largest number of real-world events.
|
There are some recent works that try to perform event detection with semantic concepts. Yang @cite_18 applied deep belief nets to group a large number of event video shots into shot clusters, and then treated each cluster center as a data-driven concept. Each video is mapped onto the cluster centers and encoded into a concept-based representation. However, such data-driven concepts do not convey any semantic information and hence cannot be utilized for high-level semantic understanding. Liu @cite_1 manually defined concepts present in event videos and categorized them into "object", "scene" and "action". They annotated the presence of each concept in training videos and then built individual concept detectors. Nevertheless, as aforementioned, this requires too much manual effort and is not applicable to real-world large-scale video event detection tasks. Different from these prior works, we focus on automatically discovering potential concepts, with interpretable semantic meaning, present in any possible events.
|
{
"cite_N": [
"@cite_18",
"@cite_1"
],
"mid": [
"2119246739",
"2002657139"
],
"abstract": [
"Automatic event detection in a large collection of unconstrained videos is a challenging and important task. The key issue is to describe long complex video with high level semantic descriptors, which should find the regularity of events in the same category while distinguish those from different categories. This paper proposes a novel unsupervised approach to discover data-driven concepts from multi-modality signals (audio, scene and motion) to describe high level semantics of videos. Our methods consists of three main components: we first learn the low-level features separately from three modalities. Secondly we discover the data-driven concepts based on the statistics of learned features mapped to a low dimensional space using deep belief nets (DBNs). Finally, a compact and robust sparse representation is learned to jointly model the concepts from all three modalities. Extensive experimental results on large in-the-wild dataset show that our proposed method significantly outperforms state-of-the-art methods.",
"We propose to use action, scene and object concepts as semantic attributes for classification of video events in InTheWild content, such as YouTube videos. We model events using a variety of complementary semantic attribute features developed in a semantic concept space. Our contribution is to systematically demonstrate the advantages of this concept-based event representation (CBER) in applications of video event classification and understanding. Specifically, CBER has better generalization capability, which enables to recognize events with a few training examples. In addition, CBER makes it possible to recognize a novel event without training examples (i.e., zero-shot learning). We further show our proposed enhanced event model can further improve the zero-shot learning. Furthermore, CBER provides a straightforward way for event recounting understanding. We use the TRECVID Multimedia Event Detection (MED11) open source event definitions and datasets as our test bed and show results on over 1400 hours of videos."
]
}
|
1403.7591
|
2112856309
|
Concept-based video representation has proven to be effective in complex event detection. However, existing methods either manually design concepts or directly adopt concept libraries not specifically designed for events. In this paper, we propose to build Concept Bank, the largest concept library consisting of 4,876 concepts specifically designed to cover 631 real-world events. To construct the Concept Bank, we first gather a comprehensive event collection from WikiHow, a collaborative writing project that aims to build the world's largest manual for any possible How-To event. For each event, we then search Flickr and discover relevant concepts from the tags of the returned images. We train a Multiple Kernel Linear SVM for each discovered concept as a concept detector in Concept Bank. We organize the concepts into a five-layer tree structure, in which the higher-level nodes correspond to the event categories while the leaf nodes are the event-specific concepts discovered for each event. Based on such tree ontology, we develop a semantic matching method to select relevant concepts for each textual event query, and then apply the corresponding concept detectors to generate concept-based video representations. We use TRECVID Multimedia Event Detection 2013 and Columbia Consumer Video open source event definitions and videos as our test sets and show very promising results on two video event detection tasks: event modeling over concept space and zero-shot event retrieval. To the best of our knowledge, this is the largest concept library covering the largest number of real-world events.
|
Notably, we applied our previous work on concept discovery from Internet images @cite_30 as a building block of our concept library construction process. That work aims at discovering concepts from a pre-specified set of event definitions within a known domain and studies the large beneficial impact of such event-specific concepts on event retrieval, zero-example detection, and summarization. In contrast, this paper focuses on discovering semantic concepts from an external event knowledge base, WikiHow, and uses its rich ontological hierarchy to organize the large number of concepts learned from the Web. Such ontological structure plays an important role in handling novel events that have not been seen in the learning stage, as confirmed by the significant performance gains reported in the experiments.
|
{
"cite_N": [
"@cite_30"
],
"mid": [
"2048261712"
],
"abstract": [
"Analysis and detection of complex events in videos require a semantic representation of the video content. Existing video semantic representation methods typically require users to pre-define an exhaustive concept lexicon and manually annotate the presence of the concepts in each video, which is infeasible for real-world video event detection problems. In this paper, we propose an automatic semantic concept discovery scheme by exploiting Internet images and their associated tags. Given a target event and its textual descriptions, we crawl a collection of images and their associated tags by performing text based image search using the noun and verb pairs extracted from the event textual descriptions. The system first identifies the candidate concepts for an event by measuring whether a tag is a meaningful word and visually detectable. Then a concept visual model is built for each candidate concept using a SVM classifier with probabilistic output. Finally, the concept models are applied to generate concept based video representations. We use the TRECVID Multimedia Event Detection (MED) 2013 as our video test set and crawl 400K Flickr images to automatically discover 2, 000 visual concepts. We show significant performance gains of the proposed concept discovery method over different video event detection tasks including supervised event modeling over concept space and semantic based zero-shot retrieval without training examples. Importantly, we show the proposed method of automatic concept discovery outperforms other well-known concept library construction approaches such as Classemes and ImageNet by a large margin (228 ) in zero-shot event retrieval. Finally, subjective evaluation by humans also confirms clear superiority of the proposed method in discovering concepts for event representation."
]
}
|
1403.7591
|
2112856309
|
Concept-based video representation has proven to be eective in complex event detection. However, existing methods either manually design concepts or directly adopt concept libraries not specically designed for events. In this paper, we propose to build Concept Bank, the largest concept library consisting of 4; 876 concepts specically designed to cover 631 real-world events. To construct the Concept Bank, we rst gather a comprehensive event collection from WikiHow, a collaborative writing project that aims to build the world’s largest manual for any possible How-To event. For each event, we then search Flickr and discover relevant concepts from the tags of the returned images. We train a Multiple Kernel Linear SVM for each discovered concept as a concept detector in Concept Bank. We organize the concepts into a ve-layer tree structure, in which the higher-level nodes correspond to the event categories while the leaf nodes are the event-specic concepts discovered for each event. Based on such tree ontology, we develop a semantic matching method to select relevant concepts for each textual event query, and then apply the corresponding concept detectors to generate concept-based video representations. We use TRECVID Multimedia Event Detection 2013 and Columbia Consumer Video open source event denitions and videos as our test sets and show very promising results on two video event detection tasks: event modeling over concept space and zero-shot event retrieval. To the best of our knowledge, this is the largest concept library covering the largest number of real-world events.
|
There are some existing concept libraries built for different purposes. Object Bank @cite_6 consists of @math objects taken from the intersection of the most frequent @math objects in the image datasets LabelMe @cite_21 and ImageNet @cite_17 . Each object detector is trained with @math images and their object bounding boxes. Classemes @cite_20 is a concept library comprising @math concepts defined from the Large Scale Concept Ontology for Multimedia (LSCOM) @cite_9 . Each concept detector is trained with @math images from a search engine using the LP- @math multiple kernel learning algorithm. Action Bank @cite_3 is built for high-level representation of human activities in video and contains @math template actions. Although these libraries achieve good performance on different tasks, they are not designed for video events. In our work, we build a concept library specifically designed for video events, whose concept ontology and models are automatically discovered from the Web.
|
{
"cite_N": [
"@cite_9",
"@cite_21",
"@cite_6",
"@cite_3",
"@cite_20",
"@cite_17"
],
"mid": [
"2089150756",
"2110764733",
"2169177311",
"2063153269",
"2122528955",
"2108598243"
],
"abstract": [
"As increasingly powerful techniques emerge for machine tagging multimedia content, it becomes ever more important to standardize the underlying vocabularies. Doing so provides interoperability and lets the multimedia community focus ongoing research on a well-defined set of semantics. This paper describes a collaborative effort of multimedia researchers, library scientists, and end users to develop a large standardized taxonomy for describing broadcast news video. The large-scale concept ontology for multimedia (LSCOM) is the first of its kind designed to simultaneously optimize utility to facilitate end-user access, cover a large semantic space, make automated extraction feasible, and increase observability in diverse broadcast news video data sets",
"We seek to build a large collection of images with ground truth labels to be used for object detection and recognition research. Such data is useful for supervised learning and quantitative evaluation. To achieve this, we developed a web-based tool that allows easy image annotation and instant sharing of such annotations. Using this annotation tool, we have collected a large dataset that spans many object categories, often containing multiple instances over a wide variety of images. We quantify the contents of the dataset and compare against existing state of the art datasets used for object recognition and detection. Also, we show how to extend the dataset to automatically enhance object labels with WordNet, discover object parts, recover a depth ordering of objects in a scene, and increase the number of labels using minimal user supervision and images from the web.",
"Robust low-level image features have been proven to be effective representations for a variety of visual recognition tasks such as object recognition and scene classification; but pixels, or even local image patches, carry little semantic meanings. For high level visual tasks, such low-level image representations are potentially not enough. In this paper, we propose a high-level image representation, called the Object Bank, where an image is represented as a scale-invariant response map of a large number of pre-trained generic object detectors, blind to the testing dataset or visual task. Leveraging on the Object Bank representation, superior performances on high level visual recognition tasks can be achieved with simple off-the-shelf classifiers such as logistic regression and linear SVM. Sparsity algorithms make our representation more efficient and scalable for large scene datasets, and reveal semantically meaningful feature patterns.",
"Activity recognition in video is dominated by low- and mid-level features, and while demonstrably capable, by nature, these features carry little semantic meaning. Inspired by the recent object bank approach to image representation, we present Action Bank, a new high-level representation of video. Action bank is comprised of many individual action detectors sampled broadly in semantic space as well as viewpoint space. Our representation is constructed to be semantically rich and even when paired with simple linear SVM classifiers is capable of highly discriminative performance. We have tested action bank on four major activity recognition benchmarks. In all cases, our performance is better than the state of the art, namely 98.2 on KTH (better by 3.3 ), 95.0 on UCF Sports (better by 3.7 ), 57.9 on UCF50 (baseline is 47.9 ), and 26.9 on HMDB51 (baseline is 23.2 ). Furthermore, when we analyze the classifiers, we find strong transfer of semantics from the constituent action detectors to the bank classifier.",
"We introduce a new descriptor for images which allows the construction of efficient and compact classifiers with good accuracy on object category recognition. The descriptor is the output of a large number of weakly trained object category classifiers on the image. The trained categories are selected from an ontology of visual concepts, but the intention is not to encode an explicit decomposition of the scene. Rather, we accept that existing object category classifiers often encode not the category per se but ancillary image characteristics; and that these ancillary characteristics can combine to represent visual classes unrelated to the constituent categories' semantic meanings. The advantage of this descriptor is that it allows object-category queries to be made against image databases using efficient classifiers (efficient at test time) such as linear support vector machines, and allows these queries to be for novel categories. Even when the representation is reduced to 200 bytes per image, classification accuracy on object category recognition is comparable with the state of the art (36 versus 42 ), but at orders of magnitude lower computational cost.",
"The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond."
]
}
|
1403.7291
|
2541997769
|
Efficiency in embedded systems is paramount to achieve high performance while consuming less area and power. Processors in embedded systems have to be designed carefully to achieve such design constraints. Application Specific Instruction set Processors (ASIPs) exploit the nature of applications to design an optimal instruction set. Despite not being general enough to execute any application, ASIPs are highly preferred in the embedded systems industry, where devices are produced to satisfy a certain type of application domain(s) (either intra-domain or inter-domain). Typically, ASIPs are designed from a base processor and functionalities are added for applications. This paper studies multi-application ASIPs and their instruction sets, extensively analysing the instructions for inter-domain and intra-domain designs. The metrics analysed are the reusable instructions and the extra cost to add a certain application. A wide range of applications from various benchmarks (MiBench, MediaBench and SPEC2006) and domains are analysed for two different architectures (ARM-Thumb and PISA). Our study shows that intra-domain applications contain a larger number of common instructions, whereas inter-domain applications have very few common instructions, regardless of the architecture (and therefore the ISA).
|
One approach for generating instruction sets is to consider the datapath model. In 1994, @cite_15 showed how instruction selection for ASIPs can be performed by generating a combined instruction set and datapath model from the instruction set. Operation bundling was performed on the model with an abstract datapath, though this methodology still requires refinement and testing. Kucukcakar @cite_19 later came up with an architecture and a co-design methodology, based on a similar concept, to improve the performance of embedded system applications through instruction-set customisation. Although these methodologies improve the performance of ASIPs, they fail to consider design constraints such as area, power consumption and NRE cost.
|
{
"cite_N": [
"@cite_19",
"@cite_15"
],
"mid": [
"2139159264",
"2129540432"
],
"abstract": [
"A well-known challenge during processor design is to obtain the best possible results for a typical target application domain that is generally described as a set of benchmarks. Obtaining the best possible result in turn becomes a complex tradeoff between the generality of the processor and the physical characteristics. A custom instruction to perform a task can result in significant improvements for an application, but generally, at the expense of some overhead for all other applications. In the recent years, Application-Specific Instruction-Set Processors (ASIP) have gained popularity in production chips as well as in the research community. In this paper, we present a unique architecture and methodology to design ASIPs in the embedded controller domain by customizing an existing processor instruction set and architecture rather than creating an entirely new ASIP tuned to a benchmark.",
"Application Specific Instruction set Processors (ASIPs) are field or mask programmable processors of which the architecture and instruction set are optimised to a specific application domain. ASIPs offer a high degree of flexibility and are therefore increasingly being used in competitive markets like telecommunications. However, adequate CAD techniques for the design and programming of ASIPs are missing hitherto. An interactive approach for the definition of optimised microinstruction sets of ASIPs is presented. A second issue is a method for instruction selection when generating code for a predefined ASIP. A combined instruction set and data-path model is generated, onto which the application is mapped. >"
]
}
|
1403.7291
|
2541997769
|
Efficiency in embedded systems is paramount to achieve high performance while consuming less area and power. Processors in embedded systems have to be designed carefully to achieve such design constraints. Application Specific Instruction set Processors (ASIPs) exploit the nature of applications to design an optimal instruction set. Despite not being general enough to execute any application, ASIPs are highly preferred in the embedded systems industry, where devices are produced to satisfy a certain type of application domain(s) (either intra-domain or inter-domain). Typically, ASIPs are designed from a base processor and functionalities are added for applications. This paper studies multi-application ASIPs and their instruction sets, extensively analysing the instructions for inter-domain and intra-domain designs. The metrics analysed are the reusable instructions and the extra cost to add a certain application. A wide range of applications from various benchmarks (MiBench, MediaBench and SPEC2006) and domains are analysed for two different architectures (ARM-Thumb and PISA). Our study shows that intra-domain applications contain a larger number of common instructions, whereas inter-domain applications have very few common instructions, regardless of the architecture (and therefore the ISA).
|
One of the early works on methodologies to maximise ASIP performance under design constraints, such as area, power consumption and NRE cost, is @cite_12 . The authors proposed a rapid instruction selection approach, choosing from a library of pre-designed specific instructions to be mapped onto a set of pre-fabricated co-processor functional units. As a result, they were able to significantly increase application performance while satisfying area constraints. The methodology uses a combination of simulation, estimation and a pre-characterised library of instructions to select the appropriate co-processors and instructions. @cite_9 proposed a new formalisation and an algorithm that consider functional module sharing. This method allows designers to predict the performance of their designs before implementation, which is an important feature for producing a high-quality design in reasonable time. In addition, an efficient algorithm for the automatic selection of new application-specific instructions under hardware resource constraints is introduced in @cite_18 . The main drawback of this algorithm is its un-optimised Very High Speed Integrated Circuit Hardware Description Language (VHDL) model.
|
{
"cite_N": [
"@cite_9",
"@cite_18",
"@cite_12"
],
"mid": [
"2150601807",
"2131264876",
"2144981359"
],
"abstract": [
"This paper describes a formal method that selects the instruction set of an ASIP (application specific integrated processor) that maximizes the chip performance under the constraints of chip area and power consumption. Our contribution includes a new formalization and algorithm that considers the functional module sharing in the problem of instruction set optimization. This problem was not addressed in the previous work and considering it leads to an efficient implementation of the selected instructions. The proposed method also enables designers to predict the performance of their designs before implementing them, which is an important feature for producing a high quality design in reasonable time.",
"In this paper, we present a general and an efficient algorithm for automatic selection of new application-specific instructions under hardware resources constraints. The instruction selection is formulated as an ILP problem and efficient solvers can be used for finding the optimal solution. An important feature of our algorithm is that it is not restricted to basic-block level nor does it impose any limitation on the number of the newly added instructions or on the number of the inputs outputs of these instructions. The presented results show that a significant overall application speedup is achieved even for large kernels (for ADPCM decoder the speedup ranges from x1.2 to x3.7) and that our algorithm compares well with other state-of-art algorithms for automatic instruction set extensions.",
"We present a methodology that maximizes the performance of Tensilica based Application Specific Instruction-set Processor (ASIP) through instruction selection when an area constraint is given. Our approach rapidly selects from a set of pre-fabricated coprocessors functional units from our library of pre-designed specific instructions (to evaluate our technology we use the Tensilica platform). As a result, we significantly increase application performance while area constraints are satisfied. Our methodology uses a combination of simulation, estimation and a pre-characterised library of instructions, to select the appropriate co-processors and instructions. We report that by selecting the appropriate coprocessors functional units and specific TIE instructions, the total execution time of complex applications (we study a voice encoder decoder), an application's performance can be reduced by up to 85 compared to the base implementation. Our estimator used in the system takes typically less than a second to estimate, with an average error rate of 4 (as compared to full simulation, which takes 45 minutes). The total selection process using our methodology takes 3-4 hours, while a full design space exploration using simulation would take several days."
]
}
|
1403.7291
|
2541997769
|
Efficiency in embedded systems is paramount to achieve high performance while consuming less area and power. Processors in embedded systems have to be designed carefully to achieve such design constraints. Application Specific Instruction set Processors (ASIPs) exploit the nature of applications to design an optimal instruction set. Despite not being general enough to execute any application, ASIPs are highly preferred in the embedded systems industry, where devices are produced to satisfy a certain type of application domain(s) (either intra-domain or inter-domain). Typically, ASIPs are designed from a base processor and functionalities are added for applications. This paper studies multi-application ASIPs and their instruction sets, extensively analysing the instructions for inter-domain and intra-domain designs. The metrics analysed are the reusable instructions and the extra cost to add a certain application. A wide range of applications from various benchmarks (MiBench, MediaBench and SPEC2006) and domains are analysed for two different architectures (ARM-Thumb and PISA). Our study shows that intra-domain applications contain a larger number of common instructions, whereas inter-domain applications have very few common instructions, regardless of the architecture (and therefore the ISA).
|
Researchers have already proposed automated techniques in the ASIP design process to achieve the best performance under certain design constraints. @cite_2 presented a complete tool-chain for automated instruction set extension, micro-architecture optimisation and complex instruction selection, based on the GCC compiler. Huang and Despain in @cite_20 proposed a single formulation combining the problems of instruction set design, micro-architecture design and instruction set mapping. The formulation receives as inputs the application, architecture template, objective function and design constraints, and generates as output the instruction set for the application. Similarly, @cite_17 presented a design automation approach, referred to as Automatic Synthesis of Instruction-set Architectures (ASIA), to synthesise instruction sets from application benchmarks. The problem of designing instruction sets was formulated as a modified scheduling problem in @cite_17 . In @cite_10 , a design flow was proposed to automatically generate Application-Specific Instructions (ASIs) to improve performance with memory access considerations. The ASIs are selected based not only on the instruction latency but also on the memory access.
|
{
"cite_N": [
"@cite_10",
"@cite_17",
"@cite_20",
"@cite_2"
],
"mid": [
"2137344683",
"1604562315",
"2062999595",
"2595734385"
],
"abstract": [
"To optimize system performance for a specific target application, embedded system designers may add some new instructions, called application-specific instructions (ASIs), via an automatic design flow. In the past, most application-specific instruction-set processor (ASIP) research focused on reducing instruction latency to improve performance, regardless of the impact of memory access. In this paper, a design flow is proposed to automatically generate ASIs and to compare the performance between considering register transferring and disregarding it. The experiment results show the proposed approach can achieve up to 14% performance improvement and 10% memory access reduction compared to no register-transferring consideration.",
"ASIPs are designed specifically for a particular application or a set of applications. Their instruction sets must be carefully tailored to provide high performance as well as to meet non-functional constraints such as silicon area and power consumption. Traditionally, evaluation of different candidate instruction sets is all carried out through simulation. However, the growing design complexity and time-to-market pressure have rendered simulation increasingly infeasible. In this paper, we present an instruction-level modeling method that can rapidly evaluate several important aspects of a selected instruction set. Experimental results show that we can prune a large number of candidate instruction sets with the model, accelerate design space exploration and alleviate the pressure on simulation.",
"The design of application-specific instruction set processor(ASIP) system includes at least three interdependent tasks: microarchitecture design, instruction set design, and instruction set mapping for the application. We present a method that unifies these three design problems with a single formulation: a modified scheduling allocation problem with an integrated instruction formation process. Micro-operations (MOPs) representing the application are scheduled into time steps. Instructions are formed and hardware resources are allocated during the scheduling process. The assembly code for the given application is obtained automatically at the end of the scheduling process. This approach considers MOP parallelism, instruction field encoding, delay load store branch, conditional execution of MOPs and the retargetability to various architecture templates. Experiments are presented to show the power and limitation of our approach. Performance improvement over our previous work is significant.",
"Extensible processors are application-specific instruction set processors (ASIPs) that allow for customisation through user-defined instruction set extensions (ISE) implemented in an extended micro architecture. Traditional design flows for ISE typically involve a large number of different tools for processing of the target application written in C, ISE identification, generation, optimisation and synthesis of additional functional units. Furthermore, ISE exploitation is typically restricted to the specific application the new instructions have been derived from. This is due to the lack of instruction selection technology that is capable of generating code for complex, multiple-input multiple-output instructions. In this paper we present a complete tool-chain based on GCC for automated instruction set extension, micro-architecture optimisation and complex instruction selection. We demonstrate that our approach is capable of generating highly efficient ISEs, trading off area and performance constraints, and exploit complex custom instruction patterns in an extended GCC platform."
]
}
|
1403.7291
|
2541997769
|
Efficiency in embedded systems is paramount to achieve high performance while consuming less area and power. Processors in embedded systems have to be designed carefully to achieve such design constraints. Application Specific Instruction set Processors (ASIPs) exploit the nature of applications to design an optimal instruction set. Despite not being general enough to execute any application, ASIPs are highly preferred in the embedded systems industry where the devices are produced to satisfy a certain type of application domain(s) (either intra-domain or inter-domain). Typically, ASIPs are designed from a base-processor and functionalities are added for applications. This paper studies the multi-application ASIPs and their instruction sets, extensively analysing the instructions for inter-domain and intra-domain designs. Metrics analysed are the reusable instructions and the extra cost to add a certain application. A wide range of applications from various application benchmarks (MiBench, MediaBench and SPEC2006) and domains are analysed for two different architectures (ARM-Thumb and PISA). Our study shows that the intra-domain applications contain a larger number of common instructions, whereas the inter-domain applications have very few common instructions, regardless of the architecture (and therefore the ISA).
|
Once the instructions are chosen for an ASIP, the selected instructions are evaluated. Authors in @cite_5 and @cite_1 introduced methods to evaluate instruction sets with several design constraints. @cite_21 automatically grouped and evaluated data-flow operations in the application as potential custom instructions. A symbolic algebra approach is utilised to generate the custom instructions with high level arithmetic optimisations.
|
{
"cite_N": [
"@cite_5",
"@cite_21",
"@cite_1"
],
"mid": [
"2145946269",
"2132660104",
"2543071820"
],
"abstract": [
"An instruction set serves as the interface between hardware and software in a computer system. In an application-specific environment, the system performance can be improved by designing an instruction set that matches the characteristics of hardware and the application. We present a systematic approach to generate application-specific instruction sets so that software applications can be efficiently mapped to a given pipelined micro-architecture. The approach synthesizes instruction sets from application benchmarks, given a machine model, an objective function, and a set of design constraints. In addition, assembly code is generated to show how the benchmarks can be compiled with the synthesized instruction set. The problem of designing instruction sets is formulated as a modified scheduling problem. A binary tuple is proposed to model the semantics of instructions and integrate the instruction formation process into the scheduling process. A simulated annealing scheme is used to solve for the schedules. Experiments have shown that the approach is capable of synthesizing powerful instructions for modern pipelined microprocessors, and running with reasonable time and a modest amount of memory for large applications.",
"There is a growing demand for application-specific embedded processors in system-on-a-chip designs. Current tools and design methodologies often require designers to manually specialize the processor based on an application. Moreover, the use of the new complex instructions added to the processor is often left to designers' ingenuity. We present a solution that automatically groups dataflow operations in the application software as potential new complex instructions. The set of possible instructions is then automatically used for code generation combined with high-level arithmetic optimizations using symbolic algebra. Symbolic arithmetic manipulations provide a novel and effective method for instruction selection that is necessary due to the complexity of the automatically identified instructions. We have used our methodology to automatically add new instructions to Tensilica processors for a set of examples. Our results show that our tools improve designers' productivity and efficiently specialize an embedded processor for the given application such that the execution time is greatly improved.",
"The increasing use of digital signal processors (DSPs) and application specific instruction-set processors (ASIPs) has put a strain on the perceived mature state of compiler technology. The presence of custom hardware for application-specific needs has introduced instruction types which are unfamiliar to the capabilities of traditional compilers. Thus, these traditional techniques can lead to inefficient and sparsely compacted machine microcode. In this paper, we introduce a novel instruction-set matching and selection methodology, based upon a rich representation useful for DSP and mixed control-oriented applications. This representation shows explicit behaviour that references architecture resource classes. This allows a wide range of instructions types to be captured in a pattern set. The pattern set has been organized in a manner such that matching is extremely efficient and retargeting to architectures with new instruction sets is well defined. The matching and selection algorithms have been implemented in a retargetable code generation system called CodeSyn."
]
}
|
1403.7209
|
1901281515
|
Hydra is a full-scale industrial CFD application used for the design of turbomachinery at Rolls Royce plc., capable of performing complex simulations over highly detailed unstructured mesh geometries. Hydra presents major challenges in data organization and movement that need to be overcome for continued high performance on emerging platforms. We present research in achieving this goal through the OP2 domain-specific high-level framework, demonstrating the viability of such a high-level programming approach. OP2 targets the domain of unstructured mesh problems and enables execution on a range of back-end hardware platforms. We chart the conversion of Hydra to OP2, and map out the key difficulties encountered in the process. Specifically, we show how different parallel implementations can be achieved with an active library framework, even for a highly complicated industrial application, and how different optimizations targeting contrasting parallel architectures can be applied to the whole application, seamlessly, reducing developer effort and increasing code longevity. Performance results demonstrate not only that the same runtime performance as the hand-tuned original code can be achieved, but that it can be significantly improved on both conventional processor systems and many-core systems. Our results provide evidence of how high-level frameworks such as OP2 enable portability across a wide range of contrasting platforms and their significant utility in achieving high performance without the intervention of the application programmer.
|
OP2 can be viewed as an instantiation of the AEcute (access-execute descriptor) @cite_25 programming model that separates the specification of a computational kernel with its parallel iteration space, from a declarative specification of how each iteration accesses its data. The decoupled Access Execute specification in turn creates the opportunity to apply powerful optimizations targeting the underlying hardware. A number of research projects have implemented similar or related programming frameworks. Liszt @cite_18 and FEniCS @cite_35 specifically target mesh based computations.
|
{
"cite_N": [
"@cite_35",
"@cite_18",
"@cite_25"
],
"mid": [
"2050956934",
"2154968583",
"1484160293"
],
"abstract": [
"A compiler approach for generating low-level computer code from high-level input for discontinuous Galerkin finite element forms is presented. The input language mirrors conventional mathematical notation, and the compiler generates efficient code in a standard programming language. This facilitates the rapid generation of efficient code for general equations in varying spatial dimensions. Key concepts underlying the compiler approach and the automated generation of computer code are elaborated. The approach is demonstrated for a range of common problems, including the Poisson, biharmonic, advection-diffusion, and Stokes equations.",
"Heterogeneous computers with processors and accelerators are becoming widespread in scientific computing. However, it is difficult to program hybrid architectures and there is no commonly accepted programming model. Ideally, applications should be written in a way that is portable to many platforms, but providing this portability for general programs is a hard problem. By restricting the class of programs considered, we can make this portability feasible. We present Liszt, a domain-specific language for constructing mesh-based PDE solvers. We introduce language statements for interacting with an unstructured mesh, and storing data at its elements. Program analysis of these statements enables our compiler to expose the parallelism, locality, and synchronization of Liszt programs. Using this analysis, we generate applications for multiple platforms: a cluster, an SMP, and a GPU. This approach allows Liszt applications to perform within 12% of hand-written C++, scale to large clusters, and experience order-of-magnitude speedups on GPUs.",
"On multi-core architectures with software-managed memories, effectively orchestrating data movement is essential to performance, but is tedious and error-prone. In this paper we show that when the programmer can explicitly specify both the memory access pattern and the execution schedule of a computation kernel, the compiler or run-time system can derive efficient data movement, even if analysis of kernel code is difficult or impossible. We have developed a framework of C++ classes for decoupled Access Execute specifications, allowing for automatic communication optimisations such as software pipelining and data reuse. We demonstrate the ease and efficiency of programming the Cell Broadband Engine architecture using these classes by implementing a set of benchmarks, which exhibit data reuse and non-affine access functions, and by comparing these implementations against alternative implementations, which use hand-written DMA transfers and software-based caching."
]
}
|
1403.7209
|
1901281515
|
Hydra is a full-scale industrial CFD application used for the design of turbomachinery at Rolls Royce plc., capable of performing complex simulations over highly detailed unstructured mesh geometries. Hydra presents major challenges in data organization and movement that need to be overcome for continued high performance on emerging platforms. We present research in achieving this goal through the OP2 domain-specific high-level framework, demonstrating the viability of such a high-level programming approach. OP2 targets the domain of unstructured mesh problems and enables execution on a range of back-end hardware platforms. We chart the conversion of Hydra to OP2, and map out the key difficulties encountered in the process. Specifically, we show how different parallel implementations can be achieved with an active library framework, even for a highly complicated industrial application, and how different optimizations targeting contrasting parallel architectures can be applied to the whole application, seamlessly, reducing developer effort and increasing code longevity. Performance results demonstrate not only that the same runtime performance as the hand-tuned original code can be achieved, but that it can be significantly improved on both conventional processor systems and many-core systems. Our results provide evidence of how high-level frameworks such as OP2 enable portability across a wide range of contrasting platforms and their significant utility in achieving high performance without the intervention of the application programmer.
|
The FEniCS @cite_35 project defines a high-level language, UFL, for the specification of finite element algorithms. The FEniCS abstraction allows the user to express the problem in terms of differential equations, leaving the details of the implementation to a lower-level library. Currently, a runtime code generation, compilation and execution framework that is based on OP2, called PyOP2 @cite_23 , is being developed at Imperial College London to enable UFL declarations to use similar back-end code generation strategies to OP2. Thus, the performance results in this paper will be directly applicable to the performance of code generated by FEniCS in the future.
|
{
"cite_N": [
"@cite_35",
"@cite_23"
],
"mid": [
"2050956934",
"2038646072"
],
"abstract": [
"A compiler approach for generating low-level computer code from high-level input for discontinuous Galerkin finite element forms is presented. The input language mirrors conventional mathematical notation, and the compiler generates efficient code in a standard programming language. This facilitates the rapid generation of efficient code for general equations in varying spatial dimensions. Key concepts underlying the compiler approach and the automated generation of computer code are elaborated. The approach is demonstrated for a range of common problems, including the Poisson, biharmonic, advection-diffusion, and Stokes equations.",
"Emerging many-core platforms are very difficult to program in a performance portable manner whilst achieving high efficiency on a diverse range of architectures. We present work in progress on PyOP2, a high-level embedded domain-specific language for mesh-based simulation codes that executes numerical kernels in parallel over unstructured meshes. Just-in-time kernel compilation and parallel scheduling are delayed until runtime, when problem-specific parameters are available. Using generative metaprogramming, performance portability is achieved, while details of the parallel implementation are abstracted from the programmer. PyOP2 kernels for finite element computations can be generated automatically from equations given in the domain-specific Unified Form Language. Interfacing to the multi-phase CFD code Fluidity through a very thin layer on top of PyOP2 yields a general purpose finite element solver with an input notation very close to mathematical formulae. Preliminary performance figures show speedups of up to 3.4× compared to Fluidity's built-in solvers when running in parallel."
]
}
|
1403.7209
|
1901281515
|
Hydra is a full-scale industrial CFD application used for the design of turbomachinery at Rolls Royce plc., capable of performing complex simulations over highly detailed unstructured mesh geometries. Hydra presents major challenges in data organization and movement that need to be overcome for continued high performance on emerging platforms. We present research in achieving this goal through the OP2 domain-specific high-level framework, demonstrating the viability of such a high-level programming approach. OP2 targets the domain of unstructured mesh problems and enables execution on a range of back-end hardware platforms. We chart the conversion of Hydra to OP2, and map out the key difficulties encountered in the process. Specifically, we show how different parallel implementations can be achieved with an active library framework, even for a highly complicated industrial application, and how different optimizations targeting contrasting parallel architectures can be applied to the whole application, seamlessly, reducing developer effort and increasing code longevity. Performance results demonstrate not only that the same runtime performance as the hand-tuned original code can be achieved, but that it can be significantly improved on both conventional processor systems and many-core systems. Our results provide evidence of how high-level frameworks such as OP2 enable portability across a wide range of contrasting platforms and their significant utility in achieving high performance without the intervention of the application programmer.
|
While OP2 uses an active library approach utilizing code transformation, Delite @cite_45 and Liszt @cite_18 from Stanford University take the external approach to domain specific languages that require advanced compiler technology, but offer more freedom in implementing the language. Liszt targets unstructured grid applications; the aim, as with OP2, is to exploit information about the structure of data and the nature of the algorithms in the code and to apply aggressive and platform specific optimizations. Performance results from a range of systems (a single GPU, a multi-core CPU, and an MPI based cluster) executing a number of applications written using Liszt have been presented in @cite_18 . The Navier-Stokes application in @cite_18 is most comparable to OP2's Airfoil benchmark @cite_40 and shows similar speed-ups to those gained with OP2 in our work. Application performance on heterogeneous clusters such as on clusters of GPUs is not considered in @cite_18 and is noted as future work.
|
{
"cite_N": [
"@cite_40",
"@cite_18",
"@cite_45"
],
"mid": [
"2075537166",
"2154968583",
""
],
"abstract": [
"We present a performance analysis and benchmarking study of the OP2 \"active\" library, which provides an abstraction framework for the solution of parallel unstructured mesh applications. OP2 aims to decouple the scientific specification of the application from its parallel implementation, achieving code longevity and near-optimal performance through re-targeting the back-end to different hardware. Runtime performance results are presented for a representative unstructured mesh application written using OP2 on a variety of many-core processor systems, including the traditional X86 architectures from Intel (Xeon based on the older Penryn and current Nehalem micro-architectures) and GPU offerings from NVIDIA (GTX260, Tesla C2050). Our analysis demonstrates the contrasting performance between the use of CPU (OpenMP) and GPU (CUDA) parallel implementations for the solution on an industrial sized unstructured mesh consisting of about 1.5 million edges. Results show the significance of choosing the correct partition and thread-block configuration, the factors limiting the GPU performance and insights into optimizations for improved performance.",
"Heterogeneous computers with processors and accelerators are becoming widespread in scientific computing. However, it is difficult to program hybrid architectures and there is no commonly accepted programming model. Ideally, applications should be written in a way that is portable to many platforms, but providing this portability for general programs is a hard problem. By restricting the class of programs considered, we can make this portability feasible. We present Liszt, a domain-specific language for constructing mesh-based PDE solvers. We introduce language statements for interacting with an unstructured mesh, and storing data at its elements. Program analysis of these statements enables our compiler to expose the parallelism, locality, and synchronization of Liszt programs. Using this analysis, we generate applications for multiple platforms: a cluster, an SMP, and a GPU. This approach allows Liszt applications to perform within 12% of hand-written C++, scale to large clusters, and experience order-of-magnitude speedups on GPUs.",
""
]
}
|
1403.6968
|
1973761208
|
Many analytics tasks and machine learning problems can be naturally expressed by iterative linear algebra programs. In this paper, we study the incremental view maintenance problem for such complex analytical queries. We develop a framework, called LINVIEW, for capturing deltas of linear algebra programs and understanding their computational cost. Linear algebra operations tend to cause an avalanche effect where even very local changes to the input matrices spread out and infect all of the intermediate results and the final view, causing incremental view maintenance to lose its performance benefit over re-evaluation. We develop techniques based on matrix factorizations to contain such epidemics of change. As a consequence, our techniques make incremental view maintenance of linear algebra practical and usually substantially cheaper than re-evaluation. We show, both analytically and experimentally, the usefulness of these techniques when applied to standard analytics tasks. Our evaluation demonstrates the efficiency of LINVIEW in generating parallel incremental programs that outperform re-evaluation techniques by more than an order of magnitude.
|
Iterative Computation. Designing frameworks and computation models for iterative and incremental computation has received much attention lately. Differential dataflow @cite_36 represents a new model of incremental computation for iterative algorithms, which relies on differences (deltas) being smaller and computationally cheaper than the inputs. This assumption does not hold for linear algebra programs because of the avalanche effect of input changes. Many systems optimize the MapReduce framework for iterative applications using techniques that cache and index loop-invariant data on local disks and persist materialized views between iterations @cite_3 @cite_11 @cite_17 . More general systems support iterative computation and the DAG execution model, like Dryad @cite_13 and Spark @cite_33 ; Mahout, MLbase @cite_22 and others @cite_9 @cite_24 @cite_44 provide scalable machine learning and data mining tools. All these systems are orthogonal to our work. This paper is concerned with the efficient re-evaluation of programs under incremental changes. Our framework, however, can be easily coupled with any of these underlying systems.
|
{
"cite_N": [
"@cite_11",
"@cite_33",
"@cite_22",
"@cite_36",
"@cite_9",
"@cite_3",
"@cite_24",
"@cite_44",
"@cite_13",
"@cite_17"
],
"mid": [
"1976860187",
"2189465200",
"",
"",
"2013373704",
"2013344760",
"",
"",
"2100830825",
"2120458882"
],
"abstract": [
"MapReduce programming model has simplified the implementation of many data parallel applications. The simplicity of the programming model and the quality of services provided by many implementations of MapReduce attract a lot of enthusiasm among distributed computing communities. From the years of experience in applying MapReduce to various scientific applications, we identified a set of extensions to the programming model and improvements to its architecture that will expand the applicability of MapReduce to more classes of applications. In this paper, we present the programming model and the architecture of Twister, an enhanced MapReduce runtime that supports iterative MapReduce computations efficiently. We also show performance comparisons of Twister with other similar runtimes such as Hadoop and DryadLINQ for large scale data parallel applications.",
"MapReduce and its variants have been highly successful in implementing large-scale data-intensive applications on commodity clusters. However, most of these systems are built around an acyclic data flow model that is not suitable for other popular applications. This paper focuses on one such class of applications: those that reuse a working set of data across multiple parallel operations. This includes many iterative machine learning algorithms, as well as interactive data analysis tools. We propose a new framework called Spark that supports these applications while retaining the scalability and fault tolerance of MapReduce. To achieve these goals, Spark introduces an abstraction called resilient distributed datasets (RDDs). An RDD is a read-only collection of objects partitioned across a set of machines that can be rebuilt if a partition is lost. Spark can outperform Hadoop by 10x in iterative machine learning jobs, and can be used to interactively query a 39 GB dataset with sub-second response time.",
"",
"",
"Many modern enterprises are collecting data at the most detailed level possible, creating data repositories ranging from terabytes to petabytes in size. The ability to apply sophisticated statistical analysis methods to this data is becoming essential for marketplace competitiveness. This need to perform deep analysis over huge data repositories poses a significant challenge to existing statistical software and data management systems. On the one hand, statistical software provides rich functionality for data analysis and modeling, but can handle only limited amounts of data; e.g., popular packages like R and SPSS operate entirely in main memory. On the other hand, data management systems - such as MapReduce-based systems - can scale to petabytes of data, but provide insufficient analytical functionality. We report our experiences in building Ricardo, a scalable platform for deep analytics. Ricardo is part of the eXtreme Analytics Platform (XAP) project at the IBM Almaden Research Center, and rests on a decomposition of data-analysis algorithms into parts executed by the R statistical analysis system and parts handled by the Hadoop data management system. This decomposition attempts to minimize the transfer of data across system boundaries. Ricardo contrasts with previous approaches, which try to get along with only one type of system, and allows analysts to work on huge datasets from within a popular, well supported, and powerful analysis environment. Because our approach avoids the need to re-implement either statistical or data-management functionality, it can be used to solve complex problems right now.",
"The growing demand for large-scale data mining and data analysis applications has led both industry and academia to design new types of highly scalable data-intensive computing platforms. MapReduce and Dryad are two popular platforms in which the dataflow takes the form of a directed acyclic graph of operators. These platforms lack built-in support for iterative programs, which arise naturally in many applications including data mining, web ranking, graph analysis, model fitting, and so on. This paper presents HaLoop, a modified version of the Hadoop MapReduce framework that is designed to serve these applications. HaLoop not only extends MapReduce with programming support for iterative applications, it also dramatically improves their efficiency by making the task scheduler loop-aware and by adding various caching mechanisms. We evaluated HaLoop on real queries and real datasets. Compared with Hadoop, on average, HaLoop reduces query runtimes by a factor of 1.85, and shuffles only 4% of the data between mappers and reducers.",
"",
"",
"Dryad is a general-purpose distributed execution engine for coarse-grain data-parallel applications. A Dryad application combines computational \"vertices\" with communication \"channels\" to form a dataflow graph. Dryad runs the application by executing the vertices of this graph on a set of available computers, communicating as appropriate through files, TCP pipes, and shared-memory FIFOs. The vertices provided by the application developer are quite simple and are usually written as sequential programs with no thread creation or locking. Concurrency arises from Dryad scheduling vertices to run simultaneously on multiple computers, or on multiple CPU cores within a computer. The application can discover the size and placement of data at run time, and modify the graph as the computation progresses to make efficient use of the available resources. Dryad is designed to scale from powerful multi-core single computers, through small clusters of computers, to data centers with thousands of computers. The Dryad execution engine handles all the difficult problems of creating a large distributed, concurrent application: scheduling the use of computers and their CPUs, recovering from communication or computer failures, and transporting data between vertices.",
"Iterative computation is pervasive in many applications such as data mining, web ranking, graph analysis, online social network analysis, and so on. These iterative applications typically involve massive data sets containing millions or billions of data records. This poses demand of distributed computing frameworks for processing massive data sets on a cluster of machines. MapReduce is an example of such a framework. However, MapReduce lacks built-in support for iterative process that requires to parse data sets iteratively. Besides specifying MapReduce jobs, users have to write a driver program that submits a series of jobs and performs convergence testing at the client. This paper presents iMapReduce, a distributed framework that supports iterative processing. iMapReduce allows users to specify the iterative computation with the separated map and reduce functions, and provides the support of automatic iterative processing within a single job. More importantly, iMapReduce significantly improves the performance of iterative implementations by (1) reducing the overhead of creating new MapReduce jobs repeatedly, (2) eliminating the shuffling of static data, and (3) allowing asynchronous execution of map tasks. We implement an iMapReduce prototype based on Apache Hadoop, and show that iMapReduce can achieve up to 5 times speedup over Hadoop for implementing iterative algorithms."
]
}
|
1403.6968
|
1973761208
|
Many analytics tasks and machine learning problems can be naturally expressed by iterative linear algebra programs. In this paper, we study the incremental view maintenance problem for such complex analytical queries. We develop a framework, called LINVIEW, for capturing deltas of linear algebra programs and understanding their computational cost. Linear algebra operations tend to cause an avalanche effect where even very local changes to the input matrices spread out and infect all of the intermediate results and the final view, causing incremental view maintenance to lose its performance benefit over re-evaluation. We develop techniques based on matrix factorizations to contain such epidemics of change. As a consequence, our techniques make incremental view maintenance of linear algebra practical and usually substantially cheaper than re-evaluation. We show, both analytically and experimentally, the usefulness of these techniques when applied to standard analytics tasks. Our evaluation demonstrates the efficiency of LINVIEW in generating parallel incremental programs that outperform re-evaluation techniques by more than an order of magnitude.
|
Scientific Databases. There are also database systems specialized in array processing. RasDaMan @cite_4 and AML @cite_42 provide support for expressing and optimizing queries over multidimensional arrays, but are not geared towards scientific and numerical computing. ASAP @cite_45 supports scientific computing primitives on a storage manager optimized for storing multidimensional arrays. RIOT @cite_8 also provides an efficient framework for scientific computing. However, none of these systems support incremental view maintenance for their workloads.
|
{
"cite_N": [
"@cite_8",
"@cite_42",
"@cite_4",
"@cite_45"
],
"mid": [
"2963215849",
"2092324613",
"2083036673",
"2187405563"
],
"abstract": [
"R is a numerical computing environment that is widely popular for statistical data analysis. Like many such environments, R performs poorly for large datasets whose sizes exceed that of physical memory. We present our vision of RIOT (R with I/O Transparency), a system that makes R programs I/O-efficient in a way transparent to the users. We describe our experience with RIOT-DB, an initial prototype that uses a relational database system as a backend. Despite the overhead and inadequacy of generic database systems in handling array data and numerical computation, RIOT-DB significantly outperforms R in many large-data scenarios, thanks to a suite of high-level, inter-operation optimizations that integrate seamlessly into R. While many techniques in RIOT are inspired by databases (and, for RIOT-DB, realized by a database system), RIOT users are insulated from anything database related. Compared with previous approaches that require users to learn new languages and rewrite their programs to interface with a database, RIOT will, we believe, be easier to adopt by the majority of the R users.",
"Arrays are a common and important class of data. At present, database systems do not provide adequate array support: arrays can neither be easily defined nor conveniently manipulated. Further, array manipulations are not optimized. This paper describes a language called the Array Manipulation Language (AML), for expressing array manipulations, and a collection of optimization techniques for AML expressions.In the AML framework for array manipulation, arbitrary externally-defined functions can be applied to arrays in a structured manner. AML can be adapted to different application domains by choosing appropriate external function definitions. This paper concentrates on arrays occurring in databases of digital images such as satellite or medical images.AML queries can be treated declaratively and subjected to rewrite optimizations. Rewriting minimizes the number of applications of potentially costly external functions required to compute a query result. AML queries can also be optimized for space. Query results are generated a piece at a time by pipelined execution plans, and the amount of memory required by a plan depends on the order in which pieces are generated. An optimizer can consider generating the pieces of the query result in a variety of orders, and can efficiently choose orders that require less space. An AML-based prototype array database system called ArrayDB has been built, and it is used to show the effectiveness of these optimization techniques.",
"RasDaMan is a universal — i.e., domain-independent — array DBMS for multidimensional arrays of arbitrary size and structure. A declarative, SQL-based array query language offers flexible retrieval and manipulation. Efficient server-based query evaluation is enabled by an intelligent optimizer and a streamlined storage architecture based on flexible array tiling and compression. RasDaMan is being used in several international projects for the management of geo and healthcare data of various dimensionality.",
"Two years ago, some of us wrote a paper predicting the demise of “One Size Fits All (OSFA)” [Sto05a]. In that paper, we examined the stream processing and data warehouse markets and gave reasons for a substantial performance advantage to specialized architectures in both markets. Herein, we make three additional contributions. First, we present reasons why the same performance advantage is enjoyed by specialized implementations in the text processing market. Second, the major contribution of the paper is to show “apples to apples” performance numbers between commercial implementations of specialized architectures and relational DBMSs in both stream processing and data warehouses. Finally, we also show comparison numbers between an academic prototype of a specialized architecture for scientific and intelligence applications, a relational DBMS, and a widely used mathematical computation tool. In summary, there appear to be at least four markets where specialized architectures enjoy an overwhelming performance advantage. 1. The History of the OSFA Architecture"
]
}
|
1403.6968
|
1973761208
|
Many analytics tasks and machine learning problems can be naturally expressed by iterative linear algebra programs. In this paper, we study the incremental view maintenance problem for such complex analytical queries. We develop a framework, called LINVIEW, for capturing deltas of linear algebra programs and understanding their computational cost. Linear algebra operations tend to cause an avalanche effect where even very local changes to the input matrices spread out and infect all of the intermediate results and the final view, causing incremental view maintenance to lose its performance benefit over re-evaluation. We develop techniques based on matrix factorizations to contain such epidemics of change. As a consequence, our techniques make incremental view maintenance of linear algebra practical and usually substantially cheaper than re-evaluation. We show, both analytically and experimentally, the usefulness of these techniques when applied to standard analytics tasks. Our evaluation demonstrates the efficiency of LINVIEW in generating parallel incremental programs that outperform re-evaluation techniques by more than an order of magnitude.
|
Incremental Statistical Frameworks. Bayesian inference @cite_32 uses Bayes' rule to update the probability estimate for a hypothesis as additional evidence is acquired. These frameworks support a variety of applications such as pattern recognition and classification. Our work focuses on incrementalizing applications that can be expressed as linear algebra programs and generating efficient incremental programs for different runtime environments.
|
{
"cite_N": [
"@cite_32"
],
"mid": [
"1988520084"
],
"abstract": [
"An overview of statistical decision theory, which emphasizes the use and application of the philosophical ideas and mathematical structure of decision theory. The text assumes a knowledge of basic probability theory and some advanced calculus is also required."
]
}
|
1403.6838
|
2950353880
|
Information overload has become an ubiquitous problem in modern society. Social media users and microbloggers receive an endless flow of information, often at a rate far higher than their cognitive abilities to process the information. In this paper, we conduct a large scale quantitative study of information overload and evaluate its impact on information dissemination in the Twitter social media site. We model social media users as information processing systems that queue incoming information according to some policies, process information from the queue at some unknown rates and decide to forward some of the incoming information to other users. We show how timestamped data about tweets received and forwarded by users can be used to uncover key properties of their queueing policies and estimate their information processing rates and limits. Such an understanding of users' information processing behaviors allows us to infer whether and to what extent users suffer from information overload. Our analysis provides empirical evidence of information processing limits for social media users and the prevalence of information overloading. The most active and popular social media users are often the ones that are overloaded. Moreover, we find that the rate at which users receive information impacts their processing behavior, including how they prioritize information from different sources, how much information they process, and how quickly they process information. Finally, the susceptibility of a social media user to social contagions depends crucially on the rate at which she receives information. An exposure to a piece of information, be it an idea, a convention or a product, is much less effective for users that receive information at higher rates, meaning they need more exposures to adopt a particular contagion.
|
The work most closely related to ours @cite_28 @cite_27 @cite_13 investigates the impact that the number of social ties of a social media user has on the way she interacts or exchanges information with her friends, followees or contacts. measure the way in which an individual divides his or her attention across contacts by analyzing Facebook data. Their analysis suggests that some people focus most of their attention on a small circle of close friends, while others disperse their attention more broadly over a large set. quantify how a user's limited attention is divided among information sources (or followees) by tracking URLs as markers of information in Twitter. They provide empirical evidence that highly connected individuals are less likely to propagate an arbitrary tweet. analyze mobile phone call data and note that individuals exhibit a finite communication capacity, which limits the number of ties they can maintain active. The common theme is to investigate whether there is a limit on the number of ties (e.g., friends, followees or phone contacts) people can maintain, and how people distribute attention across them.
|
{
"cite_N": [
"@cite_28",
"@cite_27",
"@cite_13"
],
"mid": [
"35874363",
"1965563190",
""
],
"abstract": [
"An individual's personal network — their set of social contacts — is a basic object of study in sociology. Studies of personal networks have focused on their size (the number of contacts) and their composition (in terms of categories such as kin and co-workers). Here we propose a new measure for the analysis of personal networks, based on the way in which an individual divides his or her attention across contacts. This allows us to contrast people who focus a large fraction of their interactions on a small set of close friends with people who disperse their attention more widely. Using data from Facebook, we find that this balance of attention is a relatively stable property of an individual over time, and that it displays interesting variation across both different groups of people and different modes of interaction. In particular, activities based on communication involve a much higher focus of attention than activities based simply on observation, and these two types of modalities also exhibit different forms of variation in interaction patterns both within and across groups. Finally, we contrast the amount of attention paid by individuals to their most frequent contacts with the rate of change in the identities of these contacts, providing a measure of churn for this set.",
"How far and how fast does information spread in social media? Researchers have recently examined a number of factors that affect information diffusion in online social networks, including: the novelty of information, users' activity levels, who they pay attention to, and how they respond to friends' recommendations. Using URLs as markers of information, we carry out a detailed study of retweeting, the primary mechanism by which information spreads on the Twitter follower graph. Our empirical study examines how users respond to an incoming stimulus, i.e., a tweet (message) from a friend, and reveals that dynamically decaying visibility, which is the increasing cognitive effort required for discovering and acting upon a tweet, combined with limited attention play dominant roles in retweeting behavior. Specifically, we observe that users retweet information when it is most visible, such as when it near the top of their Twitter feed. Moreover, our measurements quantify how a user's limited attention is divided among incoming tweets, providing novel evidence that highly connected individuals are less likely to propagate an arbitrary tweet. Our study indicates that the finite ability to process incoming information constrains social contagion, and we conclude that rapid decay of visibility is the primary barrier to information propagation online.",
""
]
}
|
1403.6838
|
2950353880
|
Information overload has become an ubiquitous problem in modern society. Social media users and microbloggers receive an endless flow of information, often at a rate far higher than their cognitive abilities to process the information. In this paper, we conduct a large scale quantitative study of information overload and evaluate its impact on information dissemination in the Twitter social media site. We model social media users as information processing systems that queue incoming information according to some policies, process information from the queue at some unknown rates and decide to forward some of the incoming information to other users. We show how timestamped data about tweets received and forwarded by users can be used to uncover key properties of their queueing policies and estimate their information processing rates and limits. Such an understanding of users' information processing behaviors allows us to infer whether and to what extent users suffer from information overload. Our analysis provides empirical evidence of information processing limits for social media users and the prevalence of information overloading. The most active and popular social media users are often the ones that are overloaded. Moreover, we find that the rate at which users receive information impacts their processing behavior, including how they prioritize information from different sources, how much information they process, and how quickly they process information. Finally, the susceptibility of a social media user to social contagions depends crucially on the rate at which she receives information. An exposure to a piece of information, be it an idea, a convention or a product, is much less effective for users that receive information at higher rates, meaning they need more exposures to adopt a particular contagion.
|
Last, very recently, there have been attempts to analyze and model information propagation assuming competition and cooperation between contagions @cite_1 @cite_16 @cite_2 . However, this line of work considers only interactions between pairs of contagions @cite_16 or presents theoretical models without experimental validation @cite_1 . In contrast, our work investigates the effect that observing a great number of simultaneously propagating contagions, not necessarily informative or interesting, has on a person's ability to process and forward information. We believe this is a key point for understanding information propagation in the age of information overload.
|
{
"cite_N": [
"@cite_16",
"@cite_1",
"@cite_2"
],
"mid": [
"2040811557",
"2124172444",
""
],
"abstract": [
"In networks, contagions such as information, purchasing behaviors, and diseases, spread and diffuse from node to node over the edges of the network. Moreover, in real-world scenarios multiple contagions spread through the network simultaneously. These contagions not only propagate at the same time but they also interact and compete with each other as they spread over the network. While traditional empirical studies and models of diffusion consider individual contagions as independent and thus spreading in isolation, we study how different contagions interact with each other as they spread through the network. We develop a statistical model that allows for competition as well as cooperation of different contagions in information diffusion. Competing contagions decrease each other's probability of spreading, while cooperating contagions help each other in being adopted throughout the network. We evaluate our model on 18,000 contagions simultaneously spreading through the Twitter network. Our model learns how different contagions interact with each other and then uses these interactions to more accurately predict the diffusion of a contagion through the network. Moreover, the model also provides a compelling hypothesis for the principles that govern content interaction in information diffusion. Most importantly, we find very strong effects of interactions between contagions. Interactions cause a relative change in the spreading probability of a contagion by 71% on the average.",
"We develop a game-theoretic framework for the study of competition between firms who have budgets to \"seed\" the initial adoption of their products by consumers located in a social network. The payoffs to the firms are the eventual number of adoptions of their product through a competitive stochastic diffusion process in the network. This framework yields a rich class of competitive strategies, which depend in subtle ways on the stochastic dynamics of adoption, the relative budgets of the players, and the underlying structure of the social network. We identify a general property of the adoption dynamics --- namely, decreasing returns to local adoption --- for which the inefficiency of resource use at equilibrium (the Price of Anarchy) is uniformly bounded above, across all networks. We also show that if this property is violated the Price of Anarchy can be unbounded, thus yielding sharp threshold behavior for a broad class of dynamics. We also introduce a new notion, the Budget Multiplier, that measures the extent that imbalances in player budgets can be amplified at equilibrium. We again identify a general property of the adoption dynamics --- namely, proportional local adoption between competitors --- for which the (pure strategy) Budget Multiplier is uniformly bounded above, across all networks. We show that a violation of this property can lead to unbounded Budget Multiplier, again yielding sharp threshold behavior for a broad class of dynamics.",
""
]
}
|
1403.7242
|
2121386362
|
Social networks have many counter-intuitive properties, including the "friendship paradox" that states, on average, your friends have more friends than you do. Recently, a variety of other paradoxes were demonstrated in online social networks. This paper explores the origins of these network paradoxes. Specifically, we ask whether they arise from mathematical properties of the networks or whether they have a behavioral origin. We show that sampling from heavy-tailed distributions always gives rise to a paradox in the mean, but not the median. We propose a strong form of network paradoxes, based on utilizing the median, and validate it empirically using data from two online social networks. Specifically, we show that for any user the majority of user's friends and followers have more friends, followers, etc. than the user, and that this cannot be explained by statistical properties of sampling. Next, we explore the behavioral origins of the paradoxes by using the shuffle test to remove correlations between node degrees and attributes. We find that paradoxes for the mean persist in the shuffled network, but not for the median. We demonstrate that strong paradoxes arise due to the assortativity of user attributes, including degree, and correlation between degree and attribute.
|
The friendship paradox and other paradoxes have been shown to occur in a variety of contexts, both online and offline. As many have pointed out, starting with Feld @cite_12 , these network paradoxes arise from users with high degree being overrepresented in the population of friends. Feld also claimed that the friendship paradox, that your friends have higher degree than you, holds with both the mean and the median. In this paper we build on existing works which have documented paradoxes using the mean, including @cite_8 @cite_4 @cite_21 @cite_24 @cite_3 @cite_11 .
|
{
"cite_N": [
"@cite_4",
"@cite_8",
"@cite_21",
"@cite_3",
"@cite_24",
"@cite_12",
"@cite_11"
],
"mid": [
"2140095656",
"2949086544",
"1971930416",
"2964222315",
"1893161742",
"2084862036",
"1959141096"
],
"abstract": [
"Current methods for the detection of contagious outbreaks give contemporaneous information about the course of an epidemic at best. It is known that individuals near the center of a social network are likely to be infected sooner during the course of an outbreak, on average, than those at the periphery. Unfortunately, mapping a whole network to identify central individuals who might be monitored for infection is typically very difficult. We propose an alternative strategy that does not require ascertainment of global network structure, namely, simply monitoring the friends of randomly selected individuals. Such individuals are known to be more central. To evaluate whether such a friend group could indeed provide early detection, we studied a flu outbreak at Harvard College in late 2009. We followed 744 students who were either members of a group of randomly chosen individuals or a group of their friends. Based on clinical diagnoses, the progression of the epidemic in the friend group occurred 13.9 days (95% C.I. 9.9–16.6) in advance of the randomly chosen group (i.e., the population as a whole). The friend group also showed a significant lead time (p<0.05) on day 16 of the epidemic, a full 46 days before the peak in daily incidence in the population as a whole. This sensor method could provide significant additional time to react to epidemics in small or large populations under surveillance. The amount of lead time will depend on features of the outbreak and the network at hand. The method could in principle be generalized to other biological, psychological, informational, or behavioral contagions that spread in networks.",
"Recent research has focused on the monitoring of global-scale online data for improved detection of epidemics, mood patterns, movements in the stock market, political revolutions, box-office revenues, consumer behaviour and many other important phenomena. However, privacy considerations and the sheer scale of data available online are quickly making global monitoring infeasible, and existing methods do not take full advantage of local network structure to identify key nodes for monitoring. Here, we develop a model of the contagious spread of information in a global-scale, publicly-articulated social network and show that a simple method can yield not just early detection, but advance warning of contagious outbreaks. In this method, we randomly choose a small fraction of nodes in the network and then we randomly choose a \"friend\" of each node to include in a group for local monitoring. Using six months of data from most of the full Twittersphere, we show that this friend group is more central in the network and it helps us to detect viral outbreaks of the use of novel hashtags about 7 days earlier than we could with an equal-sized randomly chosen group. Moreover, the method actually works better than expected due to network structure alone because highly central actors are both more active and exhibit increased diversity in the information they transmit to others. These results suggest that local monitoring is not just more efficient, it is more effective, and it is possible that other contagious processes in global-scale networks may be similarly monitored.",
"We report on a survey of undergraduates at the University of Chicago in which respondents were asked to assess their popularity relative to others. Popularity estimates were related to actual popularity, but we also found strong evidence of self-enhancement in self-other comparisons of popularity. In particular, self-enhancement was stronger for self versus friend comparisons than for self versus typical other comparisons; this is contrary to the reality demonstrated in Feld's friendship paradox and suggests that people are more threatened by the success of friends than of strangers. At the same time, people with relatively popular friends tended to make more self-serving estimates of their own popularity than did people with less popular friends. These results clarify how objective patterns of interpersonal contact work together with cognitive and motivational tendencies to shape perceptions of one's location in the social world.",
"",
"We study the structure of the social graph of active Facebook users, the largest social network ever analyzed. We compute numerous features of the graph including the number of users and friendships, the degree distribution, path lengths, clustering, and mixing patterns. Our results center around three main observations. First, we characterize the global structure of the graph, determining that the social network is nearly fully connected, with 99.91% of individuals belonging to a single large connected component, and we confirm the \"six degrees of separation\" phenomenon on a global scale. Second, by studying the average local clustering coefficient and degeneracy of graph neighborhoods, we show that while the Facebook graph as a whole is clearly sparse, the graph neighborhoods of users contain surprisingly dense structure. Third, we characterize the assortativity patterns present in the graph by studying the basic demographic and network properties of users. We observe clear degree assortativity and characterize the extent to which \"your friends have more friends than you\". Furthermore, we observe a strong effect of age on friendship preferences as well as a globally modular community structure driven by nationality, but we do not find any strong gender homophily. We compare our results with those from smaller social networks and find mostly, but not entirely, agreement on common structural network characteristics.",
"It is reasonable to suppose that individuals use the number of friends that their friends have as one basis for determining whether they, themselves, have an adequate number of friends. This article shows that, if individuals compare themselves with their friends, it is likely that most of them will feel relatively inadequate. Data on friendship drawn from James Coleman's (1961) classic study The Adolescent Society are used to illustrate the phenomenon that most people have fewer friends than their friends have. The logic underlying the phenomenon is mathematically explored, showing that the mean number of friends of friends is always greater than the mean number of friends of individuals. Further analysis shows that the proportion of individuals who have fewer friends than the mean number of friends their own friends have is affected by the exact arrangement of friendships in a social network. This disproportionate experiencing of friends with many friends is related to a set of",
"Feld's friendship paradox states that \"your friends have more friends than you, on average.\" This paradox arises because extremely popular people, despite being rare, are overrepresented when averaging over friends. Using a sample of the Twitter firehose, we confirm that the friendship paradox holds for >98% of Twitter users. Because of the directed nature of the follower graph on Twitter, we are further able to confirm more detailed forms of the friendship paradox: everyone you follow or who follows you has more friends and followers than you. This is likely caused by a correlation we demonstrate between Twitter activity, number of friends, and number of followers. In addition, we discover two new paradoxes: the virality paradox that states \"your friends receive more viral content than you, on average,\" and the activity paradox, which states \"your friends are more active than you, on average.\" The latter paradox is important in regulating online communication. It may result in users having difficulty maintaining optimal incoming information rates, because following additional users causes the volume of incoming tweets to increase super-linearly. While users may compensate for increased information flow by increasing their own activity, users become information overloaded when they receive more information than they are able or willing to process. We compare the average size of cascades that are sent and received by overloaded and underloaded users. And we show that overloaded users post and receive larger cascades and they are poor detectors of small cascades."
]
}
|
1403.7242
|
2121386362
|
Social networks have many counter-intuitive properties, including the "friendship paradox" that states, on average, your friends have more friends than you do. Recently, a variety of other paradoxes were demonstrated in online social networks. This paper explores the origins of these network paradoxes. Specifically, we ask whether they arise from mathematical properties of the networks or whether they have a behavioral origin. We show that sampling from heavy-tailed distributions always gives rise to a paradox in the mean, but not the median. We propose a strong form of network paradoxes, based on utilizing the median, and validate it empirically using data from two online social networks. Specifically, we show that for any user the majority of user's friends and followers have more friends, followers, etc. than the user, and that this cannot be explained by statistical properties of sampling. Next, we explore the behavioral origins of the paradoxes by using the shuffle test to remove correlations between node degrees and attributes. We find that paradoxes for the mean persist in the shuffled network, but not for the median. We demonstrate that strong paradoxes arise due to the assortativity of user attributes, including degree, and correlation between degree and attribute.
|
A variety of paradoxes on Twitter beyond the friendship paradox were demonstrated in @cite_11 . The authors showed that users' friends and followers are more active and more highly connected, and claimed that any user attribute correlated with connectivity will ultimately result in a paradox. They do not establish whether the activity or virality paradox results simply from the nature of heavy-tailed distributions or from how users choose to position themselves in the network, i.e., behavioral factors. On the other hand, some key paradoxes are identified that cannot be explained simply by correlations between degree and activity, such as how overloaded users receive more viral content than underloaded users.
|
{
"cite_N": [
"@cite_11"
],
"mid": [
"1959141096"
],
"abstract": [
"Feld's friendship paradox states that \"your friends have more friends than you, on average.\" This paradox arises because extremely popular people, despite being rare, are overrepresented when averaging over friends. Using a sample of the Twitter firehose, we confirm that the friendship paradox holds for >98% of Twitter users. Because of the directed nature of the follower graph on Twitter, we are further able to confirm more detailed forms of the friendship paradox: everyone you follow or who follows you has more friends and followers than you. This is likely caused by a correlation we demonstrate between Twitter activity, number of friends, and number of followers. In addition, we discover two new paradoxes: the virality paradox that states \"your friends receive more viral content than you, on average,\" and the activity paradox, which states \"your friends are more active than you, on average.\" The latter paradox is important in regulating online communication. It may result in users having difficulty maintaining optimal incoming information rates, because following additional users causes the volume of incoming tweets to increase super-linearly. While users may compensate for increased information flow by increasing their own activity, users become information overloaded when they receive more information than they are able or willing to process. We compare the average size of cascades that are sent and received by overloaded and underloaded users. And we show that overloaded users post and receive larger cascades and they are poor detectors of small cascades."
]
}
|
1403.6950
|
2953098259
|
The goal of this paper is to identify individuals by analyzing their gait. Instead of using binary silhouettes as input data (as done in many previous works), we propose and evaluate the use of motion descriptors based on densely sampled short-term trajectories. We take advantage of state-of-the-art people detectors to define custom spatial configurations of the descriptors around the target person, thus obtaining a pyramidal representation of the gait motion. The local motion features (described by the Divergence-Curl-Shear descriptor) extracted on the different spatial areas of the person are combined into a single high-level gait descriptor by using the Fisher Vector encoding. The proposed approach, coined Pyramidal Fisher Motion, is experimentally validated on the recent 'AVA Multiview Gait' dataset. The results show that this new approach achieves promising results in the problem of gait recognition.
|
On the other hand, human action recognition (HAR) is related to gait recognition in the sense that the former also focuses on human motion, but tries to categorize such motion into categories of actions as , etc. In HAR, the work of @cite_15 is a key reference. They introduce the use of short-term trajectories of densely sampled points for describing human actions, obtaining state-of-the-art results in the HAR problem. The dense trajectories are described with the Motion Boundary Histogram. Then, they describe the video sequence by using the Bag of Words (BOW) model @cite_20 . Finally, they use a non-linear SVM with @math -kernel for classification. In parallel, Perronnin and Dance @cite_10 introduced a new way of histogram-based encoding for sets of local descriptors for image categorization: the Fisher Vector (FV) encoding. In FV, instead of just counting the number of occurrences of a visual word (i.e., a quantized local descriptor) as in BOW, the concatenation of gradient vectors of a Gaussian mixture is used, thus obtaining a larger but richer representation of the image.
|
{
"cite_N": [
"@cite_15",
"@cite_10",
"@cite_20"
],
"mid": [
"2126574503",
"2147238549",
"2131846894"
],
"abstract": [
"Feature trajectories have shown to be efficient for representing videos. Typically, they are extracted using the KLT tracker or matching SIFT descriptors between frames. However, the quality as well as quantity of these trajectories is often not sufficient. Inspired by the recent success of dense sampling in image classification, we propose an approach to describe videos by dense trajectories. We sample dense points from each frame and track them based on displacement information from a dense optical flow field. Given a state-of-the-art optical flow algorithm, our trajectories are robust to fast irregular motions as well as shot boundaries. Additionally, dense trajectories cover the motion information in videos well. We, also, investigate how to design descriptors to encode the trajectory information. We introduce a novel descriptor based on motion boundary histograms, which is robust to camera motion. This descriptor consistently outperforms other state-of-the-art descriptors, in particular in uncontrolled realistic videos. We evaluate our video description in the context of action classification with a bag-of-features approach. Experimental results show a significant improvement over the state of the art on four datasets of varying difficulty, i.e. KTH, YouTube, Hollywood2 and UCF sports.",
"Within the field of pattern classification, the Fisher kernel is a powerful framework which combines the strengths of generative and discriminative approaches. The idea is to characterize a signal with a gradient vector derived from a generative probability model and to subsequently feed this representation to a discriminative classifier. We propose to apply this framework to image categorization where the input signals are images and where the underlying generative model is a visual vocabulary: a Gaussian mixture model which approximates the distribution of low-level features in images. We show that Fisher kernels can actually be understood as an extension of the popular bag-of-visterms. Our approach demonstrates excellent performance on two challenging databases: an in-house database of 19 object scene categories and the recently released VOC 2006 database. It is also very practical: it has low computational needs both at training and test time and vocabularies trained on one set of categories can be applied to another set without any significant loss in performance.",
"We describe an approach to object and scene retrieval which searches for and localizes all the occurrences of a user-outlined object in a video. The object is represented by a set of viewpoint invariant region descriptors so that recognition can proceed successfully despite changes in viewpoint, illumination and partial occlusion. The temporal continuity of the video within a shot is used to track the regions in order to reject unstable regions and reduce the effects of noise in the descriptors. The analogy with text retrieval is in the implementation where matches on descriptors are pre-computed (using vector quantization), and inverted file systems and document rankings are used. The result is that retrieval is immediate, returning a ranked list of key frames/shots in the manner of Google. The method is illustrated for matching in two full length feature films."
]
}
|
1403.6614
|
2951898943
|
This paper presents a method to compute the quasi-conformal parameterization (QCMC) for a multiply-connected 2D domain or surface. QCMC computes a quasi-conformal map from a multiply-connected domain @math onto a punctured disk @math associated with a given Beltrami differential. The Beltrami differential, which measures the conformality distortion, is a complex-valued function @math with supremum norm strictly less than 1. Every Beltrami differential gives a conformal structure of @math . Hence, the conformal module of @math , which are the radii and centers of the inner circles, can be fully determined by @math , up to a Möbius transformation. In this paper, we propose an iterative algorithm to simultaneously search for the conformal module and the optimal quasi-conformal parameterization. The key idea is to minimize the Beltrami energy subject to the boundary constraints. The optimal solution is our desired quasi-conformal parameterization onto a punctured disk. The parameterization of the multiply-connected domain simplifies numerical computations and has important applications in various fields, such as in computer graphics and vision. Experiments have been carried out on synthetic data together with real multiply-connected Riemann surfaces. Results show that our proposed method can efficiently compute quasi-conformal parameterizations of multiply-connected domains and outperforms other state-of-the-art algorithms. Applications of the proposed parameterization technique have also been explored.
|
Parameterization has been widely studied and different parameterization algorithms have been developed. The goal is to map a 2D complicated domain or 3D surface onto a simple parameter domain, such as the unit sphere or a 2D rectangle. In general, 3D surfaces are not isometric to the simple parameter domains; as a result, parameterization usually causes distortion. @cite_10 proposed to obtain a close-to-isometric parameterization, called the IsoMap, which minimizes geodesic distance distortion between pairs of vertices on the mesh. @cite_23 proposed the discrete harmonic map for mesh parameterization, which approximates the continuous harmonic map by minimizing a metric dispersion criterion. @cite_32 proposed a parameterization, which is as area-preserving as possible, for texture mapping via optimal mass transportation. Graph embedding of a surface mesh was also studied by Tutte @cite_33 , who introduced the technique now known as Tutte's embedding; the bijectivity of the parameterization is mathematically guaranteed. Floater @cite_21 improved the quality of the parameterization by introducing specific weights, in terms of area deformations and conformality.
|
{
"cite_N": [
"@cite_33",
"@cite_21",
"@cite_32",
"@cite_23",
"@cite_10"
],
"mid": [
"2700577",
"2111501452",
"2106025841",
"2075010828",
"2001141328"
],
"abstract": [
"",
"A method based on graph theory is investigated for creating global parametrizations for surface triangulations for the purpose of smooth surface fitting. The parametrizations, which are planar triangulations, are the solutions of linear systems based on convex combinations. A particular parametrization, called shape-preserving, is found to lead to visually smooth surface approximations.",
"In this paper, we present a novel method for texture mapping of closed surfaces. Our method is based on the technique of optimal mass transport (also known as the \"earth-mover's metric\"). This is a classical problem that concerns determining the optimal way, in the sense of minimal transportation cost, of moving a pile of soil from one site to another. In our context, the resulting mapping is area preserving and minimizes angle distortion in the optimal mass sense. Indeed, we first begin with an angle-preserving mapping (which may greatly distort area) and then correct it using the mass transport procedure derived via a certain gradient flow. In order to obtain fast convergence to the optimal mapping, we incorporate a multiresolution scheme into our flow. We also use ideas from discrete exterior calculus in our computations.",
"In computer graphics and geometric modeling, shapes are often represented by triangular meshes. With the advent of laser scanning systems, meshes of extreme complexity are rapidly becoming commonplace. Such meshes are notoriously expensive to store, transmit, render, and are awkward to edit. Multiresolution analysis offers a simple, unified, and theoretically sound approach to dealing with these problems. have recently developed a technique for creating multiresolution representations for a restricted class of meshes with subdivision connectivity. Unfortunately, meshes encountered in practice typically do not meet this requirement. In this paper we present a method for overcoming the subdivision connectivity restriction, meaning that completely arbitrary meshes can now be converted to multiresolution form. The method is based on the approximation of an arbitrary initial mesh M by a mesh MJ that has subdivision connectivity and is guaranteed to be within a specified tolerance. The key ingredient of our algorithm is the construction of a parametrization of M over a simple domain. We expect this parametrization to be of use in other contexts, such as texture mapping or the approximation of complex meshes by NURBS patches. CR",
"Scientists working with large volumes of high-dimensional data, such as global climate patterns, stellar spectra, or human gene distributions, regularly confront the problem of dimensionality reduction: finding meaningful low-dimensional structures hidden in their high-dimensional observations. The human brain confronts the same problem in everyday perception, extracting from its high-dimensional sensory inputs—30,000 auditory nerve fibers or 10^6 optic nerve fibers—a manageably small number of perceptually relevant features. Here we describe an approach to solving dimensionality reduction problems that uses easily measured local metric information to learn the underlying global geometry of a data set. Unlike classical techniques such as principal component analysis (PCA) and multidimensional scaling (MDS), our approach is capable of discovering the nonlinear degrees of freedom that underlie complex natural observations, such as human handwriting or images of a face under different viewing conditions. In contrast to previous algorithms for nonlinear dimensionality reduction, ours efficiently computes a globally optimal solution, and, for an important class of data manifolds, is guaranteed to converge asymptotically to the true structure."
]
}
|
1403.6347
|
2953047303
|
The @math -colouring reconfiguration problem asks whether, for a given graph @math , two proper @math -colourings @math and @math of @math , and a positive integer @math , there exists a sequence of at most @math proper @math -colourings of @math which starts with @math and ends with @math and where successive colourings in the sequence differ on exactly one vertex of @math . We give a complete picture of the parameterized complexity of the @math -colouring reconfiguration problem for each fixed @math when parameterized by @math . First we show that the @math -colouring reconfiguration problem is polynomial-time solvable for @math , settling an open problem of Cereceda, van den Heuvel and Johnson. Then, for all @math , we show that the @math -colouring reconfiguration problem, when parameterized by @math , is fixed-parameter tractable (addressing a question of Mouawad, Nishimura, Raman, Simjour and Suzuki) but that it has no polynomial kernel unless the polynomial hierarchy collapses.
|
The algorithmic question of whether @math is connected for a given @math is addressed in @cite_3 @cite_9 , where it is shown that the problem is coNP-complete for @math and bipartite @math , but polynomial-time solvable for planar bipartite @math .
|
{
"cite_N": [
"@cite_9",
"@cite_3"
],
"mid": [
"2103875227",
"2129794490"
],
"abstract": [
"For a 3-colourable graph G, the 3-colour graph of G, denoted C_3(G), is the graph with node set the proper vertex 3-colourings of G, and two nodes adjacent whenever the corresponding colourings differ on precisely one vertex of G. We consider the following question: given G, how easily can we decide whether or not C_3(G) is connected? We show that the 3-colour graph of a 3-chromatic graph is never connected, and characterise the bipartite graphs for which C_3(G) is connected. We also show that the problem of deciding the connectedness of the 3-colour graph of a bipartite graph is coNP-complete, but that restricted to planar bipartite graphs, the question is answerable in polynomial time.",
"For a positive integer k and a graph G, the k-colour graph of G, C_k(G), is the graph that has the proper k-vertex-colourings of G as its vertex set, and two k-colourings are joined by an edge in C_k(G) if they differ in colour on just one vertex of G. In this note some results on the connectedness of C_k(G) are proved. In particular, it is shown that if G has chromatic number k ∈ {2,3}, then C_k(G) is not connected. On the other hand, for k>=4 there are graphs with chromatic number k for which C_k(G) is not connected, and there are k-chromatic graphs for which C_k(G) is connected."
]
}
|
1403.6347
|
2953047303
|
The @math -colouring reconfiguration problem asks whether, for a given graph @math , two proper @math -colourings @math and @math of @math , and a positive integer @math , there exists a sequence of at most @math proper @math -colourings of @math which starts with @math and ends with @math and where successive colourings in the sequence differ on exactly one vertex of @math . We give a complete picture of the parameterized complexity of the @math -colouring reconfiguration problem for each fixed @math when parameterized by @math . First we show that the @math -colouring reconfiguration problem is polynomial-time solvable for @math , settling an open problem of Cereceda, van den Heuvel and Johnson. Then, for all @math , we show that the @math -colouring reconfiguration problem, when parameterized by @math , is fixed-parameter tractable (addressing a question of Mouawad, Nishimura, Raman, Simjour and Suzuki) but that it has no polynomial kernel unless the polynomial hierarchy collapses.
|
Finally, the study of the diameter of @math raises interesting questions. In @cite_12 it is shown that every component of @math has diameter polynomial (in fact quadratic) in the size of @math . On the other hand, for @math , explicit constructions @cite_5 are given of graphs @math for which @math has at least one component with diameter exponential in the size of @math . It is known that if @math is a @math -degenerate graph then @math is connected and it is conjectured that in this case @math has diameter polynomial in the size of @math @cite_3 ; for graphs of treewidth @math the conjecture has been proved in the affirmative @cite_14 .
|
{
"cite_N": [
"@cite_5",
"@cite_14",
"@cite_3",
"@cite_12"
],
"mid": [
"",
"1982587570",
"2129794490",
"2097520314"
],
"abstract": [
"",
"Let k be an integer. Two vertex k-colorings of a graph are adjacent if they differ on exactly one vertex. A graph is k-mixing if any proper k-coloring can be transformed into any other through a sequence of adjacent proper k-colorings. Any graph is (tw+2)-mixing, where tw is the treewidth of the graph (Cereceda 2006). We prove that the shortest sequence between any two (tw+2)-colorings is at most quadratic, a problem left open in (2012). Jerrum proved that any graph is k-mixing if k is at least the maximum degree plus two. We improve Jerrum's bound using the Grundy number, which is the worst number of colors in a greedy coloring.",
"For a positive integer k and a graph G, the k-colour graph of G, C_k(G), is the graph that has the proper k-vertex-colourings of G as its vertex set, and two k-colourings are joined by an edge in C_k(G) if they differ in colour on just one vertex of G. In this note some results on the connectedness of C_k(G) are proved. In particular, it is shown that if G has chromatic number k ∈ {2,3}, then C_k(G) is not connected. On the other hand, for k>=4 there are graphs with chromatic number k for which C_k(G) is not connected, and there are k-chromatic graphs for which C_k(G) is connected.",
"Given a 3-colorable graph G together with two proper vertex 3-colorings α and β of G, consider the following question: is it possible to transform α into β by recoloring vertices of G one at a time, making sure that all intermediate colorings are proper 3-colorings? We prove that this question is answerable in polynomial time. We do so by characterizing the instances G, α, β where the transformation is possible; the proof of this characterization is via an algorithm that either finds a sequence of recolorings between α and β, or exhibits a structure which proves that no such sequence exists. In the case that a sequence of recolorings does exist, the algorithm uses O(|V(G)|^2) recoloring steps and in many cases returns a shortest sequence of recolorings. We also exhibit a class of instances G, α, β that require Ω(|V(G)|^2) recoloring steps. © 2010 Wiley Periodicals, Inc. J Graph Theory 67: 69-82, 2011"
]
}
|
1403.6632
|
1440974867
|
In this paper, the ByoRISC (Build your own RISC) configurable application-specific instruction-set processor (ASIP) family is presented. ByoRISCs, as vendor-independent cores, provide extensive architectural parameters over a baseline processor, which can be customized by application-specific hardware extensions (ASHEs). Such extensions realize multi-input multi-output (MIMO) custom instructions with local state and load/store accesses to the data memory. ByoRISCs incorporate a true multi-port register file, zero-overhead custom instruction decoding, and scalable data forwarding mechanisms. Given these design decisions, ByoRISCs provide a unique combination of features that allow their use as architectural testbeds and the seamless and rapid development of new high-performance ASIPs. The performance characteristics of ByoRISCs, implemented as vendor-independent cores, have been evaluated for both ASIC and FPGA implementations, and it is proved that they provide a viable solution in FPGA-based system-on-a-chip design. A case study of an image processing pipeline is also presented to highlight the process of utilizing a ByoRISC custom processor. A peak performance speedup of up to 8.5 @math can be observed, whereas an average performance speedup of 4.4 @math on Xilinx Virtex-4 targets is achieved. In addition, ByoRISC outperforms an experimental VLIW architecture named VEX even in its 16-wide configuration for a number of data-intensive application kernels.
|
A disciplined approach to CI generation for extensible processors is found in @cite_19 , where the Xtensa processor http://www.tensilica.com is augmented with CIs that may combine VLIW, SIMD or fused (chained) operations. Although the Xtensa framework is highly automated and feature-rich, simultaneous generation of disjoint optimal MIMO CIs is not considered; instead, the CI generation process is divided into distinct stages with different objectives. CI generation is also used for designing custom coprocessors (ARM OptimoDE @cite_27 ) or two-input one-output functional units (MIPS CorExtend @cite_2 ) with internal register storage @cite_16 .
|
{
"cite_N": [
"@cite_19",
"@cite_27",
"@cite_16",
"@cite_2"
],
"mid": [
"2075921636",
"2155509874",
"2171594157",
"183663385"
],
"abstract": [
"An application-specific instruction-set processor (ASIP) is ideally suited for embedded applications that have demanding performance, size, and power requirements that cannot be satisfied by a general purpose processor. ASIPs also have time-to-market and programmability advantages when compared to custom ASICs. The AutoTIE system simplifies the creation of ASIPs by automatically enhancing a base processor with application specific instruction set architecture (ISA) extensions, including instructions, operations, and register files. The new instructions, operations, and register files are automatically recognized and exploited by the entire software tool chain, including the C/C++ compiler. Thus, taking advantage of the generated ASIP does not require any changes to the application or any assembly language coding. AutoTIE uses the C/C++ compiler to analyze an application, and based on the analysis generates thousands, or even millions, of possible ISA extensions for the application. AutoTIE then uses performance and hardware estimation techniques to combine the ISA extensions into a large number of potential ASIPs, and for a range of hardware costs, chooses the ASIP that provides the maximum performance improvement. For example, for an application performing a radix-4 FFT, AutoTIE considers over 34,000 potential sets of ISA extensions. For hardware costs ranging from 7800 gates to 128,000 gates, AutoTIE combines these extensions to form 31 ASIPs, which provide performance improvements ranging from a factor of 1.12 to a factor of 11.3 compared to a general-purpose processor.",
"Instruction set customization is an effective way to improve processor performance. Critical portions of application data-flow graphs are collapsed for accelerated execution on specialized hardware. Collapsing dataflow subgraphs will compress the latency along critical paths and reduce the number of intermediate results stored in the register file. While custom instructions can be effective, the time and cost of designing a new processor for each application is immense. To overcome this roadblock, this paper proposes a flexible architectural framework to transparently integrate custom instructions into a general-purpose processor. Hardware accelerators are added to the processor to execute the collapsed subgraphs. A simple microarchitectural interface is provided to support a plug-and-play model for integrating a wide range of accelerators into a pre-designed and verified processor core. The accelerators are exploited using an approach of static identification and dynamic realization. The compiler is responsible for identifying profitable subgraphs, while the hardware handles discovery, mapping, and execution of compatible subgraphs. This paper presents the design of a plug-and-play transparent accelerator system and evaluates the cost/performance implications of the design.",
"Design tools for application specific instruction set processors (ASIPs) are an important discipline in systems-level design for wireless communications and other embedded application areas. Some ASIPs are still designed completely from scratch to meet extreme efficiency demands. However, there is also a trend towards use of partially predefined, configurable RISC-like embedded processor cores that can be quickly tuned to given applications by means of instruction set extension (ISE) techniques. While the problem of optimized ISE synthesis has been studied well from a theoretical perspective, there are still few approaches to an overall HW/SW design flow for configurable cores that take all real-life constraints into account. In this paper, we therefore present a novel procedure for automated ISE synthesis that accommodates both user-specified and processor-specific constraints in a flexible way and that produces value-optimized ISE solutions in short time. Driven by an advanced application C code analysis/profiling frontend, the ISE synthesis core algorithm is embedded into a complete design flow, where the backend is formed by a state-of-the-art industrial tool for processor configuration, ISE HW synthesis, and SW tool retargeting. The proposed design flow, including ISE synthesis, is demonstrated via several benchmarks for the MIPS CorExtend configurable RISC processor platform.",
""
]
}
|
1403.6632
|
1440974867
|
In this paper, the ByoRISC (Build your own RISC) configurable application-specific instruction-set processor (ASIP) family is presented. ByoRISCs, as vendor-independent cores, provide extensive architectural parameters over a baseline processor, which can be customized by application-specific hardware extensions (ASHEs). Such extensions realize multi-input multi-output (MIMO) custom instructions with local state and load/store accesses to the data memory. ByoRISCs incorporate a true multi-port register file, zero-overhead custom instruction decoding, and scalable data forwarding mechanisms. Given these design decisions, ByoRISCs provide a unique combination of features that allow their use as architectural testbeds and the seamless and rapid development of new high-performance ASIPs. The performance characteristics of ByoRISCs, implemented as vendor-independent cores, have been evaluated for both ASIC and FPGA implementations, and it is proved that they provide a viable solution in FPGA-based system-on-a-chip design. A case study of an image processing pipeline is also presented to highlight the process of utilizing a ByoRISC custom processor. A peak performance speedup of up to 8.5 @math can be observed, whereas an average performance speedup of 4.4 @math on Xilinx Virtex-4 targets is achieved. In addition, ByoRISC outperforms an experimental VLIW architecture named VEX even in its 16-wide configuration for a number of data-intensive application kernels.
|
MOLEN @cite_29 is a relevant approach that extends a basic architecture (PowerPC) with new instructions to interface and configure a number of loosely-coupled custom computation units. While MOLEN permits the simultaneous operation of the processor core and these units, it is not usable for optimizing fine-grain program regions. For the specific coprocessor paradigm used, the control/data communication overhead often prohibits the implementation of useful extensions for irregular code.
|
{
"cite_N": [
"@cite_29"
],
"mid": [
"2146188244"
],
"abstract": [
"In this paper, we present a polymorphic processor paradigm incorporating both general-purpose and custom computing processing. The proposal incorporates an arbitrary number of programmable units, exposes the hardware to the programmers/designers, and allows them to modify and extend the processor functionality at will. To achieve the previously stated attributes, we present a new programming paradigm, a new instruction set architecture, a microcode-based microarchitecture, and a compiler methodology. The programming paradigm, in contrast with the conventional programming paradigms, allows general-purpose conventional code and hardware descriptions to coexist in a program. In our proposal, for a given instruction set architecture, a one-time instruction set extension of eight instructions is sufficient to implement the reconfigurable functionality of the processor. We propose a microarchitecture based on reconfigurable hardware emulation to allow high-speed reconfiguration and execution. To prove the viability of the proposal, we experimented with the MPEG-2 encoder and decoder and a Xilinx Virtex II Pro FPGA. We have implemented three operations, SAD, DCT, and IDCT. The overall attainable application speedup for the MPEG-2 encoder and decoder is between 2.64-3.18 and between 1.56-1.94, respectively, representing between 93 percent and 98 percent of the theoretically obtainable speedups."
]
}
|
1403.6632
|
1440974867
|
In this paper, the ByoRISC (Build your own RISC) configurable application-specific instruction-set processor (ASIP) family is presented. ByoRISCs, as vendor-independent cores, provide extensive architectural parameters over a baseline processor, which can be customized by application-specific hardware extensions (ASHEs). Such extensions realize multi-input multi-output (MIMO) custom instructions with local state and load/store accesses to the data memory. ByoRISCs incorporate a true multi-port register file, zero-overhead custom instruction decoding, and scalable data forwarding mechanisms. Given these design decisions, ByoRISCs provide a unique combination of features that allow their use as architectural testbeds and the seamless and rapid development of new high-performance ASIPs. The performance characteristics of ByoRISCs, implemented as vendor-independent cores, have been evaluated for both ASIC and FPGA implementations, and it is proved that they provide a viable solution in FPGA-based system-on-a-chip design. A case study of an image processing pipeline is also presented to highlight the process of utilizing a ByoRISC custom processor. A peak performance speedup of up to 8.5 @math can be observed, whereas an average performance speedup of 4.4 @math on Xilinx Virtex-4 targets is achieved. In addition, ByoRISC outperforms an experimental VLIW architecture named VEX even in its 16-wide configuration for a number of data-intensive application kernels.
|
Recent work in @cite_21 allows the extension of a processor by a unit that can execute custom functionalities with up to six inputs and three outputs. This approach improves on the Nios-II limitation; however, it is still affected by the limited macroinstruction encoding space. ByoRISC overcomes this problem by using an intrinsic decoding phase.
|
{
"cite_N": [
"@cite_21"
],
"mid": [
"2523456914"
],
"abstract": [
"Recent study shows that a further speedup can be achieved by RISC-based extensible processors if the incorporated custom functional units (CFUs) can execute functions with more than two inputs and one output. However, mechanisms to execute multiple-input, multiple-output (MIMO) custom functions in a RISC processor have not been addressed. This paper proposes an extension for single-issue RISC processors based on a CFU that can execute custom functions with up to six inputs and three outputs. To minimize the change to the core processor, we maintain the operand bandwidth of two inputs, one output per cycle and transfer the extra operands and results using repeated custom instructions. While keeping such an limit sacrifices some speedup, our experiments show that the MIMO extension can still achieve an average 51 increase in speedup compared to a dual-input, single-output (DISO) extension and an average 27 increase in speedup compared to a multiple-input, single-output (MISO) extension."
]
}
|
1403.6676
|
2151460988
|
In Bitcoin, transaction malleability describes the fact that the signatures that prove the ownership of bitcoins being transferred in a transaction do not provide any integrity guarantee for the signatures themselves. This allows an attacker to mount a malleability attack in which it intercepts, modifies, and rebroadcasts a transaction, causing the transaction issuer to believe that the original transaction was not confirmed. In February 2014 MtGox, once the largest Bitcoin exchange, closed and filed for bankruptcy claiming that attackers used malleability attacks to drain its accounts. In this work we use traces of the Bitcoin network for over a year preceding the filing to show that, while the problem is real, there was no widespread use of malleability attacks before the closure of MtGox.
|
@cite_5 @cite_1 mention transaction malleability as a potential problem in contracts and two-party computations based on Bitcoin transactions. These schemes can be used, for example, to implement a fair coin toss @cite_9 , auctions, or decentralized voting. Their method to eliminate transaction malleability in their protocols resembles our construction of conflict sets, i.e., eliminating malleable parts of the transaction in the hash calculation. However, they limit their observations to advanced schemes for encoding contracts and two-party computations.
|
{
"cite_N": [
"@cite_5",
"@cite_9",
"@cite_1"
],
"mid": [
"2949204748",
"1512699878",
"1599549092"
],
"abstract": [
"",
"In this short note we show that the Bitcoin network can allow remote parties to gamble with their bitcoins by tossing a fair or biased coin, with no need for a trusted party, and without the possibility of extortion by dishonest parties who try to abort. The superfluousness of having a trusted party implies that there is no house edge, as is the case with centralized services that are supposed to generate a profit.",
"BitCoin transactions are malleable in a sense that given a transaction an adversary can easily construct an equivalent transaction which has a different hash. This can pose a serious problem in some BitCoin distributed contracts in which changing a transaction's hash may result in the protocol disruption and a financial loss. The problem mostly concerns protocols, which use a \"refund\" transaction to withdraw a deposit in a case of the protocol interruption. In this short note, we show a general technique for creating malleability-resilient \"refund\" transactions, which does not require any modification of the BitCoin protocol. Applying our technique to our previous paper \"Fair Two-Party Computations via the BitCoin Deposits\" (Cryptology ePrint Archive, 2013) allows to achieve fairness in any Two-Party Computation using the BitCoin protocol in its current version."
]
}
|
1403.6676
|
2151460988
|
In Bitcoin, transaction malleability describes the fact that the signatures that prove the ownership of bitcoins being transferred in a transaction do not provide any integrity guarantee for the signatures themselves. This allows an attacker to mount a malleability attack in which it intercepts, modifies, and rebroadcasts a transaction, causing the transaction issuer to believe that the original transaction was not confirmed. In February 2014 MtGox, once the largest Bitcoin exchange, closed and filed for bankruptcy claiming that attackers used malleability attacks to drain its accounts. In this work we use traces of the Bitcoin network for over a year preceding the filing to show that, while the problem is real, there was no widespread use of malleability attacks before the closure of MtGox.
|
A related class of doublespending attacks, which we shall refer to as classical doublespending, has received far more attention. In this class of attacks the transaction issuer creates two transactions to defraud the receiving party. @cite_3 first studied the problem of doublespending arising from fast transactions, i.e., accepting non-confirmed transactions. Rosenfeld @cite_12 showed that the success probability of a doublespending attack can be further increased if coupled with computational resources. @cite_2 later improved the security of accepting fast payments by observing how transactions are propagated in the network.
|
{
"cite_N": [
"@cite_12",
"@cite_3",
"@cite_2"
],
"mid": [
"1593596963",
"2949515390",
"2059272164"
],
"abstract": [
"Bitcoin is the world's first decentralized digital currency. Its main technical innovation is the use of a blockchain and hash-based proof of work to synchronize transactions and prevent double-spending the currency. While the qualitative nature of this system is well understood, there is widespread confusion about its quantitative aspects and how they relate to attack vectors and their countermeasures. In this paper we take a look at the stochastic processes underlying typical attacks and their resulting probabilities of success.",
"Bitcoin is a decentralized payment system that is based on Proof-of-Work. Bitcoin is currently gaining popularity as a digital currency; several businesses are starting to accept Bitcoin transactions. An example case of the growing use of Bitcoin was recently reported in the media; here, Bitcoins were used as a form of fast payment in a local fast-food restaurant. In this paper, we analyze the security of using Bitcoin for fast payments, where the time between the exchange of currency and goods is short (i.e., in the order of few seconds). We focus on doublespending attacks on fast payments and demonstrate that these attacks can be mounted at low cost on currently deployed versions of Bitcoin. We further show that the measures recommended by Bitcoin developers for the use of Bitcoin in fast transactions are not always effective in resisting double-spending; we show that if those recommendations are integrated in future Bitcoin implementations, double-spending attacks on Bitcoin will still be possible. Finally, we leverage on our findings and propose a lightweight countermeasure that enables the detection of doublespending attacks in fast transactions.",
"Cashless payments are nowadays ubiquitous and decentralized digital currencies like Bitcoin are increasingly used as means of payment. However, due to the delay of the transaction confirmation in Bitcoin, it is not used for payments that rely on quick transaction confirmation. We present a concept that addresses this drawback of Bitcoin and allows it to be used for fast transactions. We evaluate the performance of the concept using double-spending attacks and show that, employing our concept, the success of such attacks diminishes to less than 0.09 . Moreover, we present a real world application: We modified a snack vending machine to accept Bitcoin payments and make use of fast transaction confirmations."
]
}
|
1403.5945
|
2184604592
|
An additive 2-basis with range n is restricted if its largest element is n/2. Among the restricted 2-bases of given length k, the ones that have the greatest range are extremal restricted. We describe an algorithm that finds the extremal restricted 2-bases of a given length, and we list them for lengths up to k = 41.
|
Riddell and Chan discuss the connection between symmetric and restricted bases @cite_2 . Mossige notes that symmetric bases @math can be efficiently searched by scanning through admissible bases of length @math @cite_0 . For symmetric bases this is sufficient; the second half of a symmetric basis is a mirror image of the first half, and then Rohrbach's theorem ensures that the constructed set @math is a basis for @math . For asymmetric restricted bases, a similar search can be conducted separately for the two halves of the basis (prefix and suffix). However, since Rohrbach's theorem does not apply to asymmetric bases, the construction does not automatically yield a basis for @math . This must be checked separately.
|
{
"cite_N": [
"@cite_0",
"@cite_2"
],
"mid": [
"2058902146",
"1999029321"
],
"abstract": [
"New algorithms, based on a very efficient method to compute the h-range, have been used to extend known tables of the extremal h-range, to complete the solution in the case k = 3, and to find a lower bound for the extremal 2-range.",
"By means of a computer search, some extremal additive bases have been constructed which have heretofore been unknown. A set A: 0 = a1 < a2 < ... < ak of integers is called a 2-basis for n if each of 0, 1, 2, ... , n can be represented as the sum of two summands from A, with repetition of summands allowed. If the elements of A do not exceed n/2, we call A a restricted 2-basis for n. A more general idea is that of an h-basis, where h summands are used in the representation. However, in this note we shall be dealing only with 2-bases, which we may refer to more briefly as \"bases\". Rohrbach [5] considered the function n(k) = n2(k), which is the largest integer n for which a basis of k elements exists. An extremal basis is a basis of k elements for n(k). Our object here is to provide a list of extremal bases for 2 ≤ k ≤ 14, most of which have been unknown heretofore. We shall also compare the results of some basis constructions made by Rohrbach. First we give some facts about n(k). By a simple combinatorial argument one obtains"
]
}
|
1403.5945
|
2184604592
|
An additive 2-basis with range n is restricted if its largest element is n/2. Among the restricted 2-bases of given length k, the ones that have the greatest range are extremal restricted. We describe an algorithm that finds the extremal restricted 2-bases of a given length, and we list them for lengths up to k = 41.
|
The final ingredient is the "gaps test" by Challis @cite_6 . Based on a simple combinatorial argument, it prunes the search tree of admissible bases if they are required to have a range of at least a given target value @math . In section we shall prove lower bounds for the ranges of the prefix and the mirrored suffix. With these lower bounds the gaps test prunes the search tree very efficiently.
|
{
"cite_N": [
"@cite_6"
],
"mid": [
"2017959893"
],
"abstract": [
"A k = 1, a 2 , ..., a k is an h-basis for n if every positive integer c ≤ n can be expressed as the sum of no more than h values a i ; an extremal h-basis A k is one for which n is as large as possible. Computing such bases has become known as the Postage Stamp Problem, and this paper describes two new techniques, one appropriate for large k and the other for large h, which help extend known results in both dimensions. The results themselves are presented as an Appendix"
]
}
|
1403.5598
|
1620650588
|
Wyner's elegant model of wiretap channel exploits noise in the communication channel to provide perfect secrecy against a computationally unlimited eavesdropper without requiring a shared key. We consider an adversarial model of wiretap channel proposed in [18,19] where the adversary is active: it selects a fraction @math of the transmitted codeword to eavesdrop and a fraction @math of the codeword to corrupt by "adding" adversarial error. It was shown that this model also captures network adversaries in the setting of 1-round Secure Message Transmission [8]. It was proved that secure communication (1-round) is possible if and only if @math . In this paper we show that by allowing communicants to have access to a public discussion channel (authentic communication without secrecy) secure communication becomes possible even if @math . We formalize the model of protocol and for two efficiency measures, information rate and message round complexity derive tight bounds. We also construct a rate optimal protocol family with minimum number of message rounds. We show application of these results to Secure Message Transmission with Public Discussion (SMT-PD), and in particular show a new lower bound on transmission rate of these protocols together with a new construction of an optimal SMT-PD protocol.
|
Maurer @cite_18 first introduced public discussion channels in the context of key agreement over wiretap channels; this was also independently considered in @cite_20 . Since the channel is considered free, the established key can be used to send the message securely over this channel, and so the communication cost of the message transmission stays the same as that of the key establishment. In a key agreement protocol, the goal of Alice and Bob is to generate a common random key.
|
{
"cite_N": [
"@cite_18",
"@cite_20"
],
"mid": [
"60616127",
"2108777864"
],
"abstract": [
"Consider the following scenario: Alice and Bob, two parties who share no secret key initially but whose goal it is to generate a (large amount of) information-theoretically secure (or unconditionally secure) shared secret key, are connected only by an insecure public channel to which an eavesdropper Eve has perfect (read) access. Moreover, there exists a satellite broadcasting random bits at a very low signal power. Alice and Bob can receive these bits with certain bit error probabilities εA and εB, respectively (e.g. εA = εB = 30%) while Eve is assumed to receive the same bits much more reliably with bit error probability εE ≪ εA, εB (e.g. εE = 1%). The errors on the three channels are assumed to occur at least partially independently. Practical protocols are discussed by which Alice and Bob can generate a secret key despite the fact that Eve possesses more information than both of them and is assumed to have unlimited computational resources as well as complete knowledge of the protocols. The described scenario is a special case of a much more general setup in which Alice, Bob and Eve are assumed to know random variables X, Y and Z jointly distributed according to some probability distribution PXYZ, respectively. The results of this paper suggest to build cryptographic systems that are provably secure against enemies with unlimited computing power under realistic assumptions about the partial independence of the noise on the involved communication channels.",
"As the first part of a study of problems involving common randomness at distant locations, information-theoretic models of secret sharing (generating a common random key at two terminals, without letting an eavesdropper obtain information about this key) are considered. The concept of key-capacity is defined. Single-letter formulas of key-capacity are obtained for several models, and bounds to key-capacity are derived for other models."
]
}
|
1403.5598
|
1620650588
|
Wyner's elegant model of wiretap channel exploits noise in the communication channel to provide perfect secrecy against a computationally unlimited eavesdropper without requiring a shared key. We consider an adversarial model of wiretap channel proposed in [18,19] where the adversary is active: it selects a fraction @math of the transmitted codeword to eavesdrop and a fraction @math of the codeword to corrupt by "adding" adversarial error. It was shown that this model also captures network adversaries in the setting of 1-round Secure Message Transmission [8]. It was proved that secure communication (1-round) is possible if and only if @math . In this paper we show that by allowing communicants to have access to a public discussion channel (authentic communication without secrecy) secure communication becomes possible even if @math . We formalize the model of protocol and for two efficiency measures, information rate and message round complexity derive tight bounds. We also construct a rate optimal protocol family with minimum number of message rounds. We show application of these results to Secure Message Transmission with Public Discussion (SMT-PD), and in particular show a new lower bound on transmission rate of these protocols together with a new construction of an optimal SMT-PD protocol.
|
These schemes provide information-theoretic security. The key agreement protocol in @cite_18 is over the binary symmetric channel, and that in @cite_20 is over a discrete memoryless channel. The adversary is passive and only eavesdrops on the transmission, the same as in the wiretap channel with a passive adversary.
|
{
"cite_N": [
"@cite_18",
"@cite_20"
],
"mid": [
"60616127",
"2108777864"
],
"abstract": [
"Consider the following scenario: Alice and Bob, two parties who share no secret key initially but whose goal it is to generate a (large amount of) information-theoretically secure (or unconditionally secure) shared secret key, are connected only by an insecure public channel to which an eavesdropper Eve has perfect (read) access. Moreover, there exists a satellite broadcasting random bits at a very low signal power. Alice and Bob can receive these bits with certain bit error probabilities εA and εB, respectively (e.g. εA = εB = 30%) while Eve is assumed to receive the same bits much more reliably with bit error probability εE ≪ εA, εB (e.g. εE = 1%). The errors on the three channels are assumed to occur at least partially independently. Practical protocols are discussed by which Alice and Bob can generate a secret key despite the fact that Eve possesses more information than both of them and is assumed to have unlimited computational resources as well as complete knowledge of the protocols. The described scenario is a special case of a much more general setup in which Alice, Bob and Eve are assumed to know random variables X, Y and Z jointly distributed according to some probability distribution PXYZ, respectively. The results of this paper suggest to build cryptographic systems that are provably secure against enemies with unlimited computing power under realistic assumptions about the partial independence of the noise on the involved communication channels.",
"As the first part of a study of problems involving common randomness at distant locations, information-theoretic models of secret sharing (generating a common random key at two terminals, without letting an eavesdropper obtain information about this key) are considered. The concept of key-capacity is defined. Single-letter formulas of key-capacity are obtained for several models, and bounds to key-capacity are derived for other models."
]
}
|
1403.5598
|
1620650588
|
Wyner's elegant model of wiretap channel exploits noise in the communication channel to provide perfect secrecy against a computationally unlimited eavesdropper without requiring a shared key. We consider an adversarial model of wiretap channel proposed in [18,19] where the adversary is active: it selects a fraction @math of the transmitted codeword to eavesdrop and a fraction @math of the codeword to corrupt by "adding" adversarial error. It was shown that this model also captures network adversaries in the setting of 1-round Secure Message Transmission [8]. It was proved that secure communication (1-round) is possible if and only if @math . In this paper we show that by allowing communicants to have access to a public discussion channel (authentic communication without secrecy) secure communication becomes possible even if @math . We formalize the model of protocol and for two efficiency measures, information rate and message round complexity derive tight bounds. We also construct a rate optimal protocol family with minimum number of message rounds. We show application of these results to Secure Message Transmission with Public Discussion (SMT-PD), and in particular show a new lower bound on transmission rate of these protocols together with a new construction of an optimal SMT-PD protocol.
|
In wiretap channel with public discussion, once a shared key is established, secure message transmission can be achieved by encrypting the message and sending it over the channel. Since the channel can be freely used, the communication cost of the message transmission will stay the same as that of the key establishment. Our construction also has two steps: a key establishment, followed by encrypting the message and sending it over the public discussion channel. This is also the approach in @cite_8 (Protocol I) and @cite_15 .
|
{
"cite_N": [
"@cite_15",
"@cite_8"
],
"mid": [
"2139127819",
"2010825791"
],
"abstract": [
"In a secure message transmission (SMT) scenario, a sender wants to send a message in a private and reliable way to a receiver. Sender and receiver are connected by n wires, t of which can be controlled by an adaptive adversary with unlimited computational resources. In Eurocrypt 2008, Garay and Ostrovsky considered an SMT scenario where sender and receiver have access to a public discussion channel and showed that secure and reliable communication is possible when n ≥ t + 1. In this paper, we will show that a secure protocol requires at least three rounds of communication and two rounds invocation of the public channel and hence give a complete answer to the open question raised by Garay and Ostrovsky. We also describe a round optimal protocol that has constant transmission rate over the public channel.",
"In the problem of secure message transmission in the public discussion model (SMT-PD), a sender wants to send a message M_S ∈ {0,1}^l to a receiver privately and reliably. Sender and receiver are connected by n channels, also known as simple wires, up to t of which may be maliciously controlled by a computationally unbounded adversary, as well as one public channel, which is reliable but not private. The SMT-PD abstraction has been shown instrumental in achieving secure multiparty computation on sparse networks, where a subset of the nodes are able to realize a broadcast functionality, which plays the role of the public channel. In this paper, we present the first SMT-PD protocol with sublinear (i.e., logarithmic in l, the message length) communication on the public channel. In addition, the protocol incurs a simple-wire communication complexity of O(ln n-t), which, as we also show, is optimal. By contrast, the best known bounds in both public and simple channels were linear. Furthermore, our protocol has an optimal round complexity of (3, 2), meaning three rounds, two of which must invoke the public channel. Finally, we ask the question whether some of the lower bounds on resource use for a single execution of SMT-PD can be beaten on average through amortization. In other words, if sender and receiver must send several messages back and forth (where later messages depend on earlier ones), can they do better than the naive solution of repeating an SMT-PD protocol each time? We show that amortization can indeed drastically reduce the use of the public channel; it is possible to limit the total number of uses of the public channel to two, no matter how many messages are ultimately sent between two nodes. (Since two uses of the public channel are required to send any reliable communication whatsoever, this is the best possible.)."
]
}
|
1403.5598
|
1620650588
|
Wyner's elegant model of wiretap channel exploits noise in the communication channel to provide perfect secrecy against a computationally unlimited eavesdropper without requiring a shared key. We consider an adversarial model of wiretap channel proposed in [18,19] where the adversary is active: it selects a fraction @math of the transmitted codeword to eavesdrop and a fraction @math of the codeword to corrupt by "adding" adversarial error. It was shown that this model also captures network adversaries in the setting of 1-round Secure Message Transmission [8]. It was proved that secure communication (1-round) is possible if and only if @math . In this paper we show that by allowing communicants to have access to a public discussion channel (authentic communication without secrecy) secure communication becomes possible even if @math . We formalize the model of protocol and for two efficiency measures, information rate and message round complexity derive tight bounds. We also construct a rate optimal protocol family with minimum number of message rounds. We show application of these results to Secure Message Transmission with Public Discussion (SMT-PD), and in particular show a new lower bound on transmission rate of these protocols together with a new construction of an optimal SMT-PD protocol.
|
The model of adversarial wiretap in @cite_9 @cite_1 extends the wiretap II model to include active (jamming) adversarial noise.
|
{
"cite_N": [
"@cite_9",
"@cite_1"
],
"mid": [
"2027174613",
"2963230393"
],
"abstract": [
"Channels with adversarial errors have been widely considered in recent years. In this paper we propose a new type of adversarial channel that is defined by two parameters ρr and ρw, specifying the read and write power of the adversary: for a codeword of length n, adversary can read ρrn components and add an error vector of weight up to ρwn to the codeword. We give our motivations, define performance criteria for codes that provide reliable communication over these channels, and describe two constructions, one deterministic and one probabilistic, for these codes. We discuss our results and outline our direction for future research.",
"We introduce randomized Limited View (LV) adversary codes that provide protection against an adversary that uses their partial view of the channel to construct an adversarial error vector that is added to the channel. For a codeword of length N, the adversary selects a subset of size ρrN of components to “see”, and then “adds” an adversarial error vector of weight ρwN to the codeword. Performance of the code is measured by the probability of the decoder failure in recovering the sent message. An (N, qRN, δ)-limited view adversary is a code of rate R that ensures that the success chance of the adversary in making decoder to fail is bounded by δ. Our main motivation to study these codes is providing protection for wireless communication at the physical layer of networks. We formalize the definition of adversarial error and decoder failure, construct a code with efficient encoding and decoding that allows the adversary to, depending on the code rate, read up to half of the sent codeword and add error on the same coordinates. The code is non-linear, has an efficient decoding algorithm, and is constructed using a message authentication code (MAC) and a Folded Reed-Solomon (FRS) code. The decoding algorithm uses an innovative approach that combines the list decoding algorithm of the FRS codes and the MAC verification algorithm to eliminate the exponential size of the list output from the decoding algorithm. We discuss our results and future work."
]
}
|
1403.5598
|
1620650588
|
Wyner's elegant model of wiretap channel exploits noise in the communication channel to provide perfect secrecy against a computationally unlimited eavesdropper without requiring a shared key. We consider an adversarial model of wiretap channel proposed in [18,19] where the adversary is active: it selects a fraction @math of the transmitted codeword to eavesdrop and a fraction @math of the codeword to corrupt by "adding" adversarial error. It was shown that this model also captures network adversaries in the setting of 1-round Secure Message Transmission [8]. It was proved that secure communication (1-round) is possible if and only if @math . In this paper we show that by allowing communicants to have access to a public discussion channel (authentic communication without secrecy) secure communication becomes possible even if @math . We formalize the model of protocol and for two efficiency measures, information rate and message round complexity derive tight bounds. We also construct a rate optimal protocol family with minimum number of message rounds. We show application of these results to Secure Message Transmission with Public Discussion (SMT-PD), and in particular show a new lower bound on transmission rate of these protocols together with a new construction of an optimal SMT-PD protocol.
|
Other models of adversarial wiretap, @cite_21 @cite_25 @cite_22 @cite_2 , and their relationship to the model considered here, have been discussed in @cite_0 . The paper also establishes a relation between a special subset of adversarial wiretap codes and 1-round Secure Message Transmission (SMT) @cite_16 protocols for networks. In SMT, Alice and Bob are connected by @math node-disjoint communication paths in a network, a subset of which is controlled by an adversary who can see and arbitrarily change the transmissions over the controlled paths. SMT protocols provide security and reliability against this adversary.
|
{
"cite_N": [
"@cite_22",
"@cite_21",
"@cite_0",
"@cite_2",
"@cite_16",
"@cite_25"
],
"mid": [
"2115321613",
"",
"2123658733",
"",
"2092148937",
"1982581119"
],
"abstract": [
"We investigate the effect of certain active attacks on the secrecy capacity of wiretap channels by considering arbitrarily varying wiretap channels. We establish a lower bound for the secrecy capacity with randomized coding of a class of such channels and an upper bound for that of all such channels. We show that if the arbitrarily varying wiretap channel possesses a bad “averaged” state, namely one in which the legitimate receiver is degraded with respect to the eavesdropper, then secure communication is not possible.",
"",
"In this work, the critical role of noisy feedback in enhancing the secrecy capacity of the wiretap channel is established. Unlike previous works, where a noiseless public discussion channel is used for feedback, the feed-forward and feedback signals share the same noisy channel in the present model. Quite interestingly, this noisy feedback model is shown to be more advantageous in the current setting. More specifically, the discrete memoryless modulo-additive channel with a full-duplex destination node is considered first, and it is shown that the judicious use of feedback increases the secrecy capacity to the capacity of the source-destination channel in the absence of the wiretapper. In the achievability scheme, the feedback signal corresponds to a private key, known only to the destination. In the half-duplex scheme, a novel feedback technique that always achieves a positive perfect secrecy rate (even when the source-wiretapper channel is less noisy than the source-destination channel) is proposed. These results hinge on the modulo-additive property of the channel, which is exploited by the destination to perform encryption over the channel without revealing its key to the source. Finally, this scheme is extended to the continuous real valued modulo-Lambda channel where it is shown that the secrecy capacity with feedback is also equal to the capacity in the absence of the wiretapper.",
"",
"This paper studies the problem of perfectly secure communication in general network in which processors and communication lines may be faulty. Lower bounds are obtained on the connectivity required for successful secure communication. Efficient algorithms are obtained that operate with this connectivity and rely on no complexity-theoretic assumptions. These are the first algorithms for secure communication in a general network to simultaneously achieve the three goals of perfect secrecy, perfect resiliency, and worst-case time linear in the diameter of the network.",
"The classical wiretap channel models secure communication in the presence of a nonlegitimate wiretapper who has to be kept ignorant. Traditionally, the wiretapper is passive in the sense that he only tries to eavesdrop the communication using his received channel output. In this paper, more powerful active wiretappers are studied. In addition to eavesdropping, these wiretappers are able to influence the communication conditions of all users by controlling the corresponding channel states. Since legitimate transmitters and receivers do not know the actual channel realization or the wiretapper's strategy of influencing the channel states, they are confronted with arbitrarily varying channel (AVC) conditions. The corresponding secure communication scenario is, therefore, given by the arbitrarily varying wiretap channel (AVWC). In the context of AVCs, common randomness (CR) has been shown to be an important resource for establishing reliable communication, in particular, if the AVC is symmetrizable. But availability of CR also affects the strategy space of an active wiretapper as he may or may not exploit the common randomness for selecting the channel states. Several secrecy capacity results are derived for the AVWC. In particular, the CR-assisted secrecy capacity of the AVWC with an active wiretapper exploiting CR is established and analyzed in detail. Finally, it is demonstrated for active wiretappers how two orthogonal AVWCs, each useless for transmission of secure messages, can be super-activated to a useful channel allowing for secure communication at nonzero secrecy rates. To the best of our knowledge, this is not possible for passive wiretappers and, further, provides the first example of such super-activation, which has been expected to appear only in the area of quantum communication. Such knowledge is particularly important as it provides valuable insights for the design and the medium access control of future wireless communication systems."
]
}
|
1403.5598
|
1620650588
|
Wyner's elegant model of wiretap channel exploits noise in the communication channel to provide perfect secrecy against a computationally unlimited eavesdropper without requiring a shared key. We consider an adversarial model of wiretap channel proposed in [18,19] where the adversary is active: it selects a fraction @math of the transmitted codeword to eavesdrop and a fraction @math of the codeword to corrupt by "adding" adversarial error. It was shown that this model also captures network adversaries in the setting of 1-round Secure Message Transmission [8]. It was proved that secure communication (1-round) is possible if and only if @math . In this paper we show that by allowing communicants to have access to a public discussion channel (authentic communication without secrecy) secure communication becomes possible even if @math . We formalize the model of protocol and for two efficiency measures, information rate and message round complexity derive tight bounds. We also construct a rate optimal protocol family with minimum number of message rounds. We show application of these results to Secure Message Transmission with Public Discussion (SMT-PD), and in particular show a new lower bound on transmission rate of these protocols together with a new construction of an optimal SMT-PD protocol.
|
SMT-PD was introduced in @cite_19 as a building block in almost-everywhere secure multiparty computation. Bounds on the required number of rounds were derived in @cite_15 . In @cite_8 a bound on the transmission rate over the wires (not including communication over the public channel) was derived. The paper presents two constructions: protocol I is optimal in the sense that its transmission rate is of the order of the bound as the number of wires increases, and protocol II aims to minimize communication over the public channel. This reduction, however, comes at the expense of a lower rate on the wires. Table I compares the information rates of these constructions for large @math .
|
{
"cite_N": [
"@cite_19",
"@cite_15",
"@cite_8"
],
"mid": [
"",
"2139127819",
"2010825791"
],
"abstract": [
"",
"In a secure message transmission (SMT) scenario, a sender wants to send a message in a private and reliable way to a receiver. Sender and receiver are connected by n wires, t of which can be controlled by an adaptive adversary with unlimited computational resources. In Eurocrypt 2008, Garay and Ostrovsky considered an SMT scenario where sender and receiver have access to a public discussion channel and showed that secure and reliable communication is possible when n ≥ t + 1. In this paper, we will show that a secure protocol requires at least three rounds of communication and two rounds invocation of the public channel and hence give a complete answer to the open question raised by Garay and Ostrovsky. We also describe a round optimal protocol that has constant transmission rate over the public channel.",
"In the problem of secure message transmission in the public discussion model (SMT-PD), a sender wants to send a message MS ∈ 0,1 l to a receiver privately and reliably. Sender and receiver are connected by n channels, also known as simple wires, up to t of which may be maliciously controlled by a computationally unbounded adversary, as well as one public channel, which is reliable but not private. The SMT-PD abstraction has been shown instrumental in achieving secure multiparty computation on sparse networks, where a subset of the nodes are able to realize a broadcast functionality, which plays the role of the public channel. In this paper, we present the first SMT-PD protocol with sublinear (i.e., logarithmic in l, the message length) communication on the public channel. In addition, the protocol incurs a simple-wire communication complexity of O(ln/(n-t)), which, as we also show, is optimal. By contrast, the best known bounds in both public and simple channels were linear. Furthermore, our protocol has an optimal round complexity of (3, 2), meaning three rounds, two of which must invoke the public channel. Finally, we ask the question whether some of the lower bounds on resource use for a single execution of SMT-PD can be beaten on average through amortization. In other words, if sender and receiver must send several messages back and forth (where later messages depend on earlier ones), can they do better than the naive solution of repeating an SMT-PD protocol each time? We show that amortization can indeed drastically reduce the use of the public channel; it is possible to limit the total number of uses of the public channel to two, no matter how many messages are ultimately sent between two nodes. (Since two uses of the public channel are required to send any reliable communication whatsoever, this is the best possible.)."
]
}
|
1403.5996
|
2949570137
|
We study size-based schedulers, and focus on the impact of inaccurate job size information on response time and fairness. Our intent is to revisit previous results, which allude to performance degradation for even small errors on job size estimates, thus limiting the applicability of size-based schedulers. We show that scheduling performance is tightly connected to workload characteristics: in the absence of large skew in the job size distribution, even extremely imprecise estimates suffice to outperform size-oblivious disciplines. Instead, when job sizes are heavily skewed, known size-based disciplines suffer. In this context, we show -- for the first time -- the dichotomy of over-estimation versus under-estimation. The former is, in general, less problematic than the latter, as its effects are localized to individual jobs. Instead, under-estimation leads to severe problems that may affect a large number of jobs. We present an approach to mitigate these problems: our technique requires no complex modifications to original scheduling policies and performs very well. To support our claim, we proceed with a simulation-based evaluation that covers an unprecedented large parameter space, which takes into account a variety of synthetic and real workloads. As a consequence, we show that size-based scheduling is practical and outperforms alternatives in a wide array of use-cases, even in presence of inaccurate size information.
|
Wierman and Nuyens @cite_20 provide analytical results for a class of size-based policies, but rely on an impractical assumption: their results depend on a bound on the estimation error. In the common case where most estimates are close to the real value but there are outliers, the bound must be set according to the outliers, leading to pessimistic performance predictions. In our work, instead, we impose no bound on the error.
|
{
"cite_N": [
"@cite_20"
],
"mid": [
"2061388144"
],
"abstract": [
"Motivated by the optimality of Shortest Remaining Processing Time (SRPT) for mean response time, in recent years many computer systems have used the heuristic of \"favoring small jobs\" in order to dramatically reduce user response times. However, rarely do computer systems have knowledge of exact remaining sizes. In this paper, we introduce the class of e-SMART policies, which formalizes the heuristic of \"favoring small jobs\" in a way that includes a wide range of policies that schedule using inexact job-size information. Examples of e-SMART policies include (i) policies that use exact size information, e.g., SRPT and PSJF, (ii) policies that use job-size estimates, and (iii) policies that use a finite number of size-based priority levels. For many e-SMART policies, e.g., SRPT with inexact job-size information, there are no analytic results available in the literature. In this work, we prove four main results: we derive upper and lower bounds on the mean response time, the mean slowdown, the response-time tail, and the conditional response time of e-SMART policies. In each case, the results explicitly characterize the tradeoff between the accuracy of the job-size information used to prioritize and the performance of the resulting policy. Thus, the results provide designers insight into how accurate job-size information must be in order to achieve desired performance guarantees."
]
}
|
1403.5843
|
2058503096
|
Software-defined networking (SDN) is revolutionizing the networking industry, but current SDN programming platforms do not provide automated mechanisms for updating global configurations on the fly. Implementing updates by hand is challenging for SDN programmers because networks are distributed systems with hundreds or thousands of interacting nodes. Even if initial and final configurations are correct, naively updating individual nodes can lead to incorrect transient behaviors, including loops, black holes, and access control violations. This paper presents an approach for automatically synthesizing updates that are guaranteed to preserve specified properties. We formalize network updates as a distributed programming problem and develop a synthesis algorithm based on counterexample-guided search and incremental model checking. We describe a prototype implementation, and present results from experiments on real-world topologies and properties demonstrating that our tool scales to updates involving over one-thousand nodes.
|
This paper extends preliminary work reported in a workshop paper @cite_2 . We present a more precise and realistic network model, and replace expensive calls to an external model checker with calls to a new built-in incremental network model checker. We extend the DFS search procedure with optimizations and heuristics that improve performance dramatically. Finally, we evaluate our tool on a comprehensive set of benchmarks with real-world topologies. * Synthesis of concurrent programs. There is much previous work on synthesis for concurrent programs @cite_4 @cite_28 @cite_38 . In particular, work by Solar-Lezama et al. @cite_28 and @cite_4 synthesizes sequences of instructions. However, traditional synthesis and synthesis for networking are quite different. First, traditional synthesis is a game against the environment, which (in the concurrent programming case) provides inputs and schedules threads; in contrast, our synthesis problem involves reachability on the space of configurations. Second, our space of configurations is very rich, meaning that checking configurations is itself a model checking problem.
|
{
"cite_N": [
"@cite_28",
"@cite_38",
"@cite_4",
"@cite_2"
],
"mid": [
"1974514467",
"2098575846",
"2122735631",
"2040259387"
],
"abstract": [
"We describe PSketch, a program synthesizer that helps programmers implement concurrent data structures. The system is based on the concept of sketching, a form of synthesis that allows programmers to express their insight about an implementation as a partial program: a sketch. The synthesizer automatically completes the sketch to produce an implementation that matches a given correctness criteria. PSketch is based on a new counterexample-guided inductive synthesis algorithm (CEGIS) that generalizes the original sketch synthesis algorithm from Solar-Lezama et al. to cope efficiently with concurrent programs. The new algorithm produces a correct implementation by iteratively generating candidate implementations, running them through a verifier, and if they fail, learning from the counterexample traces to produce a better candidate; converging to a solution in a handful of iterations. PSketch also extends Sketch with higher-level sketching constructs that allow the programmer to express her insight as a \"soup\" of ingredients from which complicated code fragments must be assembled. Such sketches can be viewed as syntactic descriptions of huge spaces of candidate programs (over 10^8 candidates for some sketches we resolved). We have used the PSketch system to implement several classes of concurrent data structures, including lock-free queues and concurrent sets with fine-grained locking. We have also sketched some other concurrent objects including a sense-reversing barrier and a protocol for the dining philosophers problem; all these sketches resolved in under an hour.",
"We describe an approach for synthesizing data representations for concurrent programs. Our compiler takes as input a program written using concurrent relations and synthesizes a representation of the relations as sets of cooperating data structures as well as the placement and acquisition of locks to synchronize concurrent access to those data structures. The resulting code is correct by construction: individual relational operations are implemented correctly and the aggregate set of operations is serializable and deadlock free. The relational specification also permits a high-level optimizer to choose the best performing of many possible legal data representations and locking strategies, which we demonstrate with an experiment autotuning a graph benchmark.",
"We present a novel framework for automatic inference of efficient synchronization in concurrent programs, a task known to be difficult and error-prone when done manually. Our framework is based on abstract interpretation and can infer synchronization for infinite state programs. Given a program, a specification, and an abstraction, we infer synchronization that avoids all (abstract) interleavings that may violate the specification, but permits as many valid interleavings as possible. Combined with abstraction refinement, our framework can be viewed as a new approach for verification where both the program and the abstraction can be modified on-the-fly during the verification process. The ability to modify the program, and not only the abstraction, allows us to remove program interleavings not only when they are known to be invalid, but also when they cannot be verified using the given abstraction. We implemented a prototype of our approach using numerical abstractions and applied it to verify several interesting programs.",
"Updates to network configurations are notoriously difficult to implement correctly. Even if the old and new configurations are correct, the update process can introduce transient errors such as forwarding loops, dropped packets, and access control violations. The key factor that makes updates difficult to implement is that networks are distributed systems with hundreds or even thousands of nodes, but updates must be rolled out one node at a time. In networks today, the task of determining a correct sequence of updates is usually done manually -- a tedious and error-prone process for network operators. This paper presents a new tool for synthesizing network updates automatically. The tool generates efficient updates that are guaranteed to respect invariants specified by the operator. It works by navigating through the (restricted) space of possible solutions, learning from counterexamples to improve scalability and optimize performance. We have implemented our tool in OCaml, and conducted experiments showing that it scales to networks with a thousand switches and tens of switches updating."
]
}
|
1403.5843
|
2058503096
|
Software-defined networking (SDN) is revolutionizing the networking industry, but current SDN programming platforms do not provide automated mechanisms for updating global configurations on the fly. Implementing updates by hand is challenging for SDN programmers because networks are distributed systems with hundreds or thousands of interacting nodes. Even if initial and final configurations are correct, naively updating individual nodes can lead to incorrect transient behaviors, including loops, black holes, and access control violations. This paper presents an approach for automatically synthesizing updates that are guaranteed to preserve specified properties. We formalize network updates as a distributed programming problem and develop a synthesis algorithm based on counterexample-guided search and incremental model checking. We describe a prototype implementation, and present results from experiments on real-world topologies and properties demonstrating that our tool scales to updates involving over one-thousand nodes.
|
* Network updates. There are many protocol- and property-specific algorithms for implementing network updates, e.g., avoiding packet and bandwidth loss during planned BGP maintenance @cite_24 @cite_5 . Other work avoids routing loops and blackholes during IGP migration @cite_20 . Work on network updates in SDN proposed the notion of consistent updates and several implementation mechanisms, including two-phase updates @cite_14 . Other work explores propagating updates incrementally, reducing the space overhead on switches @cite_37 . As mentioned in , recent work proposes ordering updates for specific properties @cite_39 , whereas we can handle combinations and variants of these properties. Furthermore, SWAN and zUpdate add support for bandwidth guarantees @cite_35 @cite_27 . @cite_0 consider customizable trace properties and propose a dynamic algorithm to find an ordering of updates. This solution can take into account unpredictable delays caused by switch updates; however, it may not always find a solution, even if one exists. In contrast, we obtain a completeness guarantee for our static algorithm. @cite_34 consider ordering updates for waypoint enforcement properties.
|
{
"cite_N": [
"@cite_35",
"@cite_37",
"@cite_14",
"@cite_39",
"@cite_24",
"@cite_0",
"@cite_27",
"@cite_5",
"@cite_34",
"@cite_20"
],
"mid": [
"2102090846",
"2071580597",
"2149236835",
"2137826183",
"2134024533",
"1917945289",
"2162547852",
"2124246304",
"2091307059",
"2163015897"
],
"abstract": [
"We present SWAN, a system that boosts the utilization of inter-datacenter networks by centrally controlling when and how much traffic each service sends and frequently re-configuring the network's data plane to match current traffic demand. But done simplistically, these re-configurations can also cause severe, transient congestion because different switches may apply updates at different times. We develop a novel technique that leverages a small amount of scratch capacity on links to apply updates in a provably congestion-free manner, without making any assumptions about the order and timing of updates at individual switches. Further, to scale to large networks in the face of limited forwarding table capacity, SWAN greedily selects a small set of entries that can best satisfy current demand. It updates this set without disrupting traffic by leveraging a small amount of scratch capacity in forwarding tables. Experiments using a testbed prototype and data-driven simulations of two production networks show that SWAN carries 60% more traffic than the current practice.",
"A consistent update installs a new packet-forwarding policy across the switches of a software-defined network in place of an old policy. While doing so, such an update guarantees that every packet entering the network either obeys the old policy or the new one, but not some combination of the two. In this paper, we introduce new algorithms that trade the time required to perform a consistent update against the rule-space overhead required to implement it. We break an update in to k rounds that each transfer part of the traffic to the new configuration. The more rounds used, the slower the update, but the smaller the rule-space overhead. To ensure consistency, our algorithm analyzes the dependencies between rules in the old and new policies to determine which rules to add and remove on each round. In addition, we show how to optimize rule space used by representing the minimization problem as a mixed integer linear program. Moreover, to ensure the largest flows are moved first, while using rule space efficiently, we extend the mixed integer linear program with additional constraints. Our initial experiments show that a 6-round, optimized incremental update decreases rule space overhead from 100% to less than 10%. Moreover, if we cap the maximum rule-space overhead at 5% and assume the traffic flow volume follows Zipf's law, we find that 80% of the traffic may be transferred to the new policy in the first round and 99% in the first 3 rounds.",
"Configuration changes are a common source of instability in networks, leading to outages, performance disruptions, and security vulnerabilities. Even when the initial and final configurations are correct, the update process itself often steps through intermediate configurations that exhibit incorrect behaviors. This paper introduces the notion of consistent network updates---updates that are guaranteed to preserve well-defined behaviors when transitioning between configurations. We identify two distinct consistency levels, per-packet and per-flow, and we present general mechanisms for implementing them in Software-Defined Networks using switch APIs like OpenFlow. We develop a formal model of OpenFlow networks, and prove that consistent updates preserve a large class of properties. We describe our prototype implementation, including several optimizations that reduce the overhead required to perform consistent updates. We present a verification tool that leverages consistent updates to significantly reduce the complexity of checking the correctness of network control software. Finally, we describe the results of some simple experiments demonstrating the effectiveness of these optimizations on example applications.",
"We present Dionysus, a system for fast, consistent network updates in software-defined networks. Dionysus encodes as a graph the consistency-related dependencies among updates at individual switches, and it then dynamically schedules these updates based on runtime differences in the update speeds of different switches. This dynamic scheduling is the key to its speed; prior update methods are slow because they pre-determine a schedule, which does not adapt to runtime conditions. Testbed experiments and data-driven simulations show that Dionysus improves the median update speed by 53--88% in both wide area and data center networks compared to prior methods.",
"This paper presents a solution aimed at avoiding losses of connectivity when an eBGP peering link is shut down by an operator for a maintenance. Currently, shutting down an eBGP session can lead to transient losses of connectivity even though alternate path are available at the borders of the network. This is very unfortunate as ISPs face more and more stringent service level agreements, and maintenance operations are predictable operations, so that there is time to adapt to the change and preserve the respect of the service level agreement.",
"It is critical to ensure that network policy remains consistent during state transitions. However, existing techniques impose a high cost in update delay, and or FIB space. We propose the Customizable Consistency Generator (CCG), a fast and generic framework to support customizable consistency policies during network updates. CCG effectively reduces the task of synthesizing an update plan under the constraint of a given consistency policy to a verification problem, by checking whether an update can safely be installed in the network at a particular time, and greedily processing network state transitions to heuristically minimize transition delay. We show a large class of consistency policies are guaranteed by this greedy heuristic alone; in addition, CCG makes judicious use of existing heavier-weight network update mechanisms to provide guarantees when necessary. As such, CCG nearly achieves the \"best of both worlds\": the efficiency of simply passing through updates in most cases, with the consistency guarantees of more heavyweight techniques. Mininet and physical testbed evaluations demonstrate CCG's capability to achieve various types of consistency, such as path and bandwidth properties, with zero switch memory overhead and up to a 3× delay reduction compared to previous solutions.",
"Datacenter networks (DCNs) are constantly evolving due to various updates such as switch upgrades and VM migrations. Each update must be carefully planned and executed in order to avoid disrupting many of the mission-critical, interactive applications hosted in DCNs. The key challenge arises from the inherent difficulty in synchronizing the changes to many devices, which may result in unforeseen transient link load spikes or even congestions. We present one primitive, zUpdate, to perform congestion-free network updates under asynchronous switch and traffic matrix changes. We formulate the update problem using a network model and apply our model to a variety of representative update scenarios in DCNs. We develop novel techniques to handle several practical challenges in realizing zUpdate as well as implement the zUpdate prototype on OpenFlow switches and deploy it on a testbed that resembles real DCN topology. Our results, from both real-world experiments and large-scale trace-driven simulations, show that zUpdate can effectively perform congestion-free updates in production DCNs.",
"A significant fraction of network events (such as topology or route changes) and the resulting performance degradation stem from premeditated network management and operational tasks. This paper introduces a general class of Graceful Network State Migration (GNSM) problems, where the goal is to discover the optimal sequence of operations that progressively transition the network from its initial to a desired final state while minimizing the overall performance disruption. We investigate two specific GNSM problems: 1) Link Weight Reassignment Scheduling (LWRS) studies the optimal ordering of link weight updates to migrate from an existing to a new link weight assignment; and 2) Link Maintenance Scheduling (LMS) looks at how to schedule link deactivations and subsequent reactivations for maintenance purposes. LWRS and LMS are both combinatorial optimization problems. We use dynamic programming to find the optimal solutions when the problem size is small, and leverage ant colony optimization to get near-optimal solutions for large problem sizes. Our simulation study reveals that judiciously ordering network operations can achieve significant performance gains. Our GNSM solution framework is generic and applies to similar problems with different operational contexts, underlying network protocols or mechanisms, and performance metrics.",
"Networks are critical for the security of many computer systems. However, their complex and asynchronous nature often renders it difficult to formally reason about network behavior. Accordingly, it is challenging to provide correctness guarantees, especially during network updates. This paper studies how to update networks while maintaining a most basic safety property, Waypoint Enforcement (WPE): each packet is required to traverse a certain checkpoint (for instance, a firewall). Waypoint enforcement is particularly relevant in today's increasingly virtualized and software-defined networks, where new in-network functionality is introduced flexibly. We show that WPE can easily be violated during network updates, even though both the old and the new policy ensure WPE. We then present an algorithm WayUp that guarantees WPE at any time, while completing updates quickly. We also find that in contrast to other transient consistency properties, WPE cannot always be implemented in a wait-free manner, and that WPE may even conflict with Loop-Freedom (LF). Finally, we present an optimal policy update algorithm OptRounds, which requires a minimum number of communication rounds while ensuring both WPE and LF, whenever this is possible.",
"Network-wide migrations of a running network, such as the replacement of a routing protocol or the modification of its configuration, can improve the performance, scalability, manageability, and security of the entire network. However, such migrations are an important source of concerns for network operators as the reconfiguration campaign can lead to long and service-affecting outages. In this paper, we propose a methodology which addresses the problem of seamlessly modifying the configuration of commonly used link-state Interior Gateway Protocols (IGP). We illustrate the benefits of our methodology by considering several migration scenarios, including the addition or the removal of routing hierarchy in an existing IGP and the replacement of one IGP with another. We prove that a strict operational ordering can guarantee that the migration will not create IP transit service outages. Although finding a safe ordering is NP complete, we describe techniques which efficiently find such an ordering and evaluate them using both real-world and inferred ISP topologies. Finally, we describe the implementation of a provisioning system which automatically performs the migration by pushing the configurations on the routers in the appropriate order, while monitoring the entire migration process."
]
}
|
1403.5843
|
2058503096
|
Software-defined networking (SDN) is revolutionizing the networking industry, but current SDN programming platforms do not provide automated mechanisms for updating global configurations on the fly. Implementing updates by hand is challenging for SDN programmers because networks are distributed systems with hundreds or thousands of interacting nodes. Even if initial and final configurations are correct, naively updating individual nodes can lead to incorrect transient behaviors, including loops, black holes, and access control violations. This paper presents an approach for automatically synthesizing updates that are guaranteed to preserve specified properties. We formalize network updates as a distributed programming problem and develop a synthesis algorithm based on counterexample-guided search and incremental model checking. We describe a prototype implementation, and present results from experiments on real-world topologies and properties demonstrating that our tool scales to updates involving over one-thousand nodes.
|
* Model checking. Model checking has been used for network verification @cite_29 @cite_13 @cite_11 @cite_15 @cite_32 . The closest to our work is the incremental checker NetPlumber @cite_3 . Surface-level differences include the specification languages (LTL vs. regular expressions) and NetPlumber's lack of counterexample output. The main difference is incrementality: NetPlumber restricts checking to "probe nodes," keeping track of "header-space" reachability information for those nodes, and then performing property queries based on this. In contrast, we look at the property, keeping track of the portions of the property holding at each node, which keeps incremental rechecking times low. The empirical comparison () showed better performance of our tool as a back-end for synthesis. Incremental model checking has been studied previously, with @cite_9 presenting the first incremental model checking algorithm, for the alternation-free @math -calculus. We consider LTL properties and specialize our algorithm to exploit the no-forwarding-loops assumption. The paper @cite_22 introduced an incremental algorithm, but it is specific to the type of partial results produced by IC3 @cite_1 .
|
{
"cite_N": [
"@cite_22",
"@cite_29",
"@cite_9",
"@cite_1",
"@cite_32",
"@cite_3",
"@cite_15",
"@cite_13",
"@cite_11"
],
"mid": [
"1563456403",
"1972121713",
"1553574558",
"1549166962",
"2051714560",
"158224344",
"2122695394",
"2115526539",
"1882012874"
],
"abstract": [
"Formal verification is a reliable and fully automatic technique for proving correctness of hardware designs. Its main drawback is the high complexity of verification, and this problem is especially acute in regression verification, where a new version of the design, differing from the previous version very slightly, is verified with respect to the same or a very similar property. In this paper, we present an efficient algorithm for incremental verification, based on the ic3 algorithm, that uses stored information from the previous verification runs in order to improve the complexity of re-verifying similar designs on similar properties. Our algorithm applies both to the positive and to the negative results of verification (that is, both when there is a proof of correctness and when there is a counterexample). The algorithm is implemented and experimental results show improvement of up to two orders of magnitude in running time, compared to full verification.",
"It is difficult to build a real network to test novel experiments. OpenFlow makes it easier for researchers to run their own experiments by providing a virtual slice and configuration on real networks. Multiple users can share the same network by assigning a different slice for each one. Users are given the responsibility to maintain and use their own slice by writing rules in a FlowTable. Misconfiguration problems can arise when a user writes conflicting rules for single FlowTable or even within a path of multiple OpenFlow switches that need multiple FlowTables to be maintained at the same time. In this work, we describe a tool, FlowChecker, to identify any intra-switch misconfiguration within a single FlowTable. We also describe the inter-switch or inter-federated inconsistencies in a path of OpenFlow switches across the same or different OpenFlow infrastructures. FlowChecker encodes FlowTables configuration using Binary Decision Diagrams and then uses the model checker technique to model the inter-connected network of OpenFlow switches.",
"We present an incremental algorithm for model checking in the alternation-free fragment of the modal mu-calculus, the first incremental algorithm for model checking of which we are aware. The basis for our algorithm, which we call MCI (for Model Checking Incrementally), is a linear-time algorithm due to Cleaveland and Steffen that performs global (non-incremental) computation of fixed points. MCI takes as input a set δ of changes to the labeled transition system under investigation, where a change constitutes an inserted or deleted transition; with virtually no additional cost, inserted and deleted states can also be accommodated. Like the Cleaveland-Steffen algorithm, MCI requires time linear in the size of the LTS in the worst case, but only time linear in δ in the best case. We give several examples to illustrate MCI in action, and discuss its implementation in the Concurrency Factory, an interactive design environment for concurrent systems.",
"A new form of SAT-based symbolic model checking is described. Instead of unrolling the transition relation, it incrementally generates clauses that are inductive relative to (and augment) stepwise approximate reachability information. In this way, the algorithm gradually refines the property, eventually producing either an inductive strengthening of the property or a counterexample trace. Our experimental studies show that induction is a powerful tool for generalizing the unreachability of given error states: it can refine away many states at once, and it is effective at focusing the proof search on aspects of the transition system relevant to the property. Furthermore, the incremental structure of the algorithm lends itself to a parallel implementation.",
"In software-defined networking (SDN), a software controller manages a distributed collection of switches by installing and uninstalling packet-forwarding rules in the switches. SDNs allow flexible implementations for expressive and sophisticated network management policies. We consider the problem of verifying that an SDN satisfies a given safety property. We describe Kuai, a distributed enumerative model checker for SDNs. Kuai takes as input a controller implementation written in Murphi, a description of the network topology (switches and connections), and a safety property, and performs a distributed enumerative reachability analysis on a cluster of machines. Kuai uses a set of partial order reduction techniques specific to the SDN domain that help reduce the state space dramatically. In addition, Kuai performs an automatic abstraction to handle unboundedly many packets traversing the network at a given time and unboundedly many control messages between the controller and the switches. We demonstrate the scalability and coverage of Kuai on standard SDN benchmarks. We show that our set of partial order reduction techniques significantly reduces the state spaces of these benchmarks by many orders of magnitude. In addition, Kuai exploits large-scale distribution to quickly search the reduced state space.",
"Network state may change rapidly in response to customer demands, load conditions or configuration changes. But the network must also ensure correctness conditions such as isolating tenants from each other and from critical services. Existing policy checkers cannot verify compliance in real time because of the need to collect \"state\" from the entire network and the time it takes to analyze this state. SDNs provide an opportunity in this respect as they provide a logically centralized view from which every proposed change can be checked for compliance with policy. But there remains the need for a fast compliance checker. Our paper introduces a real time policy checking tool called NetPlumber based on Header Space Analysis (HSA) [8]. Unlike HSA, however, NetPlumber incrementally checks for compliance of state changes, using a novel set of conceptual tools that maintain a dependency graph between rules. While NetPlumber is a natural fit for SDNs, its abstract intermediate form is conceptually applicable to conventional networks as well. We have tested NetPlumber on Google's SDN, the Stanford backbone and Internet 2. With NetPlumber, checking the compliance of a typical rule update against a single policy on these networks takes 50-500µs on average.",
"Networks are complex and prone to bugs. Existing tools that check configuration files and data-plane state operate offline at timescales of seconds to hours, and cannot detect or prevent bugs as they arise. Is it possible to check network-wide invariants in real time, as the network state evolves? The key challenge here is to achieve extremely low latency during the checks so that network performance is not affected. In this paper, we present a preliminary design, VeriFlow, which suggests that this goal is achievable. VeriFlow is a layer between a software-defined networking controller and network devices that checks for network-wide invariant violations dynamically as each forwarding rule is inserted. Based on an implementation using a Mininet OpenFlow network and Route Views trace data, we find that VeriFlow can perform rigorous checking within hundreds of microseconds per rule insertion.",
"Diagnosing problems in networks is a time-consuming and error-prone process. Existing tools to assist operators primarily focus on analyzing control plane configuration. Configuration analysis is limited in that it cannot find bugs in router software, and is harder to generalize across protocols since it must model complex configuration languages and dynamic protocol behavior. This paper studies an alternate approach: diagnosing problems through static analysis of the data plane. This approach can catch bugs that are invisible at the level of configuration files, and simplifies unified analysis of a network across many protocols and implementations. We present Anteater, a tool for checking invariants in the data plane. Anteater translates high-level network invariants into boolean satisfiability problems (SAT), checks them against network state using a SAT solver, and reports counterexamples if violations have been found. Applied to a large university network, Anteater revealed 23 bugs, including forwarding loops and stale ACL rules, with only five false positives. Nine of these faults are being fixed by campus network operators.",
"Today's networks typically carry or deploy dozens of protocols and mechanisms simultaneously such as MPLS, NAT, ACLs and route redistribution. Even when individual protocols function correctly, failures can arise from the complex interactions of their aggregate, requiring network administrators to be masters of detail. Our goal is to automatically find an important class of failures, regardless of the protocols running, for both operational and experimental networks. To this end we developed a general and protocol-agnostic framework, called Header Space Analysis (HSA). Our formalism allows us to statically check network specifications and configurations to identify an important class of failures such as Reachability Failures, Forwarding Loops and Traffic Isolation and Leakage problems. In HSA, protocol header fields are not first class entities; instead we look at the entire packet header as a concatenation of bits without any associated meaning. Each packet is a point in the {0,1}^L space where L is the maximum length of a packet header, and networking boxes transform packets from one point in the space to another point or set of points (multicast). We created a library of tools, called Hassel, to implement our framework, and used it to analyze a variety of networks and protocols. Hassel was used to analyze the Stanford University backbone network, and found all the forwarding loops in less than 10 minutes, and verified reachability constraints between two subnets in 13 seconds. It also found a large and complex loop in an experimental loose source routing protocol in 4 minutes."
]
}
|
1403.5199
|
1550062339
|
We consider the problems of finding and determining certain query answers and of determining containment between queries; each problem is formulated in presence of materialized views and dependencies under the closed-world assumption. We show a tight relationship between the problems in this setting. Further, we introduce algorithms for solving each problem for those inputs where all the queries and views are conjunctive, and the dependencies are embedded weakly acyclic. We also determine the complexity of each problem under the security-relevant complexity measure introduced by Zhang and Mendelzon in 2005. The problems studied in this paper are fundamental in ensuring correct specification of database access-control policies, in particular in case of fine-grained access control. Our approaches can also be applied in the areas of inference control, secure data publishing, and database auditing.
|
The work @cite_1 by Zhang and Mendelzon introduced and solved the problem of ``conditional containment'' between two CQ queries in the presence of materialized CQ views, under CWA and in the absence of dependencies. @cite_1 also introduced a security-relevant complexity metric, under which their problem is @math complete. ( @cite_1 also provides an excellent overview of the connections of the query-containment problem of @cite_1 to the database-theory literature.) In our work, we add dependencies to the formulation of the problem of @cite_1 , and extend the approach of @cite_1 , both to solve the resulting problem in the CQ weakly acyclic setting and to analyze the complexity of the problem. We also uncover a tight relationship of the problem with the problems of finding and determining certain query answers, under CWA in the presence of view materializations and of dependencies.
|
{
"cite_N": [
"@cite_1"
],
"mid": [
"1508622259"
],
"abstract": [
"A recent proposal for database access control consists of defining “authorization views” that specify the accessible data, and declaring a query valid if it can be completely rewritten using the views. Unlike traditional work in query rewriting using views, the rewritten query needs to be equivalent to the original query only over the set of database states that agree with a given set of materializations for the authorization views. With this motivation, we study conditional query containment, i.e. , containment over states that agree on a set of materialized views. We give an algorithm to test conditional containment of conjunctive queries with respect to a set of materialized conjunctive views. We show the problem is @math -complete. Based on the algorithm, we give a test for a query to be conditionally authorized given a set of materialized authorization views."
]
}
|
1403.5199
|
1550062339
|
We consider the problems of finding and determining certain query answers and of determining containment between queries; each problem is formulated in presence of materialized views and dependencies under the closed-world assumption. We show a tight relationship between the problems in this setting. Further, we introduce algorithms for solving each problem for those inputs where all the queries and views are conjunctive, and the dependencies are embedded weakly acyclic. We also determine the complexity of each problem under the security-relevant complexity measure introduced by Zhang and Mendelzon in 2005. The problems studied in this paper are fundamental in ensuring correct specification of database access-control policies, in particular in case of fine-grained access control. Our approaches can also be applied in the areas of inference control, secure data publishing, and database auditing.
|
CWA & dependencies: the problem of finding all certain query answers was introduced (with dependencies) under OWA in @cite_24 @cite_25 ; it was solved for both CWA and OWA in the dependency-free case in @cite_9 (with full total dependencies @cite_12 suggested as an extension).
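For reference, the certain answers of a query are the tuples it returns on every database consistent with the available information (views, dependencies, and the CWA or OWA). A deliberately naive Python sketch, assuming the finite set of candidate worlds is given explicitly; a real algorithm must cope with the generally infinite set of consistent databases, which is exactly where the CWA/OWA and dependency machinery comes in:

```python
# Naive certain-answer computation: intersect the query's results
# over an explicitly enumerated set of candidate databases.
# Databases are toy dicts mapping relation names to sets of tuples;
# a query is any callable from a database to a set of tuples.

def certain_answers(query, worlds):
    """Tuples returned by `query` on every candidate database."""
    if not worlds:
        raise ValueError("need at least one candidate world")
    out = set(query(worlds[0]))
    for db in worlds[1:]:
        out &= set(query(db))
    return out
```

For instance, if two candidate worlds agree only on the fact that 1 occurs in the first column of R, then a query projecting that column has (1,) as its single certain answer.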
|
{
"cite_N": [
"@cite_24",
"@cite_9",
"@cite_25",
"@cite_12"
],
"mid": [
"1526397503",
"2160983447",
"2152318637",
"1569370144"
],
"abstract": [
"In relational database systems a combination of privileges and views is employed to limit a user's access and to hide non-public data. The data privacy problem is to decide whether the views leak information about the underlying database instance. Or, to put it more formally, the question is whether there are certain answers of a database query with respect to the given view instance. In order to answer the problem of provable data privacy, we will make use of query answering techniques for data exchange. We also investigate the impact of database dependencies on the privacy problem. An example about health care statistics in Switzerland shows that we also have to consider dependencies which are inherent in the semantics of the data.",
"We study the complexity of the problem of answering queries using materialized views. This problem has attracted a lot of attention recently because of its relevance in data integration. Previous work considered only conjunctive view definitions. We examine the consequences of allowing more expressive view definition languages. The languages we consider for view definitions and user queries are: conjunctive queries with inequality, positive queries, datalog, and first-order logic. We show that the complexity of the problem depends on whether views are assumed to store all the tuples that satisfy the view definition, or only a subset of it. Finally, we apply the results to the view consistency and view self-maintainability problems which arise in data warehousing.",
"Existing work on inference detection for database systems mainly employ functional dependencies in the database schema to detect inferences. It has been noticed that analyzing the data stored in the database may help to detect more inferences. We describe our effort in developing a data level inference detection system. We have identified five inference rules that a user can use to perform inferences. They are \"subsume\", \"unique characteristic\", \"overlapping\", \"complementary\", and \"functional dependency\" inference rules. The existence of these inference rules confirms the inadequacy of detecting inferences using just functional dependencies. The rules can be applied any number of times and in any order. These inference rules are sound. They are not necessarily complete, although we have no example that demonstrates incompleteness. We employ a rule based approach so that future inference rules can be incorporated into the detection system. We have developed a prototype of the inference detection system using Perl on a Sun SPARC 20 workstation. The preliminary results show that on average it takes seconds to process a query for a database with thousands of records. Thus, our approach to inference detection is best performed offline, and would be most useful to detect subtle inference attacks.",
"In this paper we study the implication and the finite implication problems for data dependencies. When all dependencies are total the problems are equivalent and solvable but are NP-hard, i.e., probably computationally intractable. For non-total dependencies the implication problem is unsolvable, and the finite implication problem is not even partially solvable. Thus, there can be no formal system for finite implication. The meta decision problems of deciding for a given class of dependencies whether the implication problem is solvable or whether implication is equivalent to finite implication are also unsolvable."
]
}
|
1403.5199
|
1550062339
|
We consider the problems of finding and determining certain query answers and of determining containment between queries; each problem is formulated in presence of materialized views and dependencies under the closed-world assumption. We show a tight relationship between the problems in this setting. Further, we introduce algorithms for solving each problem for those inputs where all the queries and views are conjunctive, and the dependencies are embedded weakly acyclic. We also determine the complexity of each problem under the security-relevant complexity measure introduced by Zhang and Mendelzon in 2005. The problems studied in this paper are fundamental in ensuring correct specification of database access-control policies, in particular in case of fine-grained access control. Our approaches can also be applied in the areas of inference control, secure data publishing, and database auditing.
|
The Brodsky paper @cite_25 (note: published in 2000, it actually precedes the paper @cite_24 , which was published in 2005):
|
{
"cite_N": [
"@cite_24",
"@cite_25"
],
"mid": [
"1526397503",
"2152318637"
],
"abstract": [
"In relational database systems a combination of privileges and views is employed to limit a user's access and to hide non-public data. The data privacy problem is to decide whether the views leak information about the underlying database instance. Or, to put it more formally, the question is whether there are certain answers of a database query with respect to the given view instance. In order to answer the problem of provable data privacy, we will make use of query answering techniques for data exchange. We also investigate the impact of database dependencies on the privacy problem. An example about health care statistics in Switzerland shows that we also have to consider dependencies which are inherent in the semantics of the data.",
"Existing work on inference detection for database systems mainly employ functional dependencies in the database schema to detect inferences. It has been noticed that analyzing the data stored in the database may help to detect more inferences. We describe our effort in developing a data level inference detection system. We have identified five inference rules that a user can use to perform inferences. They are \"subsume\", \"unique characteristic\", \"overlapping\", \"complementary\", and \"functional dependency\" inference rules. The existence of these inference rules confirms the inadequacy of detecting inferences using just functional dependencies. The rules can be applied any number of times and in any order. These inference rules are sound. They are not necessarily complete, although we have no example that demonstrates incompleteness. We employ a rule based approach so that future inference rules can be incorporated into the detection system. We have developed a prototype of the inference detection system using Perl on a Sun SPARC 20 workstation. The preliminary results show that on average it takes seconds to process a query for a database with thousands of records. Thus, our approach to inference detection is best performed offline, and would be most useful to detect subtle inference attacks."
]
}
|
1403.5199
|
1550062339
|
We consider the problems of finding and determining certain query answers and of determining containment between queries; each problem is formulated in presence of materialized views and dependencies under the closed-world assumption. We show a tight relationship between the problems in this setting. Further, we introduce algorithms for solving each problem for those inputs where all the queries and views are conjunctive, and the dependencies are embedded weakly acyclic. We also determine the complexity of each problem under the security-relevant complexity measure introduced by Zhang and Mendelzon in 2005. The problems studied in this paper are fundamental in ensuring correct specification of database access-control policies, in particular in case of fine-grained access control. Our approaches can also be applied in the areas of inference control, secure data publishing, and database auditing.
|
@cite_25 also mentions a problem statement where the input does not supply a specific instance @math . In the case where @math is given in the problem input, they consider (i) single-relation (i.e., no joins are involved) CQ queries and views, (ii) Horn-clause constraints (tgds and egds), with constants possible both on the LHS and on the RHS, and (iii) the OWA.
|
{
"cite_N": [
"@cite_25"
],
"mid": [
"2152318637"
],
"abstract": [
"Existing work on inference detection for database systems mainly employ functional dependencies in the database schema to detect inferences. It has been noticed that analyzing the data stored in the database may help to detect more inferences. We describe our effort in developing a data level inference detection system. We have identified five inference rules that a user can use to perform inferences. They are \"subsume\", \"unique characteristic\", \"overlapping\", \"complementary\", and \"functional dependency\" inference rules. The existence of these inference rules confirms the inadequacy of detecting inferences using just functional dependencies. The rules can be applied any number of times and in any order. These inference rules are sound. They are not necessarily complete, although we have no example that demonstrates incompleteness. We employ a rule based approach so that future inference rules can be incorporated into the detection system. We have developed a prototype of the inference detection system using Perl on a Sun SPARC 20 workstation. The preliminary results show that on average it takes seconds to process a query for a database with thousands of records. Thus, our approach to inference detection is best performed offline, and would be most useful to detect subtle inference attacks."
]
}
|
1403.5199
|
1550062339
|
We consider the problems of finding and determining certain query answers and of determining containment between queries; each problem is formulated in presence of materialized views and dependencies under the closed-world assumption. We show a tight relationship between the problems in this setting. Further, we introduce algorithms for solving each problem for those inputs where all the queries and views are conjunctive, and the dependencies are embedded weakly acyclic. We also determine the complexity of each problem under the security-relevant complexity measure introduced by Zhang and Mendelzon in 2005. The problems studied in this paper are fundamental in ensuring correct specification of database access-control policies, in particular in case of fine-grained access control. Our approaches can also be applied in the areas of inference control, secure data publishing, and database auditing.
|
As observed in , to the best of our knowledge, the formal problem that we address in the present paper has not been considered in the open literature. (Abiteboul and Duschka in @cite_9 consider the special case where the set of dependencies is empty, apply in their analysis a different type of complexity metric than we do in this paper, and do not provide algorithms alongside their complexity results.) The work @cite_26 addresses a problem that is similar to ours at the informal level; see for a detailed discussion of @cite_26 . Generally, the literature on privacy-preserving query answering and data publishing is represented by work on data anonymization and on differential privacy; @cite_3 is a recent survey. Most of that work focuses on probabilistic inference of private information, while in this paper we focus on the possibilistic situation, where an adversary can deterministically derive sensitive information. Further, our model of sensitive information goes beyond associations between individuals and their private sensitive attributes.
|
{
"cite_N": [
"@cite_9",
"@cite_26",
"@cite_3"
],
"mid": [
"2160983447",
"2087154854",
""
],
"abstract": [
"We study the complexity of the problem of answering queries using materialized views. This problem has attracted a lot of attention recently because of its relevance in data integration. Previous work considered only conjunctive view definitions. We examine the consequences of allowing more expressive view definition languages. The languages we consider for view definitions and user queries are: conjunctive queries with inequality, positive queries, datalog, and first-order logic. We show that the complexity of the problem depends on whether views are assumed to store all the tuples that satisfy the view definition, or only a subset of it. Finally, we apply the results to the view consistency and view self-maintainability problems which arise in data warehousing.",
"We perform a theoretical study of the following query-view security problem: given a view V to be published, does V logically disclose information about a confidential query S? The problem is motivated by the need to manage the risk of unintended information disclosure in today's world of universal data exchange. We present a novel information-theoretic standard for query-view security. This criterion can be used to provide a precise analysis of information disclosure for a host of data exchange scenarios, including multi-party collusion and the use of outside knowledge by an adversary trying to learn privileged facts about the database. We prove a number of theoretical results for deciding security according to this standard. We also generalize our security criterion to account for prior knowledge a user or adversary may possess, and introduce techniques for measuring the magnitude of partial disclosures. We believe these results can be a foundation for practical efforts to secure data exchange frameworks, and also illuminate a nice interaction between logic and probability theory.",
""
]
}
|
1403.5199
|
1550062339
|
We consider the problems of finding and determining certain query answers and of determining containment between queries; each problem is formulated in presence of materialized views and dependencies under the closed-world assumption. We show a tight relationship between the problems in this setting. Further, we introduce algorithms for solving each problem for those inputs where all the queries and views are conjunctive, and the dependencies are embedded weakly acyclic. We also determine the complexity of each problem under the security-relevant complexity measure introduced by Zhang and Mendelzon in 2005. The problems studied in this paper are fundamental in ensuring correct specification of database access-control policies, in particular in case of fine-grained access control. Our approaches can also be applied in the areas of inference control, secure data publishing, and database auditing.
|
Zhang and Mendelzon in @cite_1 addressed the problem of letting users access authorized data, via rewriting the users' queries in terms of their authorization views; this problem is different from ours. Toward that goal, @cite_1 explored the notion of ``conditional query containment,'' which we extend in the present paper to the case @math @math @math . The results of @cite_1 , which we use in our work, include a powerful reduction of the problem of testing conditional containment of CQ queries to that of testing unconditional containment of modifications of the queries.
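The unconditional-containment subproblem that this reduction targets is decided by the classical Chandra-Merlin criterion: Q1 is contained in Q2 iff there is a containment mapping, i.e., a homomorphism from Q2's body into Q1's body that sends Q2's head to Q1's head. A brute-force Python sketch under a toy query encoding invented here (a query is a pair of head variables and body atoms; capitalized strings are variables):

```python
# Chandra-Merlin containment test for conjunctive queries.
# A query is (head_vars, body_atoms); an atom is (predicate, terms).
# Toy convention: a term is a variable iff it is a capitalized string.

from itertools import product

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def contained_in(q1, q2):
    """True iff Q1 is contained in Q2 (there is a containment mapping
    from Q2 into Q1)."""
    head1, body1 = q1
    head2, body2 = q2
    facts1 = set(body1)
    vars2 = sorted({t for (_, args) in body2 for t in args if is_var(t)})
    terms1 = sorted({t for (_, args) in body1 for t in args}, key=repr)
    for images in product(terms1, repeat=len(vars2)):
        h = dict(zip(vars2, images))
        # the mapping must send Q2's head to Q1's head, position by position
        if tuple(h.get(v, v) for v in head2) != tuple(head1):
            continue
        # and every atom of Q2's body to some atom of Q1's body
        if all((p, tuple(h.get(t, t) for t in args)) in facts1
               for (p, args) in body2):
            return True
    return False
```

The search is exponential in the number of variables of Q2, in line with the NP-hardness of CQ containment; the conditional variant additionally has to account for the given view materializations.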
|
{
"cite_N": [
"@cite_1"
],
"mid": [
"1508622259"
],
"abstract": [
"A recent proposal for database access control consists of defining “authorization views” that specify the accessible data, and declaring a query valid if it can be completely rewritten using the views. Unlike traditional work in query rewriting using views, the rewritten query needs to be equivalent to the original query only over the set of database states that agree with a given set of materializations for the authorization views. With this motivation, we study conditional query containment, i.e. , containment over states that agree on a set of materialized views. We give an algorithm to test conditional containment of conjunctive queries with respect to a set of materialized conjunctive views. We show the problem is @math -complete. Based on the algorithm, we give a test for a query to be conditionally authorized given a set of materialized authorization views."
]
}
|
1403.5199
|
1550062339
|
We consider the problems of finding and determining certain query answers and of determining containment between queries; each problem is formulated in presence of materialized views and dependencies under the closed-world assumption. We show a tight relationship between the problems in this setting. Further, we introduce algorithms for solving each problem for those inputs where all the queries and views are conjunctive, and the dependencies are embedded weakly acyclic. We also determine the complexity of each problem under the security-relevant complexity measure introduced by Zhang and Mendelzon in 2005. The problems studied in this paper are fundamental in ensuring correct specification of database access-control policies, in particular in case of fine-grained access control. Our approaches can also be applied in the areas of inference control, secure data publishing, and database auditing.
|
Our results of Sections -- build on the influential framework for data exchange @cite_2 by Fagin and colleagues. Our @math -induced dependencies of resemble target-to-source dependencies @math introduced into (peer) data exchange in @cite_35 . The difference is that @math are embedded dependencies defined at the schema level. In contrast, our disjunctive @math -induced dependencies embody the given set of view answers @math .
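The workhorse for reasoning with embedded dependencies in the data-exchange framework of @cite_2 is the chase. A minimal Python sketch restricted to full tgds (no existentially quantified head variables, so no labeled nulls are needed), with a toy fact encoding invented for illustration:

```python
# Naive chase with full tgds: repeatedly find homomorphic images of a
# tgd's body in the instance and add the corresponding head facts.
# A fact/atom is (predicate, terms); capitalized strings are variables.

from itertools import product

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def homomorphisms(body, instance):
    """All mappings of the body's variables to constants of the instance
    that send every body atom to a fact of the instance."""
    vars_ = sorted({t for (_, args) in body for t in args if is_var(t)})
    consts = sorted({t for (_, args) in instance for t in args}, key=repr)
    for images in product(consts, repeat=len(vars_)):
        h = dict(zip(vars_, images))
        if all((p, tuple(h.get(t, t) for t in args)) in instance
               for (p, args) in body):
            yield h

def chase(instance, tgds):
    """Apply full tgds (body, head) until no new facts are derivable."""
    inst = set(instance)
    changed = True
    while changed:
        changed = False
        for body, head in tgds:
            for h in list(homomorphisms(body, inst)):
                for (p, args) in head:
                    fact = (p, tuple(h.get(t, t) for t in args))
                    if fact not in inst:
                        inst.add(fact)
                        changed = True
    return inst
```

Weak acyclicity guarantees chase termination for general embedded dependencies; the full-tgd restriction here makes termination trivial, since no new constants are ever created.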
|
{
"cite_N": [
"@cite_35",
"@cite_2"
],
"mid": [
"2107115563",
"2102729564"
],
"abstract": [
"In this article, we introduce and study a framework, called peer data exchange, for sharing and exchanging data between peers. This framework is a special case of a full-fledged peer data management system and a generalization of data exchange between a source schema and a target schema. The motivation behind peer data exchange is to model authority relationships between peers, where a source peer may contribute data to a target peer, specified using source-to-target constraints, and a target peer may use target-to-source constraints to restrict the data it is willing to receive, but cannot modify the data of the source peer. A fundamental algorithmic problem in this framework is that of deciding the existence of a solution: given a source instance and a target instance for a fixed peer data exchange setting, can the target instance be augmented in such a way that the source instance and the augmented target instance satisfy all constraints of the setting? We investigate the computational complexity of the problem for peer data exchange settings in which the constraints are given by tuple generating dependencies. We show that this problem is always in NP, and that it can be NP-complete even for “acyclic” peer data exchange settings. We also show that the data complexity of the certain answers of target conjunctive queries is in coNP, and that it can be coNP-complete even for “acyclic” peer data exchange settings. After this, we explore the boundary between tractability and intractability for deciding the existence of a solution and for computing the certain answers of target conjunctive queries. To this effect, we identify broad syntactic conditions on the constraints between the peers under which the existence-of-solutions problem is solvable in polynomial time. We also identify syntactic conditions between peer data exchange settings and target conjunctive queries that yield polynomial-time algorithms for computing the certain answers.
For both problems, these syntactic conditions turn out to be tight, in the sense that minimal relaxations of them lead to intractability. Finally, we introduce the concept of a universal basis of solutions in peer data exchange and explore its properties.",
"Data exchange is the problem of taking data structured under a source schema and creating an instance of a target schema that reflects the source data as accurately as possible. In this paper, we address foundational and algorithmic issues related to the semantics of data exchange and to the query answering problem in the context of data exchange. These issues arise because, given a source instance, there may be many target instances that satisfy the constraints of the data exchange problem.We give an algebraic specification that selects, among all solutions to the data exchange problem, a special class of solutions that we call universal. We show that a universal solution has no more and no less data than required for data exchange and that it represents the entire space of possible solutions. We then identify fairly general, yet practical, conditions that guarantee the existence of a universal solution and yield algorithms to compute a canonical universal solution efficiently. We adopt the notion of the \"certain answers\" in indefinite databases for the semantics for query answering in data exchange. We investigate the computational complexity of computing the certain answers in this context and also address other algorithmic issues that arise in data exchange. In particular, we study the problem of computing the certain answers of target queries by simply evaluating them on a canonical universal solution, and we explore the boundary of what queries can and cannot be answered this way, in a data exchange setting."
]
}
|
1403.5287
|
2951392024
|
In many online learning problems we are interested in predicting local information about some universe of items. For example, we may want to know whether two items are in the same cluster rather than computing an assignment of items to clusters; we may want to know which of two teams will win a game rather than computing a ranking of teams. Although finding the optimal clustering or ranking is typically intractable, it may be possible to predict the relationships between items as well as if you could solve the global optimization problem exactly. Formally, we consider an online learning problem in which a learner repeatedly guesses a pair of labels (l(x), l(y)) and receives an adversarial payoff depending on those labels. The learner's goal is to receive a payoff nearly as good as the best fixed labeling of the items. We show that a simple algorithm based on semidefinite programming can obtain asymptotically optimal regret in the case where the number of possible labels is O(1), resolving an open problem posed by Hazan, Kale, and Shalev-Schwartz. Our main technical contribution is a novel use and analysis of the log determinant regularizer, exploiting the observation that log det(A + I) upper bounds the entropy of any distribution with covariance matrix A.
|
The recent results of Hazan, Kale, and Shalev-Schwartz @cite_7 in particular are closely related to our own. They consider the setting in which learners produce outputs in @math and compete with the class of matrices which can be decomposed as a difference of positive semidefinite matrices with small entries and small trace. Their framework is equivalent to ours in the case of 2-local prediction for @math . In that setting they obtain a regret bound of @math , which differs from our bound by a @math factor. This is the difference between a convergence time of @math and @math , which may be quite significant in some settings. For example, this might be the difference between a single user needing to wait @math time before receiving good recommendations and needing to wait @math time.
|
{
"cite_N": [
"@cite_7"
],
"mid": [
"2169401877"
],
"abstract": [
"In an online decision problem, one makes a sequence of decisions without knowledge of the future. Each period, one pays a cost based on the decision and observed state. We give a simple approach for doing nearly as well as the best single decision, where the best is chosen with the benefit of hindsight. A natural idea is to follow the leader, i.e. each period choose the decision which has done best so far. We show that by slightly perturbing the totals and then choosing the best decision, the expected performance is nearly as good as the best decision in hindsight. Our approach, which is very much like Hannan's original game-theoretic approach from the 1950s, yields guarantees competitive with the more modern exponential weighting algorithms like Weighted Majority. More importantly, these follow-the-leader style algorithms extend naturally to a large class of structured online problems for which the exponential algorithms are inefficient."
]
}
|
1403.5287
|
2951392024
|
In many online learning problems we are interested in predicting local information about some universe of items. For example, we may want to know whether two items are in the same cluster rather than computing an assignment of items to clusters; we may want to know which of two teams will win a game rather than computing a ranking of teams. Although finding the optimal clustering or ranking is typically intractable, it may be possible to predict the relationships between items as well as if you could solve the global optimization problem exactly. Formally, we consider an online learning problem in which a learner repeatedly guesses a pair of labels (l(x), l(y)) and receives an adversarial payoff depending on those labels. The learner's goal is to receive a payoff nearly as good as the best fixed labeling of the items. We show that a simple algorithm based on semidefinite programming can obtain asymptotically optimal regret in the case where the number of possible labels is O(1), resolving an open problem posed by Hazan, Kale, and Shalev-Schwartz. Our main technical contribution is a novel use and analysis of the log determinant regularizer, exploiting the observation that log det(A + I) upper bounds the entropy of any distribution with covariance matrix A.
|
Our techniques differ from those of @cite_7 primarily by our choice of regularizer. Like us, they work with a semidefinite relaxation for the space of labelings (though they do not describe their approach in these terms) and solve an appropriately regularized problem. Their regularizer is the von Neumann entropy, and this leads to their regret bound of @math . We are able to improve this bound by using the log determinant. Moreover, because the log determinant is a natural analog of entropy in the setting of semidefinite programming relaxations for constraint satisfaction, we are able to give a conceptually simple analysis.
|
{
"cite_N": [
"@cite_7"
],
"mid": [
"2169401877"
],
"abstract": [
"In an online decision problem, one makes a sequence of decisions without knowledge of the future. Each period, one pays a cost based on the decision and observed state. We give a simple approach for doing nearly as well as the best single decision, where the best is chosen with the benefit of hindsight. A natural idea is to follow the leader, i.e. each period choose the decision which has done best so far. We show that by slightly perturbing the totals and then choosing the best decision, the expected performance is nearly as good as the best decision in hindsight. Our approach, which is very much like Hannan's original game-theoretic approach from the 1950s, yields guarantees competitive with the more modern exponential weighting algorithms like Weighted Majority. More importantly, these follow-the-leader style algorithms extend naturally to a large class of structured online problems for which the exponential algorithms are inefficient."
]
}
|
1403.5007
|
2949123073
|
Recent literature including our past work provide analysis and solutions for using (i) erasure coding, (ii) parallelism, or (iii) variable slicing chunking (i.e., dividing an object of a specific size into a variable number of smaller chunks) in speeding the I O performance of storage clouds. However, a comprehensive approach that considers all three dimensions together to achieve the best throughput-delay trade-off curve had been lacking. This paper presents the first set of solutions that can pick the best combination of coding rate and object chunking slicing options as the load dynamically changes. Our specific contributions are as follows: (1) We establish via measurement that combining variable coding rate and chunking is mostly feasible over a popular public cloud. (2) We relate the delay optimal values for chunking level and code rate to the queue backlogs via an approximate queueing analysis. (3) Based on this analysis, we propose TOFEC that adapts the chunking level and coding rate against the queue backlogs. Our trace-driven simulation results show that TOFEC's adaptation mechanism converges to an appropriate code that provides the optimal throughput-delay trade-off without reducing system capacity. Compared to a non-adaptive strategy optimized for throughput, TOFEC delivers @math lower latency under light workloads; compared to a non-adaptive strategy optimized for latency, TOFEC can scale to support over @math as many requests. (4) We propose a simpler greedy solution that performs on a par with TOFEC in average delay performance, but exhibits significantly more performance variations.
|
FEC in connection with multiple paths and/or multiple servers is a well-investigated topic in the literature @cite_14 @cite_2 @cite_12 @cite_3 . However, very little attention has been devoted to queueing delays. FEC in the context of network coding or coded scheduling has also been a popular topic from the perspectives of throughput (or network utility) maximization and throughput vs. service delay trade-offs @cite_1 @cite_18 @cite_13 @cite_5 . Although some of these works incorporate queueing delay analysis, the treatment is largely for broadcast wireless channels with quite different system characteristics and constraints. FEC has also been extensively studied in the context of distributed storage, from the standpoint of attaining high durability and availability at high storage efficiency @cite_19 @cite_0 @cite_9 .
|
{
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_9",
"@cite_1",
"@cite_3",
"@cite_0",
"@cite_19",
"@cite_2",
"@cite_5",
"@cite_13",
"@cite_12"
],
"mid": [
"",
"2164817605",
"2110555585",
"2122883255",
"2065185800",
"133843592",
"2105185344",
"",
"",
"",
"2056386972"
],
"abstract": [
"",
"In this paper, we propose a novel transport protocol that effectively utilizes available bandwidth and diversity gains provided by heterogeneous, highly lossy paths. Our Multi-Path LOss-Tolerant (MPLOT) protocol can be used to provide significant gains in the goodput of wireless mesh networks, subject to bursty, correlated losses with average loss-rates as high as 50 , and random outage events. MPLOT makes intelligent use of erasure codes to guard against packets losses, and a Hybrid-ARQ FEC scheme to reduce packet recovery latency, where the redundancy is adaptively provisioned into both proactive and reactive FECs. MPLOT uses dynamic packet mapping based on current path characteristics, and does not require packets to be delivered in sequence to ensure reliability. We present a theoretical analysis of the different design choices of MPLOT and show that MPLOT makes an optimal trade-off between goodput and delay constraints. We test MPLOT, through simulations, under a variety of test scenarios and show that it effectively exploits path diversity in addition to aggregating path bandwidths. We also show that MPLOT is fair to single-path protocols like TCP-SACK.",
"Distributed storage systems provide large-scale reliable data storage by storing a certain degree of redundancy in a decentralized fashion on a group of storage nodes. To recover from data losses due to the instability of these nodes, whenever a node leaves the system, additional redundancy should be regenerated to compensate such losses. In this context, the general objective is to minimize the volume of actual network traffic caused by such regenerations. A class of codes, called regenerating codes, has been proposed to achieve an optimal trade-off curve between the amount of storage space required for storing redundancy and the network traffic during the regeneration. In this paper, we jointly consider the choices of regenerating codes and network topologies. We propose a new design, referred to as RCTREE, that combines the advantage of regenerating codes with a tree-structured regeneration topology. Our focus is the efficient utilization of network links, in addition to the reduction of the regeneration traffic. With the extensive analysis and quantitative evaluations, we show that RCTREE is able to achieve a both fast and stable regeneration, even with departures of storage nodes during the regeneration.",
"In this paper, we concentrate on opportunistic scheduling for multicast information. We pose the problem as a multicast throughput optimization problem. As a solution we present how one can jointly utilize fixed-rate and rateless erasure coding along with simple rate adaptation techniques in order to achieve the optimal multicast throughput per user. We first investigate the performance of the proposed system under i.i.d. channel conditions. Our analysis shows a linear gain for the multicast capacity over i.i.d. Rayleigh fading channels with respect to the number of users. Since the established results require coding over large number of blocks and hence induce large decoding delays, we extend our analysis to the cases where we code over shorter block lengths and thus quantify the delay-capacity tradeoffs under a simple setting. We further look into non-i.i.d. channel conditions and show achievable gains by modifying a scheduling heuristic whose fairness is well- established for opportunistic scheduling of unicast flows. Our overall evaluations demonstrate that under both i.i.d. and non-i.i.d. channel conditions, opportunistic multicasting with erasure coding can significantly improve the performance over the traditional techniques used in today's communication systems.",
"BitTorrent is probably the most famous file-sharing protocol used in the Internet currently. It represents more than half of the P2P traffic. Various applications are using BitTorrent-like protocols to deliver the resource and implement techniques to perform a reliable data transmission. Forward Error Correction (FEC) is an efficient mechanism used for this goal. This paper proposes a performance evaluation of FEC implemented on BitTorrent protocol. A simulation framework has been developed to evaluate the improvement depending on many factors like the leeches seeds number and capacities, the network nature (homogeneous or heterogeneous), the resource size, and the FEC redundancy ratio. The completion time metric shows that FEC is a method that accelerates the data access in some specific network configurations. On the contrary, this technique can also disrupt the system in some cases since it introduces an overhead.",
"",
"Distributed storage systems provide reliable access to data through redundancy spread over individually unreliable nodes. Application scenarios include data centers, peer-to-peer storage systems, and storage in wireless networks. Storing data using an erasure code, in fragments spread across nodes, requires less redundancy than simple replication for the same level of reliability. However, since fragments must be periodically replaced as nodes fail, a key question is how to generate encoded fragments in a distributed way while transferring as little data as possible across the network. For an erasure coded system, a common practice to repair from a single node failure is for a new node to reconstruct the whole encoded data object to generate just one encoded block. We show that this procedure is sub-optimal. We introduce the notion of regenerating codes, which allow a new node to communicate functions of the stored data from the surviving nodes. We show that regenerating codes can significantly reduce the repair bandwidth. Further, we show that there is a fundamental tradeoff between storage and repair bandwidth which we theoretically characterize using flow arguments on an appropriately constructed graph. By invoking constructive results in network coding, we introduce regenerating codes that can achieve any point in this optimal tradeoff.",
"",
"",
"",
"Mirror sites enable client requests to be serviced by any of a number of servers, reducing load at individual servers and dispersing network load. Typically, a client requests service from a single mirror site. We consider enabling a client to access a file from multiple mirror sites in parallel to speed up the download. To eliminate complex client-server negotiations that a straightforward implementation of this approach would require, we develop a feedback-free protocol based on erasure codes. We demonstrate that a protocol using fast Tornado codes can deliver dramatic speedups at the expense of transmitting a moderate number of additional packets into the network. This scalable solution extends naturally to allow multiple clients to access data from multiple mirror sites simultaneously. The approach applies naturally to wireless networks and satellite networks as well."
]
}
|
1403.5206
|
2950170811
|
Tumblr, as one of the most popular microblogging platforms, has gained momentum recently. It is reported to have 166.4 million users and 73.4 billion posts as of January 2014. While many articles about Tumblr have been published in the major press, there has been little scholarly work so far. In this paper, we provide a pioneering analysis of Tumblr from a variety of aspects. We study the social network structure among Tumblr users, analyze its user-generated content, and describe reblogging patterns to analyze its user behavior. We aim to provide a comprehensive statistical overview of Tumblr and compare it with other popular social services, including the blogosphere, Twitter and Facebook, in answering a couple of key questions: What is Tumblr? How is Tumblr different from other social media networks? In short, we find Tumblr has richer content than other microblogging platforms, and it combines characteristics of social networking, the traditional blogosphere, and social media. This work serves as an early snapshot of Tumblr that later work can leverage.
|
There is a rich literature on both existing and emerging online social network services. Statistical patterns across different types of social networks have been reported, including the traditional blogosphere @cite_17 , user-generated content platforms like Flickr, Youtube and LiveJournal @cite_13 , Twitter @cite_1 @cite_18 , the instant messenger network @cite_19 , Facebook @cite_7 , and Pinterest @cite_23 @cite_3 . The majority of them observe shared patterns such as a long-tail distribution of user degrees (power law or power law with exponential cut-off), small diameters, and homophily effects in terms of user profiles (age or location), but not with respect to gender. Indeed, people are more likely to talk to the opposite sex @cite_19 . The recent study of Pinterest observed that women tend to be more active and engaged than men @cite_3 , and that women and men have different interests @cite_16 . We have compared Tumblr's patterns with those of other social networks in Table and observed that most of these trends hold in Tumblr, apart from some numerical differences.
|
{
"cite_N": [
"@cite_18",
"@cite_7",
"@cite_1",
"@cite_3",
"@cite_19",
"@cite_23",
"@cite_16",
"@cite_13",
"@cite_17"
],
"mid": [
"2101196063",
"1893161742",
"2046804949",
"2277141067",
"2157579446",
"2094736127",
"2129165073",
"2115022330",
""
],
"abstract": [
"Twitter, a microblogging service less than three years old, commands more than 41 million users as of July 2009 and is growing fast. Twitter users tweet about any topic within the 140-character limit and follow others to receive their tweets. The goal of this paper is to study the topological characteristics of Twitter and its power as a new medium of information sharing. We have crawled the entire Twitter site and obtained 41.7 million user profiles, 1.47 billion social relations, 4,262 trending topics, and 106 million tweets. In its follower-following topology analysis we have found a non-power-law follower distribution, a short effective diameter, and low reciprocity, which all mark a deviation from known characteristics of human social networks [28]. In order to identify influentials on Twitter, we have ranked users by the number of followers and by PageRank and found two rankings to be similar. Ranking by retweets differs from the previous two rankings, indicating a gap in influence inferred from the number of followers and that from the popularity of one's tweets. We have analyzed the tweets of top trending topics and reported on their temporal behavior and user participation. We have classified the trending topics based on the active period and the tweets and show that the majority (over 85%) of topics are headline news or persistent news in nature. A closer look at retweets reveals that any retweeted tweet is to reach an average of 1,000 users no matter what the number of followers is of the original tweet. Once retweeted, a tweet gets retweeted almost instantly on next hops, signifying fast diffusion of information after the 1st retweet. To the best of our knowledge this work is the first quantitative study on the entire Twittersphere and information diffusion on it.",
"We study the structure of the social graph of active Facebook users, the largest social network ever analyzed. We compute numerous features of the graph including the number of users and friendships, the degree distribution, path lengths, clustering, and mixing patterns. Our results center around three main observations. First, we characterize the global structure of the graph, determining that the social network is nearly fully connected, with 99.91% of individuals belonging to a single large connected component, and we confirm the \"six degrees of separation\" phenomenon on a global scale. Second, by studying the average local clustering coefficient and degeneracy of graph neighborhoods, we show that while the Facebook graph as a whole is clearly sparse, the graph neighborhoods of users contain surprisingly dense structure. Third, we characterize the assortativity patterns present in the graph by studying the basic demographic and network properties of users. We observe clear degree assortativity and characterize the extent to which \"your friends have more friends than you\". Furthermore, we observe a strong effect of age on friendship preferences as well as a globally modular community structure driven by nationality, but we do not find any strong gender homophily. We compare our results with those from smaller social networks and find mostly, but not entirely, agreement on common structural network characteristics.",
"Microblogging is a new form of communication in which users can describe their current status in short posts distributed by instant messages, mobile phones, email or the Web. Twitter, a popular microblogging tool has seen a lot of growth since it launched in October, 2006. In this paper, we present our observations of the microblogging phenomena by studying the topological and geographical properties of Twitter's social network. We find that people use microblogging to talk about their daily activities and to seek or share information. Finally, we analyze the user intentions associated at a community level and show how users with similar intentions connect with each other.",
"Online social networks (OSNs) have become popular platforms for people to connect and interact with each other. Among those networks, Pinterest has recently become noteworthy for its growth and promotion of visual over textual content. The purpose of this study is to analyze this image-based network in a gender-sensitive fashion, in order to understand (i) user motivation and usage pattern in the network, (ii) how communications and social interactions happen and (iii) how users describe themselves to others. This work is based on more than 220 million items generated by 683,273 users. We were able to find significant differences w.r.t. all mentioned aspects. We observed that, although the network does not encourage direct social communication, females make more use of lightweight interactions than males. Moreover, females invest more effort in reciprocating social links, are more active and generalist in content generation, and describe themselves using words of affection and positive emotions. Males, on the other hand, are more likely to be specialists and tend to describe themselves in an assertive way. We also observed that each gender has different interests in the network, females tend to make more use of the network's commercial capabilities, while males are more prone to the role of curators of items that reflect their personal taste. It is important to understand gender differences in online social networks, so one can design services and applications that leverage human social interactions and provide more targeted and relevant user experiences.",
"We present a study of anonymized data capturing a month of high-level communication activities within the whole of the Microsoft Messenger instant-messaging system. We examine characteristics and patterns that emerge from the collective dynamics of large numbers of people, rather than the actions and characteristics of individuals. The dataset contains summary properties of 30 billion conversations among 240 million people. From the data, we construct a communication graph with 180 million nodes and 1.3 billion undirected edges, creating the largest social network constructed and analyzed to date. We report on multiple aspects of the dataset and synthesized graph. We find that the graph is well-connected and robust to node removal. We investigate on a planetary-scale the oft-cited report that people are separated by \"six degrees of separation\" and find that the average path length among Messenger users is 6.6. We find that people tend to communicate more with each other when they have similar age, language, and location, and that cross-gender conversations are both more frequent and of longer duration than conversations with the same gender.",
"Over the past decade, social network sites have become ubiquitous places for people to maintain relationships, as well as loci of intense research interest. Recently, a new site has exploded into prominence: Pinterest became the fastest social network to reach 10M users, growing 4000% in 2011 alone. While many Pinterest articles have appeared in the popular press, there has been little scholarly work so far. In this paper, we use a quantitative approach to study three research questions about the site. What drives activity on Pinterest? What role does gender play in the site's social connections? And finally, what distinguishes Pinterest from existing networks, in particular Twitter? In short, we find that being female means more repins, but fewer followers, and that four verbs set Pinterest apart from Twitter: use, look, want and need. This work serves as an early snapshot of Pinterest that later work can leverage.",
"Pinterest is a popular social curation site where people collect, organize, and share pictures of items. We studied a fundamental issue for such sites: what patterns of activity attract attention (audience and content reposting)? We organized our studies around two key factors: the extent to which users specialize in particular topics, and homophily among users. We also considered the existence of differences between female and male users. We found: (a) women and men differed in the types of content they collected and the degree to which they specialized; male Pinterest users were not particularly interested in stereotypically male topics; (b) sharing diverse types of content increases your following, but only up to a certain point; (c) homophily drives repinning: people repin content from other users who share their interests; homophily also affects following, but to a lesser extent. Our findings suggest strategies both for users (e.g., strategies to attract an audience) and maintainers (e.g., content recommendation methods) of social curation sites.",
"Online social networking sites like Orkut, YouTube, and Flickr are among the most popular sites on the Internet. Users of these sites form a social network, which provides a powerful means of sharing, organizing, and finding content and contacts. The popularity of these sites provides an opportunity to study the characteristics of online social network graphs at large scale. Understanding these graphs is important, both to improve current systems and to design new applications of online social networks. This paper presents a large-scale measurement study and analysis of the structure of multiple online social networks. We examine data gathered from four popular online social networks: Flickr, YouTube, LiveJournal, and Orkut. We crawled the publicly accessible user links on each site, obtaining a large portion of each social network's graph. Our data set contains over 11.3 million users and 328 million links. We believe that this is the first study to examine multiple online social networks at scale. Our results confirm the power-law, small-world, and scale-free properties of online social networks. We observe that the indegree of user nodes tends to match the outdegree; that the networks contain a densely connected core of high-degree nodes; and that this core links small groups of strongly clustered, low-degree nodes at the fringes of the network. Finally, we discuss the implications of these structural properties for the design of social network based systems.",
""
]
}
|
1403.5206
|
2950170811
|
Tumblr, as one of the most popular microblogging platforms, has gained momentum recently. It is reported to have 166.4 million users and 73.4 billion posts as of January 2014. While many articles about Tumblr have been published in the major press, there has been little scholarly work so far. In this paper, we provide a pioneering analysis of Tumblr from a variety of aspects. We study the social network structure among Tumblr users, analyze its user-generated content, and describe reblogging patterns to analyze its user behavior. We aim to provide a comprehensive statistical overview of Tumblr and compare it with other popular social services, including the blogosphere, Twitter and Facebook, in answering a couple of key questions: What is Tumblr? How is Tumblr different from other social media networks? In short, we find Tumblr has richer content than other microblogging platforms, and it combines characteristics of social networking, the traditional blogosphere, and social media. This work serves as an early snapshot of Tumblr that later work can leverage.
|
Lampe @cite_4 conducted a set of survey studies on Facebook users and showed that people use Facebook to maintain existing offline connections. Java @cite_1 presented one of the earliest research papers on Twitter, and found that users leverage Twitter to talk about their daily activities and to seek or share information. In addition, Schwartz @cite_23 is one of the early studies of Pinterest, showing from a statistical point of view that female users repin more but have fewer followers than male users. Hochman and Raz @cite_0 published an early paper using Instagram data, indicating differences in local color usage and cultural production rate in the analysis of location-based visual information flows.
|
{
"cite_N": [
"@cite_1",
"@cite_0",
"@cite_4",
"@cite_23"
],
"mid": [
"2046804949",
"2195691052",
"",
"2094736127"
],
"abstract": [
"Microblogging is a new form of communication in which users can describe their current status in short posts distributed by instant messages, mobile phones, email or the Web. Twitter, a popular microblogging tool has seen a lot of growth since it launched in October, 2006. In this paper, we present our observations of the microblogging phenomena by studying the topological and geographical properties of Twitter's social network. We find that people use microblogging to talk about their daily activities and to seek or share information. Finally, we analyze the user intentions associated at a community level and show how users with similar intentions connect with each other.",
"Picture-taking has never been easier. We now use our phones to snap photos and instantly share them with friends, family and strangers all around the world. Consequently, we seek ways to visualize, analyze and discover concealed socio-cultural characteristics and trends in this ever-growing flow of visual information. How do we then trace global and local patterns from the analysis of visual planetary–scale data? What types of insights can we draw from the study of these massive visual materials? In this study we use Cultural Analytics visualization techniques for the study of approximately 550,000 images taken by users of the location-based social photo sharing application Instagram. By analyzing images from New York City and Tokyo, we offer a comparative visualization research that indicates differences in local color usage, cultural production rate, and varied hue’s intensities— all form a unique, local, ‘Visual Rhythm’: a framework for the analysis of location-based visual information flows.",
"",
"Over the past decade, social network sites have become ubiquitous places for people to maintain relationships, as well as loci of intense research interest. Recently, a new site has exploded into prominence: Pinterest became the fastest social network to reach 10M users, growing 4000 in 2011 alone. While many Pinterest articles have appeared in the popular press, there has been little scholarly work so far. In this paper, we use a quantitative approach to study three research questions about the site. What drives activity on Pinterest? What role does gender play in the site's social connections? And finally, what distinguishes Pinterest from existing networks, in particular Twitter? In short, we find that being female means more repins, but fewer followers, and that four verbs set Pinterest apart from Twitter: use, look, want and need. This work serves as an early snapshot of Pinterest that later work can leverage."
]
}
|
1403.5012
|
1634653321
|
Cloud computing is a newly emerging distributed system that evolved from Grid computing. Task scheduling is a core research problem in cloud computing: it studies how to allocate tasks among the physical nodes so that the tasks get a balanced allocation, each task's execution cost is minimized, or the overall system performance is optimal. Unlike previous task scheduling based on time or cost, and aiming at the special reliability requirements in cloud computing, we propose a non-cooperative game model for a reliability-based task scheduling approach. This model takes the steady-state availability that computing nodes provide as the target and the task slicing strategies of the schedulers as the game strategies, then finds the Nash equilibrium solution. We also design a task scheduling algorithm based on this model. The experiments show that our task scheduling algorithm is better than the so-called balanced scheduling algorithm.
|
For balanced task scheduling, @cite_11 @cite_12 @cite_17 proposed models and task scheduling algorithms for distributed systems based on market models and game theory. @cite_9 @cite_8 introduced a balanced grid task scheduling model based on a non-cooperative game. The QoS-based grid job allocation problem is modeled as a cooperative game and the structure of the Nash bargaining solution is given in @cite_30 . In @cite_15 , Wei and Vasilakos presented a game theoretic method to schedule dependent computational cloud computing services with time and cost constraints, in which tasks are divided into subtasks. These works generally take the scheduler or job manager as the participant of the game and the total execution time of tasks as the optimization goal, proving the existence of a Nash equilibrium and giving an algorithm to compute it; or they model task scheduling as a cooperative game and give the structure of the cooperative solution.
|
{
"cite_N": [
"@cite_30",
"@cite_11",
"@cite_8",
"@cite_9",
"@cite_15",
"@cite_12",
"@cite_17"
],
"mid": [
"2122144684",
"2138978154",
"2129059987",
"",
"2143611771",
"2132104966",
""
],
"abstract": [
"A grid differs from traditional high performance computing systems in the heterogeneity of the computing nodes as well as the communication links that connect the different nodes together. In grids there exist users and service providers. The service providers provide the service for jobs that the users generate. Typically the amount of jobs generated by all the users are more than any single provider can handle alone with any acceptable quality of service (QoS). As such, the service providers need to cooperate and allocate jobs among them so that each is providing an acceptable QoS to their customers. QoS is of particular concerns to service providers as it directly affects customers' satisfaction and loyalty. In this paper, we propose a game theoretic solution to the QoS sensitive, grid job allocation problem. We model the QoS based, grid job allocation problem as a cooperative game and present the structure of the Nash Bargaining Solution. The proposed algorithm is fair to all users and represents a Pareto optimal solution to the QoS objective. One advantage of our scheme is the relatively low overhead and robust performance against inaccuracies in performance prediction information.",
"We applied techniques from game theory to help formulate and analyze solutions to two systems problems: discouraging selfishness in multi-hop wireless networks and enabling cooperation among ISPs in the Internet. It proved difficult to do so. This paper reports on our experiences and explains the issues that we encountered. It describes the ways in which the straightforward use of results from traditional game theory did not fit well with the requirements of our problems. It also identifies an important characteristic of the solutions we did eventually adopt that distinguishes them from those available using game theoretic approaches. We hope that this discussion will help to highlight formulations of game theory which are well-suited for problems involving computer systems.",
"Load balancing is a very important and complex problem in computational grids. A computational grid differs from traditional high-performance computing systems in the heterogeneity of the computing nodes, as well as the communication links that connect the different nodes together. There is a need to develop algorithms that can capture this complexity yet can be easily implemented and used to solve a wide range of load-balancing scenarios. In this paper, we propose a game-theoretic solution to the grid load-balancing problem. The algorithm developed combines the inherent efficiency of the centralized approach and the fault-tolerant nature of the distributed, decentralized approach. We model the grid load-balancing problem as a noncooperative game, whereby the objective is to reach the Nash equilibrium. Experiments were conducted to show the applicability of the proposed approaches. One advantage of our scheme is the relatively low overhead and robust performance against inaccuracies in performance prediction information.",
"",
"Cloud computing is a natural evolution for data and compute centers with automated systems management, workload balancing, and virtualization technologies. Cloud-based services integrate globally distributed resources into seamless computing platforms. In an open cloud computing framework, scheduling tasks while guaranteeing QoS constraints presents a challenging technical problem. This paper presents a game theoretic method to schedule dependent computational cloud computing services with time and cost constraints. An evolutionary mechanism is designed to fairly and approximately solve the NP-hard scheduling problem.",
"We study the nature of sharing resources in distributed collaborations such as Grids and peer-to-peer systems. By applying the theoretical framework of the multi-person prisoner's dilemma to this resource sharing problem, we show that in the absence of incentive schemes, individual users are apt to hold back resources, leading to decreased system utility. Using both the theoretical framework as well as simulations, we compare and contrast three different incentive schemes aimed at encouraging users to contribute resources. Our results show that soft-incentive schemes are effective in incentivizing autonomous entities to collaborate, leading to increased gains for all participants in the system.",
""
]
}
|
1403.4991
|
2951680431
|
We propose a unifying framework based on configuration linear programs and randomized rounding, for different energy optimization problems in the dynamic speed-scaling setting. We apply our framework to various scheduling and routing problems in heterogeneous computing and networking environments. We first consider the energy minimization problem of scheduling a set of jobs on a set of parallel speed scalable processors in a fully heterogeneous setting. For both the preemptive-non-migratory and the preemptive-migratory variants, our approach allows us to obtain solutions of almost the same quality as for the homogeneous environment. By exploiting the result for the preemptive-non-migratory variant, we are able to improve the best known approximation ratio for the single processor non-preemptive problem. Furthermore, we show that our approach allows to obtain a constant-factor approximation algorithm for the power-aware preemptive job shop scheduling problem. Finally, we consider the min-power routing problem where we are given a network modeled by an undirected graph and a set of uniform demands that have to be routed on integral routes from their sources to their destinations so that the energy consumption is minimized. We improve the best known approximation ratio for this problem.
|
@cite_2 proposed an optimal algorithm for finding a feasible preemptive schedule with minimum energy consumption when a single processor is available.
|
{
"cite_N": [
"@cite_2"
],
"mid": [
"2099961254"
],
"abstract": [
"The energy usage of computer systems is becoming an important consideration, especially for battery-operated systems. Various methods for reducing energy consumption have been investigated, both at the circuit level and at the operating systems level. In this paper, we propose a simple model of job scheduling aimed at capturing some key aspects of energy minimization. In this model, each job is to be executed between its arrival time and deadline by a single processor with variable speed, under the assumption that energy usage per unit time, P, is a convex function of the processor speed s. We give an off-line algorithm that computes, for any set of jobs, a minimum-energy schedule. We then consider some on-line algorithms and their competitive performance for the power function P(s) = s^p where p ≥ 2. It is shown that one natural heuristic, called the Average Rate heuristic, uses at most a constant times the minimum energy required. The analysis involves bounding the largest eigenvalue in matrices of a special type."
]
}
|
1403.4991
|
2951680431
|
We propose a unifying framework based on configuration linear programs and randomized rounding, for different energy optimization problems in the dynamic speed-scaling setting. We apply our framework to various scheduling and routing problems in heterogeneous computing and networking environments. We first consider the energy minimization problem of scheduling a set of jobs on a set of parallel speed scalable processors in a fully heterogeneous setting. For both the preemptive-non-migratory and the preemptive-migratory variants, our approach allows us to obtain solutions of almost the same quality as for the homogeneous environment. By exploiting the result for the preemptive-non-migratory variant, we are able to improve the best known approximation ratio for the single processor non-preemptive problem. Furthermore, we show that our approach allows to obtain a constant-factor approximation algorithm for the power-aware preemptive job shop scheduling problem. Finally, we consider the min-power routing problem where we are given a network modeled by an undirected graph and a set of uniform demands that have to be routed on integral routes from their sources to their destinations so that the energy consumption is minimized. We improve the best known approximation ratio for this problem.
|
The homogeneous multiprocessor case has been solved optimally in polynomial time when both the preemption and the migration of jobs are allowed @cite_22 @cite_16 @cite_20 @cite_0 . @cite_1 considered the homogeneous multiprocessor preemptive problem where the migration of jobs is not allowed. They proved that the problem is @math -hard even for instances with common release dates and common deadlines. @cite_19 gave a generic reduction transforming an optimal schedule for the homogeneous multiprocessor problem with migration to a @math -approximate solution for the homogeneous multiprocessor preemptive problem without migration, where @math is the @math -th Bell number.
|
{
"cite_N": [
"@cite_22",
"@cite_1",
"@cite_0",
"@cite_19",
"@cite_16",
"@cite_20"
],
"mid": [
"2049060423",
"",
"2789021212",
"2126119137",
"1984630032",
""
],
"abstract": [
"We investigate a very basic problem in dynamic speed scaling where a sequence of jobs, each specified by an arrival time, a deadline and a processing volume, has to be processed so as to minimize energy consumption. Previous work has focused mostly on the setting where a single variable-speed processor is available. In this paper we study multi-processor environments with m parallel variable-speed processors assuming that job migration is allowed, i.e. whenever a job is preempted it may be moved to a different processor. We first study the offline problem and show that optimal schedules can be computed efficiently in polynomial time. In contrast to a previously known strategy, our algorithm does not resort to linear programming. We develop a fully combinatorial algorithm that relies on repeated maximum flow computations. The approach might be useful to solve other problems in dynamic speed scaling. For the online problem, we extend two algorithms Optimal Available and Average Rate proposed by [16] for the single processor setting. We prove that Optimal Available is α^α-competitive, as in the single processor case. Here α>1 is the exponent of the power consumption function. While it is straightforward to extend Optimal Available to parallel processing environments, the competitive analysis becomes considerably more involved. For Average Rate we show a competitiveness of (3 )α 2 + 2α.",
"",
"An automatic transmission combining a stator reversing type torque converter and a speed changer having a planetary gear train composed of first and second planetary gears sharing one planetary carrier in common and a clutch or brake for controlling the planetary gear train, characterized by that a speed-increasing or speed-decreasing mechanism is installed in course of a shaft which transmits a power from said torque converter to a first sun gear or a second sun gear of the speed changer.",
"This paper investigates the problem of scheduling jobs on multiple speed-scaled processors without migration, i.e., we have constant α > 1 such that running a processor at speed s results in energy consumption s^α per time unit. We consider the general case where each job has a monotonously increasing cost function that penalizes delay. This includes the so far considered cases of deadlines and flow time. For any type of delay cost functions, we obtain the following results: Any β-approximation algorithm for a single processor yields a randomized βBα-approximation algorithm for multiple processors, where Bα is the αth Bell number, that is, the number of partitions of a set of size α. Analogously, we show that any β-competitive online algorithm for a single processor yields a βBα-competitive online algorithm for multiple processors. Finally, we show that any β-approximation algorithm for multiple processors with migration yields a deterministic βBα-approximation algorithm for multiple processors without migration. These facts improve several approximation ratios and lead to new results. For instance, we obtain the first constant factor online and offline approximation algorithm for multiple processors without migration for arbitrary release times, deadlines, and job sizes. All algorithms are based on the surprising fact that we can remove migration with a blowup of Bα in expectation.",
"In this paper we investigate dynamic speed scaling, a technique to reduce energy consumption in variable-speed microprocessors. While prior research has focused mostly on single processor environments, in this paper we investigate multiprocessor settings. We study the basic problem of scheduling a set of jobs, each specified by a release date, a deadline and a processing volume, on variable-speed processors so as to minimize the total energy consumption.",
""
]
}
|
1403.4991
|
2951680431
|
We propose a unifying framework based on configuration linear programs and randomized rounding, for different energy optimization problems in the dynamic speed-scaling setting. We apply our framework to various scheduling and routing problems in heterogeneous computing and networking environments. We first consider the energy minimization problem of scheduling a set of jobs on a set of parallel speed scalable processors in a fully heterogeneous setting. For both the preemptive-non-migratory and the preemptive-migratory variants, our approach allows us to obtain solutions of almost the same quality as for the homogeneous environment. By exploiting the result for the preemptive-non-migratory variant, we are able to improve the best known approximation ratio for the single processor non-preemptive problem. Furthermore, we show that our approach allows to obtain a constant-factor approximation algorithm for the power-aware preemptive job shop scheduling problem. Finally, we consider the min-power routing problem where we are given a network modeled by an undirected graph and a set of uniform demands that have to be routed on integral routes from their sources to their destinations so that the energy consumption is minimized. We improve the best known approximation ratio for this problem.
|
@cite_5 studied the min-power routing problem and for uniform demands, i.e. for the case where all the demands have the same value, they proposed a @math -approximation algorithm, where @math , with @math . For non-uniform demands, they proposed a @math -approximation algorithm, where @math is the maximum value of the demands.
|
{
"cite_N": [
"@cite_5"
],
"mid": [
"2010781926"
],
"abstract": [
"We study network optimization that considers power minimization as an objective. Studies have shown that mechanisms such as speed scaling can significantly reduce the power consumption of telecommunication networks by matching the consumption of each network element to the amount of processing required for its carried traffic. Most existing research on speed scaling focuses on a single network element in isolation. We aim for a network-wide optimization. Specifically, we study a routing problem with the objective of provisioning guaranteed speed bandwidth for a given demand matrix while minimizing power consumption. Optimizing the routes critically relies on the characteristic of the speed-power curve f(s), which is how power is consumed as a function of the processing speed s. If f is superadditive, we show that there is no bounded approximation in general for integral routing, i.e., each traffic demand follows a single path. This contrasts with the well-known logarithmic approximation for subadditive functions. However, for common speed-power curves such as polynomials f(s) = μs^α, we are able to show a constant approximation via a simple scheme of randomized rounding. We also generalize this rounding approach to handle the case in which a nonzero startup cost σ appears in the speed-power curve, i.e., f(s) = σ + μs^α, if s > 0; 0, if s = 0. We present an O((σ μ)^(1 α))-approximation, and we discuss why coming up with an approximation ratio independent of the startup cost may be hard. Finally, we provide simulation results to validate our algorithmic approaches."
]
}
|
1403.4415
|
1629472610
|
The prediction of graph evolution is an important and challenging problem in the analysis of networks and of the Web in particular. But while the appearance of new links is part of virtually every model of Web growth, the disappearance of links has received much less attention in the literature. To fill this gap, our approach DecLiNe (an acronym for DECay of LInks in NEtworks) aims to predict link decay in networks, based on structural analysis of corresponding graph models. In analogy to the link prediction problem, we show that analysis of graph structures can help to identify indicators for superfluous links under consideration of common network models. In doing so, we introduce novel metrics that denote the likelihood of certain links in social graphs to remain in the network, and combine them with state-of-the-art machine learning methods for predicting link decay. Our methods are independent of the underlying network type, and can be applied to such diverse networks as the Web, social networks and any other structure representable as a network, and can be easily combined with case-specific content analysis and adopted for a variety of social network mining, filtering and recommendation applications. In systematic evaluations with large-scale datasets of Wikipedia we show the practical feasibility of the proposed structure-based link decay prediction algorithms.
|
Spurious links: @cite_7 @cite_16 . Spam: @cite_20 . Disconnection: @cite_19 . Decay of links (reverts): @cite_17 @cite_6 . Decay of Web links (link removal): not addressed in prior work.
|
{
"cite_N": [
"@cite_7",
"@cite_6",
"@cite_19",
"@cite_16",
"@cite_20",
"@cite_17"
],
"mid": [
"1994321911",
"2118157326",
"2118775926",
"2056299517",
"202878612",
""
],
"abstract": [
"Network analysis is currently used in a myriad of contexts, from identifying potential drug targets to predicting the spread of epidemics and designing vaccination strategies and from finding friends to uncovering criminal activity. Despite the promise of the network approach, the reliability of network data is a source of great concern in all fields where complex networks are studied. Here, we present a general mathematical and computational framework to deal with the problem of data reliability in complex networks. In particular, we are able to reliably identify both missing and spurious interactions in noisy network observations. Remarkably, our approach also enables us to obtain, from those noisy observations, network reconstructions that yield estimates of the true network properties that are more accurate than those provided by the observations themselves. Our approach has the potential to guide experiments, to better characterize network data sets, and to drive new discoveries.",
"Wikipedia's remarkable success in aggregating millions of contributions can pose a challenge for current editors, whose hard work may be reverted unless they understand and follow established norms, policies, and decisions and avoid contentious or proscribed terms. We present a machine learning model for predicting whether a contribution will be reverted based on word level features. Unlike previous models relying on editor-level characteristics, our model can make accurate predictions based only on the words a contribution changes. A key advantage of the model is that it can provide feedback on not only whether a contribution is likely to be rejected, but also the particular words that are likely to be controversial, enabling new forms of intelligent interfaces and visualizations. We examine the performance of the model across a variety of Wikipedia articles.",
"We're investigating a specific pervasive architecture that can maintain continuous connections among MANET devices. We're targeting this architecture for computer-supported-cooperative-work (CSCW) and workflow management applications that would constitute the coordination layer. The basic problem of such an architecture is, how do you predict possible disconnections of devices, to let the coordination layer appropriately address connection anomalies? To solve this problem, we've developed a technique for predicting disconnections in MANETS. This technique serves as the basic layer of an innovative pervasive architecture for cooperative work and activity coordination in MANETS. We believe that in emergency scenarios, our proposed pervasive architecture can provide more effective coordination among team members.",
"Identifying and removing spurious links in complex networks is meaningful for many real applications and is crucial for improving the reliability of network data, which, in turn, can lead to a better understanding of the highly interconnected nature of various social, biological, and communication systems. In this paper, we study the features of different simple spurious link elimination methods, revealing that they may lead to the distortion of networks’ structural and dynamical properties. Accordingly, we propose a hybrid method that combines similarity-based index and edge-betweenness centrality. We show that our method can effectively eliminate the spurious interactions while leaving the network connected and preserving the network's functionalities.",
"",
""
]
}
|
1403.4415
|
1629472610
|
The prediction of graph evolution is an important and challenging problem in the analysis of networks and of the Web in particular. But while the appearance of new links is part of virtually every model of Web growth, the disappearance of links has received much less attention in the literature. To fill this gap, our approach DecLiNe (an acronym for DECay of LInks in NEtworks) aims to predict link decay in networks, based on structural analysis of corresponding graph models. In analogy to the link prediction problem, we show that analysis of graph structures can help to identify indicators for superfluous links under consideration of common network models. In doing so, we introduce novel metrics that denote the likelihood of certain links in social graphs to remain in the network, and combine them with state-of-the-art machine learning methods for predicting link decay. Our methods are independent of the underlying network type, and can be applied to such diverse networks as the Web, social networks and any other structure representable as a network, and can be easily combined with case-specific content analysis and adopted for a variety of social network mining, filtering and recommendation applications. In systematic evaluations with large-scale datasets of Wikipedia we show the practical feasibility of the proposed structure-based link decay prediction algorithms.
|
The evolution of networks such as the Web is subject to many models, such as preferential attachment @cite_18 or the spectral evolution model @cite_15 , most of which only model the addition of edges over time. The evolution of the Web hyperlink graph in particular has also been studied, for instance in @cite_23 . The evolution of the Wikipedia hyperlink graph was studied in 2006 @cite_5 . The problem of predicting the appearance of new links in networks is called link prediction @cite_24 . Unlike link prediction, the prediction of link disappearance has received very little attention.
|
{
"cite_N": [
"@cite_18",
"@cite_24",
"@cite_23",
"@cite_5",
"@cite_15"
],
"mid": [
"2008620264",
"2148847267",
"101653124",
"2116021465",
"1999427600"
],
"abstract": [
"Systems as diverse as genetic networks or the World Wide Web are best described as networks with complex topology. A common property of many large networks is that the vertex connectivities follow a scale-free power-law distribution. This feature was found to be a consequence of two generic mechanisms: (i) networks expand continuously by the addition of new vertices, and (ii) new vertices attach preferentially to sites that are already well connected. A model based on these two ingredients reproduces the observed stationary scale-free distributions, which indicates that the development of large networks is governed by robust self-organizing phenomena that go beyond the particulars of the individual systems.",
"Given a snapshot of a social network, can we infer which new interactions among its members are likely to occur in the near future? We formalize this question as the link-prediction problem, and we develop approaches to link prediction based on measures for analyzing the “proximity” of nodes in a network. Experiments on large coauthorship networks suggest that information about future interactions can be extracted from network topology alone, and that fairly subtle measures for detecting node proximity can outperform more direct measures. © 2007 Wiley Periodicals, Inc.",
"The Web is characterized by an extremely dynamic nature, as is proved by the rapid and significant growth it has experienced in the last decade and by its continuous evolution through creation or deletion of pages and hyperlinks. Consequently, analyzing the temporal evolution of the Web has become a crucial task that can provide search engines with valuable information for refining crawling policies, improving ranking models or detecting spam. Understanding how the Web evolves over time is a delicate challenge that requires integrating theoretical modeling efforts and empirical results. Obtaining such findings is very expensive in terms of bandwidth, computation time and human intervention. Robust software is required to gather the data and provide easy access to the collected information. Apart from commercial engines, there have been only a few attempts to perform such a task and to make the data available. Several previous works (e.g., ntoulas2004what,toyoda2006what ) analyzed a sequence of crawls, studying degree and frequency of change both in the content of Web pages and in the hyperlink structure to propose measures of their novelty. In this paper we study a temporal dataset BSVLTAG that has recently been made public. It is made of twelve 100M-page snapshots of the .uk domain. The Web graphs of the single snapshots have been merged into a global graph with labels that provide constant time access to temporal information. bordino2008temporal describes the work done to assess the data and to study some aspects of its evolution at the level of Web pages. We now analyze the structure of this huge time-aware graph at the level of interconnection between hosts. We study the hostgraph, i.e., the graph in which every node corresponds to a site, whereas a directed edge represents the existence of hyperlinks between pages belonging to two different hosts. Understanding the graph structure of the Web at this macroscopic level can provide valuable insights for improving Web site accessibility and navigation, or discovering related hosts. The notion of hostgraph was proposed by bharat2001who . A few studies analyzed hostgraphs ( baeza2005spain,liu2005china,toyoda2008thai ). baeza2007domain presents a comparison among the results of twelve characterization studies of several national domains.",
"Wikipedia is an online encyclopedia, available in more than 100 languages and comprising over 1 million articles in its English version. If we consider each Wikipedia article as a node and each hyperlink between articles as an arc we have a \"Wikigraph\", a graph that represents the link structure of Wikipedia. The Wikigraph differs from other Web graphs studied in the literature by the fact that there are explicit timestamps associated with each node?s events. This allows us to do a detailed analysis of the Wikipedia evolution over time. In the first part of this study we characterize this evolution in terms of users, editions and articles; in the second part, we depict the temporal evolution of several topological properties of the Wikigraph. The insights obtained from the Wikigraphs can be applied to large Web graphs from which the temporal data is usually not available.",
"We introduce and study the spectral evolution model, which characterizes the growth of large networks in terms of the eigenvalue decomposition of their adjacency matrices: In large networks, changes over time result in a change of a graph's spectrum, leaving the eigenvectors unchanged. We validate this hypothesis for several large social, collaboration, authorship, rating, citation, communication and tagging networks, covering unipartite, bipartite, signed and unsigned graphs. Following these observations, we introduce a link prediction algorithm based on the extrapolation of a network's spectral evolution. This new link prediction method generalizes several common graph kernels that can be expressed as spectral transformations. In contrast to these graph kernels, the spectral extrapolation algorithm does not make assumptions about specific growth patterns beyond the spectral evolution model. We thus show that it performs particularly well for networks with irregular, but spectral, growth patterns."
]
}
|
1403.4415
|
1629472610
|
The prediction of graph evolution is an important and challenging problem in the analysis of networks and of the Web in particular. But while the appearance of new links is part of virtually every model of Web growth, the disappearance of links has received much less attention in the literature. To fill this gap, our approach DecLiNe (an acronym for DECay of LInks in NEtworks) aims to predict link decay in networks, based on structural analysis of corresponding graph models. In analogy to the link prediction problem, we show that analysis of graph structures can help to identify indicators for superfluous links under consideration of common network models. In doing so, we introduce novel metrics that denote the likelihood of certain links in social graphs to remain in the network, and combine them with state-of-the-art machine learning methods for predicting link decay. Our methods are independent of the underlying network type, and can be applied to such diverse networks as the Web, social networks and any other structure representable as a network, and can be easily combined with case-specific content analysis and adopted for a variety of social network mining, filtering and recommendation applications. In systematic evaluations with large-scale datasets of Wikipedia we show the practical feasibility of the proposed structure-based link decay prediction algorithms.
|
Several graph growth models include link disappearance in addition to link creation, for instance a model to explain power laws @cite_26 . Other examples can be found in @cite_0 and @cite_3 , which give models for the growth of the Web in which edges are removed before others are added. While these methods succeed in predicting global characteristics of networks such as the degree distribution, they do not model the structure of the network and thus cannot be used for predicting individual links.
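The growth-with-removal mechanism shared by these models can be illustrated with a minimal simulation (our own sketch; the parameters and the uniform edge-removal rule are illustrative and not taken from the cited models):

```python
import random

def grow_with_deletion(steps, p_delete=0.2, seed=0):
    """Toy growth model: preferential attachment plus random edge removal.

    Each step adds a node attached preferentially (via the endpoint-list
    trick) and, with probability p_delete, removes one random existing edge.
    Returns the final edge list. All parameters are illustrative only.
    """
    rng = random.Random(seed)
    edges = [(0, 1)]       # start from a single edge between nodes 0 and 1
    endpoints = [0, 1]     # each node appears once per incident edge,
                           # so uniform choice is degree-proportional
    for new in range(2, steps + 2):
        target = rng.choice(endpoints)          # preferential attachment
        edges.append((new, target))
        endpoints += [new, target]
        if rng.random() < p_delete and len(edges) > 1:
            u, v = edges.pop(rng.randrange(len(edges)))  # remove a random edge
            endpoints.remove(u)                 # drop one occurrence of each
            endpoints.remove(v)                 # endpoint from the multiset
    return edges
```

With removal probability p, the edge count grows roughly as (1 - p) edges per step, while attachment stays degree-proportional; this reproduces global statistics but, as the cited works note, says nothing about which individual links disappear.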
|
{
"cite_N": [
"@cite_0",
"@cite_26",
"@cite_3"
],
"mid": [
"2963605078",
"2112566225",
"2129620481"
],
"abstract": [
"Power law distribution seems to be an important characteristic of web graphs. Several existing web graph models generate power law graphs by adding new vertices and non-uniform edge connectivities to existing graphs. Researchers have conjectured that preferential connectivity and incremental growth are both required for the power law distribution. In this paper, we propose a different web graph model with power law distribution that does not require incremental growth. We also provide a comparison of our model with several others in their ability to predict web graph clustering behavior.",
"We investigate the general conditions under which power laws emerge in networks for the degree distributions (the number of links a node has). Our study is based on a new and versatile random-walk network model (the exciton model) that includes all processes of link creation, link removal, node creation and node loss. From the principle of detailed balance simple 'litmus' test criteria for the emergence of power laws are derived. Results are compared with existing models in the network science literature, and we show how they can be generalized. An important result is that there is a very broad set of conditions under which power laws will emerge, among them nonlinearity in network formation. We show that power laws may be explained purely as a mesoscopic statistical phenomenon, on the basis of the scale-free network statistics assumption of equiprobability of existing nodes plus links. Hence, explanations rooted in making (microscopic) assumptions about individual preferences or behaviour can be avoided. The causal mechanism underlying this scale-free network statistics is, we suggest, the social and self-reinforcing mechanism of information feedback to network actors about structure and status of the network itself.",
"The pages and hyperlinks of the World-Wide Web may be viewed as nodes and edges in a directed graph. This graph is a fascinating object of study: it has several hundred million nodes today, over a billion links, and appears to grow exponentially with time. There are many reasons -- mathematical, sociological, and commercial -- for studying the evolution of this graph. In this paper we begin by describing two algorithms that operate on the Web graph, addressing problems from Web search and automatic community discovery. We then report a number of measurements and properties of this graph that manifested themselves as we ran these algorithms on the Web. Finally, we observe that traditional random graph models do not explain these observations, and we propose a new family of random graph models. These models point to a rich new sub-field of the study of random graphs, and raise questions about the analysis of graph algorithms on the Web."
]
}
|
1403.4415
|
1629472610
|
The prediction of graph evolution is an important and challenging problem in the analysis of networks and of the Web in particular. But while the appearance of new links is part of virtually every model of Web growth, the disappearance of links has received much less attention in the literature. To fill this gap, our approach DecLiNe (an acronym for DECay of LInks in NEtworks) aims to predict link decay in networks, based on structural analysis of corresponding graph models. In analogy to the link prediction problem, we show that analysis of graph structures can help to identify indicators for superfluous links under consideration of common network models. In doing so, we introduce novel metrics that denote the likelihood of certain links in social graphs to remain in the network, and combine them with state-of-the-art machine learning methods for predicting link decay. Our methods are independent of the underlying network type, and can be applied to such diverse networks as the Web, social networks and any other structure representable as a network, and can be easily combined with case-specific content analysis and adopted for a variety of social network mining, filtering and recommendation applications. In systematic evaluations with large-scale datasets of Wikipedia we show the practical feasibility of the proposed structure-based link decay prediction algorithms.
|
For social networks, most studies focus on non-structural reasons for the disappearance of links, such as interactions between people. Examples are the removal of friendships on Facebook ("unfriending") @cite_4 , and the removal of follow links on Twitter ("unfollowing") @cite_14 @cite_9 . A recent study @cite_11 finds that the most common reason for unfriending on Facebook is disagreement over political opinions. In all these works, the only structural indicator used for predicting the disappearance of social links is the number of common neighbors ( @cite_8 , @cite_9 and @cite_4 ). All three studies find that links connecting nodes with many common neighbors are less likely to be deleted from the social graph, and that content and interaction features are more predictive of link disappearance in social networks. Since content and in particular interaction between people are fundamentally different in form from hyperlinks between pages, these methods cannot be generalized to predict the disappearance of hyperlinks.
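The common-neighbors indicator used in these studies is straightforward to compute; a minimal sketch in pure Python over a hypothetical toy graph (the data and ranking are our own illustration, not taken from the cited papers):

```python
def common_neighbors(adj, u, v):
    """Number of common neighbors of u and v in an adjacency-set dict."""
    return len(adj[u] & adj[v])

def rank_edges_by_persistence(adj):
    """Rank edges by common-neighbor count: per the cited studies, links
    whose endpoints share many neighbors are less likely to be deleted."""
    edges = {(u, v) for u in adj for v in adj[u] if u < v}
    return sorted(edges, key=lambda e: common_neighbors(adj, *e), reverse=True)

# hypothetical toy graph: a dense cluster {1,2,3,4} plus an isolated pair {5,6}
adj = {
    1: {2, 3, 4}, 2: {1, 3, 4}, 3: {1, 2}, 4: {1, 2}, 5: {6}, 6: {5},
}
print(rank_edges_by_persistence(adj)[0])   # -> (1, 2): two common neighbors
```

The edge (5, 6) has no common neighbors and ranks last, i.e., it would be predicted as the most likely to decay by this purely structural criterion.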
|
{
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_8",
"@cite_9",
"@cite_11"
],
"mid": [
"2112130747",
"2054840617",
"2139422417",
"2186969405",
""
],
"abstract": [
"We analyze the dynamics of the behavior known as 'unfollow' in Twitter. We collected daily snapshots of the online relationships of 1.2 million Korean-speaking users for 51 days as well as all of their tweets. We found that Twitter users frequently unfollow. We then discover the major factors, including the reciprocity of the relationships, the duration of a relationship, the followees' informativeness, and the overlap of the relationships, which affect the decision to unfollow. We conduct interview with 22 Korean respondents to supplement the quantitative results. They unfollowed those who left many tweets within a short time, created tweets about uninteresting topics, or tweeted about the mundane details of their lives. To the best of our knowledge, this work is the first systematic study of the unfollow behavior in Twitter.",
"Recent analyses of self-reported data (mainly survey data) seem to suggest that social rules for ending relationships are transformed on Facebook. There seem to be a radical difference between offline and online worlds: reasons for ending online relationships are different than those for ending offline ones. These preliminary findings are, however, not supported by any quantitative evidence, and that is why we put them to test. We consider a variety of factors (e.g., age, gender, personality traits) that studies in sociology have found to be associated with friendship dissolution in the real world and study whether these factors are still important in the context of Facebook. Upon analyzing 34,012 Facebook relationships, we found that, on average, a relationship is more likely to break if it is not embedded in the same social circle, if it is between two people whose ages differ, and if one of the two is neurotic or introvert. Interestingly, we also found that a relationship with a common female friend is more robust than that with a common male friend. These findings are in line with previous analyses of another popular social-networking platform, that of Twitter. All this goes to suggest that there is not much difference between offline and online worlds and, given this predictability, one could easily build tools for monitoring online relations.",
"We investigate the breaking of ties between individuals in the online social network of Twitter, a hugely popular social media service. Building on sociology concepts such as strength of ties, embeddedness, and status, we explore how network structure alone influences tie breaks - the common phenomena of an individual ceasing to \"follow\" another in Twitter's directed social network. We examine these relationships using a dataset of 245,586 Twitter \"follow\" edges, and the persistence of these edges after nine months. We show that structural properties of individuals and dyads at Time 1 have a significant effect on the existence of edges at Time 2, and connect these findings to the social theories that motivated the study.",
"We propose a logistic regression model taking into account two analytically different sets of factors–structure and action. The factors include individual, dyadic, and triadic properties between ego and alter whose tie breakup is under consideration. From the fitted model using a large-scale data, we discover 5 structural and 7 actional variables to have significant explanatory power for unfollow. One unique finding from our quantitative analysis is that people appreciate receiving acknowledgements from others even in virtually unilateral communication relationships and are less likely to unfollow them: people are more of a receiver than a giver.",
""
]
}
|
1403.4415
|
1629472610
|
The prediction of graph evolution is an important and challenging problem in the analysis of networks and of the Web in particular. But while the appearance of new links is part of virtually every model of Web growth, the disappearance of links has received much less attention in the literature. To fill this gap, our approach DecLiNe (an acronym for DECay of LInks in NEtworks) aims to predict link decay in networks, based on structural analysis of corresponding graph models. In analogy to the link prediction problem, we show that analysis of graph structures can help to identify indicators for superfluous links under consideration of common network models. In doing so, we introduce novel metrics that denote the likelihood of certain links in social graphs to remain in the network, and combine them with state-of-the-art machine learning methods for predicting link decay. Our methods are independent of the underlying network type, and can be applied to such diverse networks as the Web, social networks and any other structure representable as a network, and can be easily combined with case-specific content analysis and adopted for a variety of social network mining, filtering and recommendation applications. In systematic evaluations with large-scale datasets of Wikipedia we show the practical feasibility of the proposed structure-based link decay prediction algorithms.
|
A related problem is the identification of spurious links, i.e., links that have been erroneously observed @cite_7 @cite_16 . Another related area of research is the detection of link spam on the Web @cite_20 . Similarly, the disconnection of nodes has been predicted in mobile ad hoc networks @cite_19 . These problems are structurally similar to the problem studied in this paper, but do not use features typical of link prediction, such as node degrees or the number of common neighbors.
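The similarity half of the hybrid criterion in @cite_16 can be sketched in a few lines (our own illustration; the cited method additionally combines this with edge-betweenness centrality, which is omitted here):

```python
def jaccard(adj, u, v):
    """Jaccard similarity of the neighborhoods of u and v."""
    union = adj[u] | adj[v]
    return len(adj[u] & adj[v]) / len(union) if union else 0.0

def spurious_candidates(adj, k=1):
    """Return the k edges whose endpoints have the lowest neighborhood
    similarity -- the simplest similarity-based flag for possibly
    spurious links (sketch only)."""
    edges = {(u, v) for u in adj for v in adj[u] if u < v}
    return sorted(edges, key=lambda e: jaccard(adj, *e))[:k]
```

On a toy graph consisting of a triangle {1, 2, 3} with a pendant edge (3, 4), the pendant edge has zero endpoint similarity and is flagged first; a real method must of course guard against removing such edges when they are structurally essential, which is exactly why @cite_16 adds betweenness.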
|
{
"cite_N": [
"@cite_19",
"@cite_16",
"@cite_20",
"@cite_7"
],
"mid": [
"2118775926",
"2056299517",
"202878612",
"1994321911"
],
"abstract": [
"We're investigating a specific pervasive architecture that can maintain continuous connections among MANET devices. We're targeting this architecture for computer-supported-cooperative-work (CSCW) and workflow management applications that would constitute the coordination layer. The basic problem of such an architecture is, how do you predict possible disconnections of devices, to let the coordination layer appropriately address connection anomalies? To solve this problem, we've developed a technique for predicting disconnections in MANETS. This technique serves as the basic layer of an innovative pervasive architecture for cooperative work and activity coordination in MANETS. We believe that in emergency scenarios, our proposed pervasive architecture can provide more effective coordination among team members.",
"Identifying and removing spurious links in complex networks is meaningful for many real applications and is crucial for improving the reliability of network data, which, in turn, can lead to a better understanding of the highly interconnected nature of various social, biological, and communication systems. In this paper, we study the features of different simple spurious link elimination methods, revealing that they may lead to the distortion of networks’ structural and dynamical properties. Accordingly, we propose a hybrid method that combines similarity-based index and edge-betweenness centrality. We show that our method can effectively eliminate the spurious interactions while leaving the network connected and preserving the network's functionalities.",
"",
"Network analysis is currently used in a myriad of contexts, from identifying potential drug targets to predicting the spread of epidemics and designing vaccination strategies and from finding friends to uncovering criminal activity. Despite the promise of the network approach, the reliability of network data is a source of great concern in all fields where complex networks are studied. Here, we present a general mathematical and computational framework to deal with the problem of data reliability in complex networks. In particular, we are able to reliably identify both missing and spurious interactions in noisy network observations. Remarkably, our approach also enables us to obtain, from those noisy observations, network reconstructions that yield estimates of the true network properties that are more accurate than those provided by the observations themselves. Our approach has the potential to guide experiments, to better characterize network data sets, and to drive new discoveries."
]
}
|
1403.4521
|
2098462599
|
Preferential attachment (PA) models of network structure are widely used due to their explanatory power and conceptual simplicity. PA models are able to account for the scale-free degree distributions observed in many real-world large networks through the remarkably simple mechanism of sequentially introducing nodes that attach preferentially to high-degree nodes. The ability to efficiently generate instances from PA models is a key asset in understanding both the models themselves and the real networks that they represent. Surprisingly, little attention has been paid to the problem of efficient instance generation. In this paper, we show that the complexity of generating network instances from a PA model depends on the preference function of the model, provide efficient data structures that work under any preference function, and present empirical results from an implementation based on these data structures. We demonstrate that, by indexing growing networks with a simple augmented heap, we can implement a network generator which scales many orders of magnitude beyond existing capabilities ( @math -- @math nodes). We show the utility of an efficient and general PA network generator by investigating the consequences of varying the preference functions of an existing model. We also provide "quicknet", a freely-available open-source implementation of the methods described in this work.
|
This work is concerned with the problem of efficiently generating networks from PA models. Some examples of PA models include the model of Price (directed networks with scale-free in-degrees) @cite_6 , the model of Barabasi and Albert (undirected networks with scale-free degrees) @cite_2 , Krapivsky's model (directed networks with non-independent in- and out-degrees which exhibit marginally scale-free behavior) @cite_4 , and a model like Krapivsky's, but with reciprocation @cite_1 .
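The degree-proportional attachment these models share can be sketched with the standard endpoint-list trick; below is a Price-style variant in which each new node cites one existing node with probability proportional to in-degree plus one (our own illustrative sketch, not any of the cited models' exact rules):

```python
import random

def price_model(n, seed=0):
    """Price-style directed model (sketch): each new node links to one
    existing node chosen with probability proportional to in-degree + 1.

    The list `targets` holds node i exactly (in_degree(i) + 1) times,
    so a uniform choice from it realizes the preference function.
    """
    rng = random.Random(seed)
    targets = [0]          # node 0 starts with weight in_degree + 1 = 1
    arcs = []
    for new in range(1, n):
        t = rng.choice(targets)
        arcs.append((new, t))      # arc new -> t (a "citation")
        targets += [t, new]        # t gains an in-link; new enters with weight 1
    return arcs
```

This runs in O(n) time and space but only works because the preference function is linear in degree; as discussed below, arbitrary preference functions need a different data structure.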
|
{
"cite_N": [
"@cite_1",
"@cite_4",
"@cite_6",
"@cite_2"
],
"mid": [
"2099674897",
"1994491374",
"2080450835",
"2008620264"
],
"abstract": [
"We present an analysis of the statistical properties and growth of the free on-line encyclopedia Wikipedia. By describing topics by vertices and hyperlinks between them as edges, we can represent this encyclopedia as a directed graph. The topological properties of this graph are in close analogy with those of the World Wide Web, despite the very different growth mechanism. In particular, we measure a scale-invariant distribution of the in and out degree and we are able to reproduce these features by means of a simple statistical model. As a major consequence, Wikipedia growth can be described by local rules such as the preferential attachment mechanism, though users, who are responsible of its evolution, can act globally on the network.",
"The in-degree and out-degree distributions of a growing network model are determined. The in-degree is the number of incoming links to a given node (and vice versa for out-degree). The network is built by (i) creation of new nodes which each immediately attach to a preexisting node, and (ii) creation of new links between preexisting nodes. This process naturally generates correlated in-degree and out-degree distributions. When the node and link creation rates are linear functions of node degree, these distributions exhibit distinct power-law forms. By tuning the parameters in these rates to reasonable values, exponents which agree with those of the web graph are obtained.",
"A Cumulative Advantage Distribution is proposed which models statistically the situation in which success breeds success. It differs from the Negative Binomial Distribution in that lack of success, being a non-event, is not punished by increased chance of failure. It is shown that such a stochastic law is governed by the Beta Function, containing only one free parameter, and this is approximated by a skew or hyperbolic distribution of the type that is widespread in bibliometrics and diverse social science phenomena. In particular, this is shown to be an appropriate underlying probabilistic theory for the Bradford Law, the Lotka Law, the Pareto and Zipf Distributions, and for all the empirical results of citation frequency analysis. As side results one may derive also the obsolescence factor for literature use. The Beta Function is peculiarly elegant for these manifold purposes because it yields both the actual and the cumulative distributions in simple form, and contains a limiting case of an inverse square law to which many empirical distributions conform.",
"Systems as diverse as genetic networks or the World Wide Web are best described as networks with complex topology. A common property of many large networks is that the vertex connectivities follow a scale-free power-law distribution. This feature was found to be a consequence of two generic mechanisms: (i) networks expand continuously by the addition of new vertices, and (ii) new vertices attach preferentially to sites that are already well connected. A model based on these two ingredients reproduces the observed stationary scale-free distributions, which indicates that the development of large networks is governed by robust self-organizing phenomena that go beyond the particulars of the individual systems."
]
}
|
1403.4521
|
2098462599
|
Preferential attachment (PA) models of network structure are widely used due to their explanatory power and conceptual simplicity. PA models are able to account for the scale-free degree distributions observed in many real-world large networks through the remarkably simple mechanism of sequentially introducing nodes that attach preferentially to high-degree nodes. The ability to efficiently generate instances from PA models is a key asset in understanding both the models themselves and the real networks that they represent. Surprisingly, little attention has been paid to the problem of efficient instance generation. In this paper, we show that the complexity of generating network instances from a PA model depends on the preference function of the model, provide efficient data structures that work under any preference function, and present empirical results from an implementation based on these data structures. We demonstrate that, by indexing growing networks with a simple augmented heap, we can implement a network generator which scales many orders of magnitude beyond existing capabilities ( @math -- @math nodes). We show the utility of an efficient and general PA network generator by investigating the consequences of varying the preference functions of an existing model. We also provide "quicknet", a freely-available open-source implementation of the methods described in this work.
|
There has been some prior work on efficiently generating networks from PA models. The authors of @cite_9 provide a method for computing an iteration of the linear Yule-Simon cumulative advantage process in constant time. This method can naturally be extended to network generation through linear PA; however, the extension to nonlinear PA (not shown due to space constraints) is very inefficient in both time and space. Ren and Li @cite_18 describe the simulation of a particular linear PA model, RX, but do not address the general problem of simulating networks from models with general preference functions. The authors of @cite_5 and D'Angelo and Ferretti @cite_13 provide methods for parallelizing the simulation of linear PA, but do not treat the nonlinear case. To the best of our knowledge, our work is the first to address efficient generation from PA models under possibly nonlinear preference functions.
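The augmented-index idea can be sketched with a Fenwick (binary indexed) tree holding a weight f(degree) per node, which supports O(log n) sampling proportional to f(degree) and O(log n) weight updates for an arbitrary, possibly nonlinear preference function f. This is our own illustrative sketch, not the paper's quicknet implementation:

```python
import random

class WeightedSampler:
    """Fenwick tree over per-node weights; O(log n) sample and update."""
    def __init__(self, n):
        self.n = n
        self.tree = [0.0] * (n + 1)

    def update(self, i, delta):
        i += 1
        while i <= self.n:
            self.tree[i] += delta
            i += i & -i

    def total(self):
        i, s = self.n, 0.0
        while i > 0:
            s += self.tree[i]
            i -= i & -i
        return s

    def sample(self, r):
        """0-based index of the node whose weight interval contains r."""
        pos, bit = 0, 1
        while bit * 2 <= self.n:
            bit *= 2
        while bit:
            nxt = pos + bit
            if nxt <= self.n and self.tree[nxt] < r:
                r -= self.tree[nxt]
                pos = nxt
            bit //= 2
        return pos

def generate_pa(n, f, seed=0):
    """Grow an undirected network: each new node attaches to an existing
    node chosen with probability proportional to f(degree)."""
    rng = random.Random(seed)
    deg = [0] * n
    sampler = WeightedSampler(n)
    deg[0] = deg[1] = 1
    sampler.update(0, f(1))
    sampler.update(1, f(1))
    edges = [(1, 0)]
    for new in range(2, n):
        target = sampler.sample(rng.uniform(0, sampler.total()))
        edges.append((new, target))
        # adjust target's weight from f(k) to f(k + 1); insert the new node
        sampler.update(target, f(deg[target] + 1) - f(deg[target]))
        deg[target] += 1
        deg[new] = 1
        sampler.update(new, f(1))
    return edges
```

For example, `generate_pa(10**6, lambda k: k ** 1.5)` samples under a superlinear preference function in O(n log n) total time, whereas the repeated-endpoint-list trick only handles the linear case.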
|
{
"cite_N": [
"@cite_13",
"@cite_5",
"@cite_9",
"@cite_18"
],
"mid": [
"2127024523",
"2121535328",
"22604616",
"2109465629"
],
"abstract": [
"In this paper, we present a new simulation tool for scale-free networks composed of a high number of nodes. The tool, based on discrete-event simulation, enables the definition of scale-free networks composed of heterogeneous nodes and complex application-level protocols. To satisfy the performance and scalability requirements, the simulator supports both sequential (i.e. monolithic) and parallel distributed (i.e. PADS) approaches. Furthermore, appropriate mechanisms for the communication overhead-reduction are implemented. To demonstrate the efficiency of the tool, we experiment with gossip protocols on top of scale-free networks generated by our simulator. Results of the simulations demonstrate the feasibility of our approach. The proposed tool is able to generate and manage large scale-free networks composed of thousands of nodes interacting following real-world dissemination protocols.",
"Evolution and structure of very large networks has attracted considerable attention in recent years. In this paper we study a possibility to simulate stochastic processes which move edges in a network leading to a scale-free structure. Scale-free networks are characterized by a ''fat-tail'' degree distribution with considerably higher presence of so called hubs - nodes with very high degree. To understand and predict very large networks it is important to study the possibility of parallel simulation. We consider a class of stochastic processes which keeps the number of edges in the network constant called equilibrium networks. This class is characterized by a preferential selection where the edge destinations are chosen according to a preferential function f(k) which depends on the node degree k. For this class of stochastic processes we prove that it is difficult if not impossible to design an exact parallel algorithm if the function f(k) is monotonous with an injective derivative. However, in the important case where f(k) is linear we present a fully scalable algorithm with almost linear speedup. The experimental results confirm the linear scalability on a large processor cluster.",
"We describe how different optimized algorithms may be effective in implementing the preferential attachment mechanism in different cases. We analyze performances with respect to different values of some parameters related to the Yule process associated to the preferential attachment. We examine how performance scales with system size and provide extensive simulations to support our theoretical findings.",
"With the continuous expansion of the network size, some networks gradually present the characteristics of social networks, especially, scale-free networks. At present, some theoretical studies of scale-free networks are mainly based on simulated platforms. Based on this, the question that how we fast simulate a network model possessing scale-free effects and presenting related characteristics of realistic networks in unbiased manner becomes a crucial problem. In the paper, we propose an algorithm, named RX, which can fast locate nodes to simulate a network. Then, we will analysis the characteristics of the network by experiments and compare RX's modeling time with BA's. (BA has become a popular algorithm in simulating scale-free networks.) The characteristics and the comparison can effectively prove that the algorithm we design can fast simulate scale-free networks to provide a good reference in building network models."
]
}
|
1403.4320
|
2024019662
|
We extend the theory of Thom spectra and the associated obstruction theory for orientations in order to support the construction of the string orientation of tmf, the spectrum of topological modular forms. We also develop the analogous theory of Thom spectra and orientations for associative ring spectra. Our work is based on a new model of the Thom spectrum as a derived smash product. An earlier version of this paper was part of arXiv:0810.4535.
|
In his 1970 MIT notes @cite_23 (in the version available at http: www.maths.ed.ac.uk aar books gtop.pdf , see the note on page 236), Sullivan introduced the classical obstruction theory for orientations and suggested that Dold's theory of homotopy functors @cite_21 could be used to construct the space @math of @math -oriented spherical fibrations. He also mentioned that the technology to construct the delooping @math was on its way. Soon thereafter, May, Quinn, Ray, and Tornehave @cite_20 constructed the space @math in the case that @math is an @math ring spectrum, and described the associated obstruction theory for orientations of spherical fibrations.
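For orientation, the shape of this obstruction theory can be recorded schematically (a standard formulation recalled here for context, not quoted from the cited sources):

```latex
% A spherical fibration over a space X is classified by a map
\[
  f \colon X \longrightarrow BGL_1(S),
\]
% and, for an $E_\infty$ ring spectrum $R$ with unit $S \to R$, the
% fibration admits an $R$-orientation if and only if the composite
\[
  X \xrightarrow{\ f\ } BGL_1(S) \longrightarrow BGL_1(R)
\]
% is null-homotopic, the second map being induced by the unit of $R$.
```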
|
{
"cite_N": [
"@cite_21",
"@cite_20",
"@cite_23"
],
"mid": [
"",
"1486044528",
"636313037"
],
"abstract": [
"",
"? functors.- Coordinate-free spectra.- Orientation theory.- E∞ ring spectra.- On kO-oriented bundle theories.- E∞ ring spaces and bipermutative categories.- The recognition principle for E∞ ring spaces.- Algebraic and topological K-theory.- Pairings in infinite loop space theory.",
"Algebraic Constructions.- Homotopy Theoretical Localization.- Completions in Homotopy Theory.- Spherical Fibrations.- Algebraic Geometry.- The Galois Group in Geometric Topology."
]
}
|
1403.4320
|
2024019662
|
We extend the theory of Thom spectra and the associated obstruction theory for orientations in order to support the construction of the string orientation of tmf, the spectrum of topological modular forms. We also develop the analogous theory of Thom spectra and orientations for associative ring spectra. Our work is based on a new model of the Thom spectrum as a derived smash product. An earlier version of this paper was part of arXiv:0810.4535.
|
Various aspects of the theory of units and Thom spectra have been revisited by a number of authors as the foundations of stable homotopy theory have advanced. For example, Schlichtkrull @cite_10 studied the units of a symmetric ring spectrum, and May and Sigurdsson @cite_30 have studied units and orientations from the perspective of their categories of parametrized spectra. Recently, May has prepared an authoritative paper revisiting operad (ring) spaces and operad (ring) spectra from a modern perspective @cite_7 , which has substantial overlap with some of our review of the classical foundations.
|
{
"cite_N": [
"@cite_30",
"@cite_10",
"@cite_7"
],
"mid": [
"1822242587",
"2124463956",
"2013924646"
],
"abstract": [
"Prologue Point-set topology, change functors, and proper actions: Introduction to Part I The point-set topology of parametrized spaces Change functors and compatibility relations Proper actions, equivariant bundles and fibrations Model categories and parametrized spaces: Introduction to Part II Topologically bicomplete model categories Well-grounded topological model categories The @math -model structure on @math Equivariant @math -type model structures Ex-fibrations and ex-quasifibrations The equivalence between Ho @math and @math Parametrized equivariant stable homotopy theory: Introduction to Part III Enriched categories and @math -categories The category of orthogonal @math -spectra over @math Model structures for parametrized @math -spectra Adjunctions and compatibility relations Module categories, change of universe, and change of groups Parametrized duality theory: Introduction to Part IV Fiberwise duality and transfer maps Closed symmetric bicategories The closed symmetric bicategory of parametrized spectra Costenoble-Waner duality Fiberwise Costenoble-Waner duality Homology and cohomology, Thom spectra, and addenda: Introduction to Part V Parametrized homology and cohomology theories Equivariant parametrized homology and cohomology Twisted theories and spectral sequences Parametrized FSP's and generalized Thom spectra Epilogue: Cellular philosophy and alternative approaches Bibliography Index Index of notation.",
"Let GL1(R) be the units of a commutative ring spectrum R. In this paper we identify the composition",
"E∞ ring spectra were defined in 1972, but the term has since acquired several alternative meanings. The same is true of several related terms. The new formulations are not always known to be equivalent to the old ones and even when they are, the notion of “equivalence” needs discussion: Quillen equivalent categories can be quite seriously inequivalent. Part of the confusion stems from a gap in the modern resurgence of interest in E∞ structures. E∞ ring spaces were also defined in 1972 and have never been redefined. They were central to the early applications and they tie in implicitly to modern applications. We summarize the relationships between the old notions and various new ones, explaining what is and is not known. We take the opportunity to rework and modernize many of the early results. New proofs and perspectives are sprinkled throughout."
]
}
|
1403.4053
|
2141926550
|
Enterprise Integration Patterns (EIP) are a collection of widely used stencils for integrating enterprise applications and business processes. These patterns represent a "de-facto" standard reference for design decisions when integrating enterprise applications. For each of these patterns we present the integration semantics (model) and the conceptual translation (syntax) to the Business Process Model and Notation (BPMN), which is a "de-facto" standard for modelling business process semantics and their runtime behavior.
|
Our approach stresses the control flow, data flow, and modeling capabilities of BPMN as well as its execution semantics. Recent work on "Data in Business Processes" @cite_7 shows that, besides COREPRO @cite_2 @cite_3 @cite_13 , which mainly deals with data-driven process modeling and (business) object status management, and UML activity diagrams, BPMN achieves the highest coverage in the categories relevant for our approach. Compared to BPMN, and apart from the topic of "object state" representation, neither @cite_4 nor Petri nets support data modeling at all @cite_7 . Based on that work, BPMN was further evaluated with respect to data dependencies within BPMN processes @cite_11 @cite_9 , however, not with respect to control and data flow as in our approach.
|
{
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_9",
"@cite_3",
"@cite_2",
"@cite_13",
"@cite_11"
],
"mid": [
"2129466958",
"2397724790",
"183524701",
"191569497",
"2157302999",
"1503557010",
"97493623"
],
"abstract": [
"Workflow management promises a new solution to an age-old problem: controlling, monitoring, optimizing and supporting business processes. What is new about workflow management is the explicit representation of the business process logic which allows for computerized support. This paper discusses the use of Petri nets in the context of workflow management. Petri nets are an established tool for modeling and analyzing processes. On the one hand, Petri nets can be used as a design language for the specification of complex workflows. On the other hand, Petri net theory provides for powerful analysis techniques which can be used to verify the correctness of workflow procedures. This paper introduces workflow management as an application domain for Petri nets, presents state-of-the-art results with respect to the verification of workflows, and highlights some Petri-net-based workflow tools.",
"Process and data are equally important for business process management. Process data is especially relevant in the context of automated business processes, process controlling, and representation of organizations’ core assets. One can discover many process modeling languages, each having a specific set of data modeling capabilities and the level of data awareness. The level of data awareness and data modeling capabilities vary significantly from one language to another. This paper evaluates several process modeling languages with respect to the role of data. To find a common ground for comparison, we develop a framework, which systematically organizes processand data-related aspects of the modeling languages elaborating on the data aspects. Once the framework is in place, we compare twelve process modeling languages against it. We generalize the results of the comparison and identify clusters of similar languages with respect to data awareness.",
"Process model abstraction is an effective approach to reduce the complexity and increase the understandability of process models. Several techniques provide process model abstraction capabilities, but none of them includes data in the abstraction procedure. To overcome this gap, we propose data abstraction capabilities for process model abstraction. The approach is based on use cases found in literature as well as encountered in practice. Altogether, we introduce a framework for data abstraction in process models and provide algorithmic guidance to apply it in practice. The approach is evaluated by an implementation and a scenario of a workshop organization, that contains process models on different levels of abstraction.",
"Unternehmen erreichen ihre Geschaftsziele zunehmend durch das systematische Management ihrer Geschaftsprozesse. Um komplexe Geschaftsziele zu realisieren, lassen sich diese Prozesse meist verknupfen und so Prozessstrukturen aufbauen. Ein sehr komplexes Geschaftsziel ist beispielsweise die Entwicklung der Fahrzeugelektronik im Automobilbau. Hierbei mussen insbesondere die zahlreichen Abhangigkeiten zwischen elektronischen Systemen erfasst und in entsprechende Abhangigkeiten zwischen Entwicklungsprozessen umgesetzt werden. Das Ergebnis ist eine datengetriebene Prozessstruktur, die eine starke Beziehung zwischen der Struktur des Produkts und den auszufuhrenden Prozessen beschreibt. Sie enthalt hunderte bis tausende Prozesse mit entsprechenden Abhangigkeiten. Die Erstellung und Koordination einer datengetriebenen Prozessstruktur ist sehr aufwandig und kann manuell kaum bewerkstelligt werden. Die vorliegende Arbeit stellt mit COREPRO (Configuration Based Release Processes) eine durchgangige IT-Losung fur die Unterstutzung datengetriebener Prozessstrukturen vor. COREPRO erlaubt ihre formale Beschreibung und Ausfuhrung basierend auf einem intuitiven Basismodell. Wir fuhren eine Modellierungsunterstutzung ein, die die Modellierungsaufwande fur datengetriebene Prozessstrukturen signifikant reduziert. Ferner erlaubt COREPRO die Adaption datengetriebener Prozessstrukturen auf einer hohen Abstraktionsebene, indem Anderungen einer Produktstruktur direkt auf Adaptionen der zugehorigen Prozessstruktur transformiert werden. Geeignete Konsistenzanalysen stellen sicher, dass bei der Adaption zur Laufzeit mogliche Ausnahmesituationen erkannt werden. Diese lassen sich in COREPRO durch verschiedene innovative Mechanismen behandeln. Sie erlauben dem Nutzer nicht nur flexible Eingriffe in den Ablauf einer Prozessstruktur, sondern zeigen ihm auch die Konsequenzen derartiger Eingriffe an. Die korrekte, verklemmungsfreie Ausfuhrung der Prozessstruktur wird hierbei durchgehend garantiert.",
"In the engineering domain, the development of complex products (e.g., cars) necessitates the coordination of thousands of (sub-) processes. One of the biggest challenges for process management systems is to support the modeling, monitoring and maintenance of the many interdependencies between these sub-processes. The resulting process structures are large and can be characterized by a strong relationship with the assembly of the product; i.e., the sub-processes to be coordinated can be related to the different product components. So far, subprocess coordination has been mainly accomplished manually, resulting in high efforts and inconsistencies. IT support is required to utilize the information about the product and its structure for deriving, coordinating and maintaining such data-driven process structures. In this paper, we introduce the COREPRO framework for the data-driven modeling of large process structures. The approach reduces modeling efforts significantly and provides mechanisms for maintaining data-driven process structures.",
"The coordination of complex process structures is a fundamental task for enterprises, such as in the automotive industry. Usually, such process structures consist of several (sub-)processes whose execution must be coordinated and synchronized. Effecting this manually is both ineffective and error-prone. However, we can benefit from the fact that these processes are correlated with product structures in many application domains, such as product engineering. Specifically, we can utilize the assembly of a complex real object, such as a car consisting of different mechanical, electrical or electronic subcomponents. Each sub-component has related design or testing processes, which have to be executed within an overall process structure according to the product structure. Our goal is to enable product-driven (i.e., data-driven) process modeling, execution and adaptation. We show the necessity of considering the product life cycle and the role of processes, which are triggering state transitions within the product life cycle. This paper discusses important issues related to the design, enactment and change of data-driven process structures. Our considerations are based on several case studies we conducted for engineering processes in the automotive industry.",
"Enacting business processes in process engines requires the coverage of control flow, resource assignments, and process data. While the first two aspects are well supported in current process engines, data dependencies need to be added and maintained manually by a process engineer. Thus, this task is error-prone and time-consuming. In this paper, we address the problem of modeling processes with complex data dependencies, e.g., m:n relationships, and their automatic enactment from process models. First, we extend BPMN data objects with few annotations to allow data dependency handling as well as data instance differentiation. Second, we introduce a pattern-based approach to derive SQL queries from process models utilizing the above mentioned extensions. Therewith, we allow automatic enactment of data-aware BPMN process models. We implemented our approach for the Activiti process engine to show applicability."
]
}
|
1403.3829
|
2404781972
|
We present a novel compact image descriptor for large scale image search. Our proposed descriptor - Geometric VLAD (gVLAD) is an extension of VLAD (Vector of Locally Aggregated Descriptors) that incorporates weak geometry information into the VLAD framework. The proposed geometry cues are derived as a membership function over keypoint angles, which carry evident and informative information yet are often discarded. A principled technique for learning the membership function by clustering angles is also presented. Further, to address the overhead of iterative codebook training over real-time datasets, a novel codebook adaptation strategy is outlined. Finally, we demonstrate the efficacy of the proposed gVLAD based retrieval framework, where we achieve more than 15% improvement in mAP over existing benchmarks.
|
The Bag-of-Words (BoW) representation is one of the most widely used methods for image retrieval @cite_17 @cite_10 . It quantizes each local descriptor, such as SIFT @cite_2 or SURF @cite_3 , to its nearest cluster center and encodes each image as a histogram over cluster centers, also known as "Visual Words". Good retrieval performance is achieved with a high-dimensional sparse BoW vector, in which case inverted lists can be used to implement efficient search. However, the search time grows quadratically as the number of images increases @cite_1 .
|
{
"cite_N": [
"@cite_1",
"@cite_3",
"@cite_2",
"@cite_10",
"@cite_17"
],
"mid": [
"2150307973",
"1677409904",
"2124386111",
"2141362318",
"2131846894"
],
"abstract": [
"We propose a randomized data mining method that finds clusters of spatially overlapping images. The core of the method relies on the min-Hash algorithm for fast detection of pairs of images with spatial overlap, the so-called cluster seeds. The seeds are then used as visual queries to obtain clusters which are formed as transitive closures of sets of partially overlapping images that include the seed. We show that the probability of finding a seed for an image cluster rapidly increases with the size of the cluster. The properties and performance of the algorithm are demonstrated on data sets with 104, 105, and 5 × 106 images. The speed of the method depends on the size of the database and the number of clusters. The first stage of seed generation is close to linear for databases sizes up to approximately 234 ? 1010 images. On a single 2.4 GHz PC, the clustering process took only 24 minutes for a standard database of more than 100,000 images, i.e., only 0.014 seconds per image.",
"In this paper, we present a novel scale- and rotation-invariant interest point detector and descriptor, coined SURF (Speeded Up Robust Features). It approximates or even outperforms previously proposed schemes with respect to repeatability, distinctiveness, and robustness, yet can be computed and compared much faster. This is achieved by relying on integral images for image convolutions; by building on the strengths of the leading existing detectors and descriptors (in casu, using a Hessian matrix-based measure for the detector, and a distribution-based descriptor); and by simplifying these methods to the essential. This leads to a combination of novel detection, description, and matching steps. The paper presents experimental results on a standard evaluation set, as well as on imagery obtained in the context of a real-life object recognition application. Both show SURF's strong performance.",
"An object recognition system has been developed that uses a new class of local image features. The features are invariant to image scaling, translation, and rotation, and partially invariant to illumination changes and affine or 3D projection. These features share similar properties with neurons in inferior temporal cortex that are used for object recognition in primate vision. Features are efficiently detected through a staged filtering approach that identifies stable points in scale space. Image keys are created that allow for local geometric deformations by representing blurred image gradients in multiple orientation planes and at multiple scales. The keys are used as input to a nearest neighbor indexing method that identifies candidate object matches. Final verification of each match is achieved by finding a low residual least squares solution for the unknown model parameters. Experimental results show that robust object recognition can be achieved in cluttered partially occluded images with a computation time of under 2 seconds.",
"In this paper, we present a large-scale object retrieval system. The user supplies a query object by selecting a region of a query image, and the system returns a ranked list of images that contain the same object, retrieved from a large corpus. We demonstrate the scalability and performance of our system on a dataset of over 1 million images crawled from the photo-sharing site, Flickr [3], using Oxford landmarks as queries. Building an image-feature vocabulary is a major time and performance bottleneck, due to the size of our dataset. To address this problem we compare different scalable methods for building a vocabulary and introduce a novel quantization method based on randomized trees which we show outperforms the current state-of-the-art on an extensive ground-truth. Our experiments show that the quantization has a major effect on retrieval quality. To further improve query performance, we add an efficient spatial verification stage to re-rank the results returned from our bag-of-words model and show that this consistently improves search quality, though by less of a margin when the visual vocabulary is large. We view this work as a promising step towards much larger, \"web-scale \" image corpora.",
"We describe an approach to object and scene retrieval which searches for and localizes all the occurrences of a user outlined object in a video. The object is represented by a set of viewpoint invariant region descriptors so that recognition can proceed successfully despite changes in viewpoint, illumination and partial occlusion. The temporal continuity of the video within a shot is used to track the regions in order to reject unstable regions and reduce the effects of noise in the descriptors. The analogy with text retrieval is in the implementation where matches on descriptors are pre-computed (using vector quantization), and inverted file systems and document rankings are used. The result is that retrieved is immediate, returning a ranked list of key frames shots in the manner of Google. The method is illustrated for matching in two full length feature films."
]
}
|
1403.3829
|
2404781972
|
We present a novel compact image descriptor for large scale image search. Our proposed descriptor - Geometric VLAD (gVLAD) is an extension of VLAD (Vector of Locally Aggregated Descriptors) that incorporates weak geometry information into the VLAD framework. The proposed geometry cues are derived as a membership function over keypoint angles, which carry evident and informative information yet are often discarded. A principled technique for learning the membership function by clustering angles is also presented. Further, to address the overhead of iterative codebook training over real-time datasets, a novel codebook adaptation strategy is outlined. Finally, we demonstrate the efficacy of the proposed gVLAD based retrieval framework, where we achieve more than 15% improvement in mAP over existing benchmarks.
|
However, most existing methods ignore the geometric information present in images. Spatial re-ranking @cite_10 is usually used as a geometric filter to remove unrelated images from retrieval results; however, due to its expensive computation, it is applied only to the top-ranked images. The spatial pyramid @cite_12 is a simple extension of the BoW representation that partitions the image into increasingly fine sub-regions and computes histograms of local features found inside each sub-region. It shows improved performance on scene classification tasks. The weak geometric consistency constraints (WGC) @cite_9 use angle and scale information from keypoints to verify the consistency of matching descriptors, which improves retrieval performance significantly. Recently, Zhang et al. @cite_6 proposed a technique to encode more spatial information through geometry-preserving visual phrases (GVP), which requires a pair of images to obtain geometric information. Chum et al. @cite_8 propose geometric min-hash, which extends min-hash by adding local spatial extent to increase the discriminability of the descriptor. It can be used for near-duplicate image detection but has not achieved state-of-the-art retrieval performance.
|
{
"cite_N": [
"@cite_8",
"@cite_9",
"@cite_6",
"@cite_10",
"@cite_12"
],
"mid": [
"2171448996",
"1556531089",
"1972378554",
"2141362318",
"2162915993"
],
"abstract": [
"We propose a novel hashing scheme for image retrieval, clustering and automatic object discovery. Unlike commonly used bag-of-words approaches, the spatial extent of image features is exploited in our method. The geometric information is used both to construct repeatable hash keys and to increase the discriminability of the description. Each hash key combines visual appearance (visual words) with semi-local geometric information. Compared with the state-of-the-art min-hash, the proposed method has both higher recall (probability of collision for hashes on the same object) and lower false positive rates (random collisions). The advantages of geometric min-hashing approach are most pronounced in the presence of viewpoint and scale change, significant occlusion or small physical overlap of the viewing fields. We demonstrate the power of the proposed method on small object discovery in a large unordered collection of images and on a large scale image clustering problem.",
"This paper improves recent methods for large scale image search. State-of-the-art methods build on the bag-of-features image representation. We, first, analyze bag-of-features in the framework of approximate nearest neighbor search. This shows the sub-optimality of such a representation for matching descriptors and leads us to derive a more precise representation based on 1) Hamming embedding (HE) and 2) weak geometric consistency constraints (WGC). HE provides binary signatures that refine the matching based on visual words. WGC filters matching descriptors that are not consistent in terms of angle and scale. HE and WGC are integrated within the inverted file and are efficiently exploited for all images, even in the case of very large datasets. Experiments performed on a dataset of one million of images show a significant improvement due to the binary signature and the weak geometric consistency constraints, as well as their efficiency. Estimation of the full geometric transformation, i.e., a re-ranking step on a short list of images, is complementary to our weak geometric consistency constraints and allows to further improve the accuracy.",
"State of the art methods for image and object retrieval exploit both appearance (via visual words) and local geometry (spatial extent, relative pose). In large scale problems, memory becomes a limiting factor - local geometry is stored for each feature detected in each image and requires storage larger than the inverted file and term frequency and inverted document frequency weights together. We propose a novel method for learning discretized local geometry representation based on minimization of average reprojection error in the space of ellipses. The representation requires only 24 bits per feature without drop in performance. Additionally, we show that if the gravity vector assumption is used consistently from the feature description to spatial verification, it improves retrieval performance and decreases the memory footprint. The proposed method outperforms state of the art retrieval algorithms in a standard image retrieval benchmark.",
"In this paper, we present a large-scale object retrieval system. The user supplies a query object by selecting a region of a query image, and the system returns a ranked list of images that contain the same object, retrieved from a large corpus. We demonstrate the scalability and performance of our system on a dataset of over 1 million images crawled from the photo-sharing site, Flickr [3], using Oxford landmarks as queries. Building an image-feature vocabulary is a major time and performance bottleneck, due to the size of our dataset. To address this problem we compare different scalable methods for building a vocabulary and introduce a novel quantization method based on randomized trees which we show outperforms the current state-of-the-art on an extensive ground-truth. Our experiments show that the quantization has a major effect on retrieval quality. To further improve query performance, we add an efficient spatial verification stage to re-rank the results returned from our bag-of-words model and show that this consistently improves search quality, though by less of a margin when the visual vocabulary is large. We view this work as a promising step towards much larger, \"web-scale \" image corpora.",
"This paper presents a method for recognizing scene categories based on approximate global geometric correspondence. This technique works by partitioning the image into increasingly fine sub-regions and computing histograms of local features found inside each sub-region. The resulting \"spatial pyramid\" is a simple and computationally efficient extension of an orderless bag-of-features image representation, and it shows significantly improved performance on challenging scene categorization tasks. Specifically, our proposed method exceeds the state of the art on the Caltech-101 database and achieves high accuracy on a large database of fifteen natural scene categories. The spatial pyramid framework also offers insights into the success of several recently proposed image descriptions, including Torralbas \"gist\" and Lowes SIFT descriptors."
]
}
|
1403.3807
|
162456623
|
Subjective Well-being (SWB), which refers to how people experience the quality of their lives, is of great use to public policy-makers as well as to economic and sociological research. Traditionally, the measurement of SWB relies on time-consuming and costly self-report questionnaires. Nowadays, people are motivated to share their experiences and feelings on social media, so we propose to sense SWB from the vast user-generated data on social media. By utilizing 1785 users’ social media data with SWB labels, we train machine learning models that are able to “sense” individual SWB. Our model, which attains state-of-the-art prediction accuracy, can then be applied to identify the SWB of large numbers of social media users in time and at low cost.
|
Watson et al. developed the Positive and Negative Affect Scale (PANAS) @cite_15 and the Psychological Well-Being Scale (PWBS) @cite_10 , which correspond to emotional well-being and positive functioning, respectively. The reliability and validity of PANAS and PWBS have been validated in long-term practice by numerous psychological studies. In this paper, we use these two scales for SWB assessment.
|
{
"cite_N": [
"@cite_15",
"@cite_10"
],
"mid": [
"2148905283",
"2163191809"
],
"abstract": [
"In recent studies of the structure of affect, positive and negative affect have consistently emerged as two dominant and relatively independent dimensions. A number of mood scales have been created to measure these factors; however, many existing measures are inadequate, showing low reliability or poor convergent or discriminant validity. To fill the need for reliable and valid Positive Affect and Negative Affect scales that are also brief and easy to administer, we developed two 10-item mood scales that comprise the Positive and Negative Affect Schedule (PANAS). The scales are shown to be highly internally consistent, largely uncorrelated, and stable at appropriate levels over a 2-month time period. Normative data and factorial and external evidence of convergent and discriminant validity for the scales are also presented.",
"A theoretical model of psychological well-being that encompasses 6 distinct dimensions of wellness (Autonomy, Environmental Mastery, Personal Growth, Positive Relations with Others, Purpose in Life, Self-Acceptance) was tested with data from a nationally representative sample of adults (N = 1,108), aged 25 and older, who participated in telephone interviews. Confirmatory factor analyses provided support for the proposed 6-factor model, with a single second-order super factor. The model was superior in fit over single-factor and other artifactual models. Age and sex differences on the various well-being dimensions replicated prior findings. Comparisons with other frequently used indicators (positive and negative affect, life satisfaction) demonstrated that the latter neglect key aspects of positive functioning emphasized in theories of health and well-being."
]
}
|
1403.3807
|
162456623
|
Subjective Well-being (SWB), which refers to how people experience the quality of their lives, is of great use to public policy-makers as well as to economic and sociological research. Traditionally, the measurement of SWB relies on time-consuming and costly self-report questionnaires. Nowadays, people are motivated to share their experiences and feelings on social media, so we propose to sense SWB from the vast user-generated data on social media. By utilizing 1785 users’ social media data with SWB labels, we train machine learning models that are able to “sense” individual SWB. Our model, which attains state-of-the-art prediction accuracy, can then be applied to identify the SWB of large numbers of social media users in time and at low cost.
|
Quite a lot of studies use LIWC (Linguistic Inquiry and Word Count) @cite_18 , the fruit of over two decades of human research, or other similar psychological language analysis tools, to quantify psychological expression on social media. Representative works, like the hedonometer (a happiness indicator) built on Twitter by @cite_22 and the Twitter sentiment modeling and stock market prediction by @cite_13 , identify sentiment (moods, emotions) in real time. Furthermore, as "face validation", the quantified metric is highly correlated with social events and economic indicators, and it can even predict economic trends. By modeling people's sentiment through statuses and posts on SNS, these works demonstrate that it is feasible to sense sentiment from social media.
|
{
"cite_N": [
"@cite_18",
"@cite_13",
"@cite_22"
],
"mid": [
"1979230891",
"2027860007",
"2099366530"
],
"abstract": [
"Two projects explored the links between language use and aging. In the first project, written or spoken text samples from disclosure studies from over 3,000 research participants from 45 different studies representing 21 laboratories in 3 countries were analyzed to determine how people change in their use of 14 text dimensions as a function of age. A separate project analyzed the collected works of 10 well-known novelists, playwrights, and poets who lived over the last 500 years. Both projects found that with increasing age, individuals use more positive and fewer negative affect words, use fewer self-references, use more future-tense and fewer past-tense verbs, arid demonstrate a general pattern of increasing cognitive complexity. Implications for using language as a marker of personality among current and historical texts are discussed.",
"Behavioral finance researchers can apply computational methods to large-scale social media data to better understand and predict markets.",
"Individual happiness is a fundamental societ al metric. Normally measured through self-report, happiness has often been indirectly characterized and overshadowed by more readily quantifiable economic indicators such as gross domestic product. Here, we examine expressions made on the online, global microblog and social networking service Twitter, uncovering and explaining temporal variations in happiness and information levels over timescales ranging from hours to years. Our data set comprises over 46 billion words contained in nearly 4.6 billion expressions posted over a 33 month span by over 63 million unique users. In measuring happiness, we construct a tunable, real-time, remote-sensing, and non-invasive, text-based hedonometer. In building our metric, made available with this paper, we conducted a survey to obtain happiness evaluations of over 10,000 individual words, representing a tenfold size improvement over similar existing word sets. Rather than being ad hoc, our word list is chosen solely by frequency of usage, and we show how a highly robust and tunable metric can be constructed and defended."
]
}
|
1403.3807
|
162456623
|
Subjective Well-being (SWB), which refers to how people experience the quality of their lives, is of great use to public policy-makers as well as to economic and sociological research. Traditionally, the measurement of SWB relies on time-consuming and costly self-report questionnaires. Nowadays, people are motivated to share their experiences and feelings on social media, so we propose to sense SWB from the vast user-generated data on social media. By utilizing 1785 users’ social media data with SWB labels, we train machine learning models that are able to “sense” individual SWB. Our model, which attains state-of-the-art prediction accuracy, can then be applied to identify the SWB of large numbers of social media users in time and at low cost.
|
Burke and colleagues explored the relationship between particular activities on SNS and feelings of social capital @cite_2 . They used the Facebook Intensity Scale and the UCLA Loneliness Scale to assess one's feeling of social capital, which can serve as an evaluation of one's cognitive feeling of getting along with others. However, online social well-being does not cover the full conception of SWB.
|
{
"cite_N": [
"@cite_2"
],
"mid": [
"2111972720"
],
"abstract": [
"Previous research has shown a relationship between use of social networking sites and feelings of social capital. However, most studies have relied on self-reports by college students. The goals of the current study are to (1) validate the common self-report scale using empirical data from Facebook, (2) test whether previous findings generalize to older and international populations, and (3) delve into the specific activities linked to feelings of social capital and loneliness. In particular, we investigate the role of directed interaction between pairs---such as wall posts, comments, and \"likes\" --- and consumption of friends' content, including status updates, photos, and friends' conversations with other friends. We find that directed communication is associated with greater feelings of bonding social capital and lower loneliness, but has only a modest relationship with bridging social capital, which is primarily related to overall friend network size. Surprisingly, users who consume greater levels of content report reduced bridging and bonding social capital and increased loneliness. Implications for designs to support well-being are discussed."
]
}
|
1403.3807
|
162456623
|
Subjective Well-being (SWB), which refers to how people experience the quality of their lives, is of great use to public policy-makers as well as to economic and sociological research. Traditionally, the measurement of SWB relies on time-consuming and costly self-report questionnaires. Nowadays, people are motivated to share their experiences and feelings on social media, so we propose to sense SWB from the vast user-generated data on social media. By utilizing 1785 users’ social media data with SWB labels, we train machine learning models that are able to “sense” individual SWB. Our model, which attains state-of-the-art prediction accuracy, can then be applied to identify a large number of social media users’ SWB in time at low cost.
|
Kosinski et al. @cite_3 and Schwartz et al. analyzed the correlation between users' personal traits and their behaviors or language usage on Facebook, where users' personality traits were measured with the Big-Five Personality Inventory. They also built models to predict users' traits such as personality through social media, which is further evidence for the "convergent validation" approach. @cite_4 used 839 behavioral features on microblog to predict personality, which demonstrated the feasibility of predicting psychological variables through behavioral features. Similar works @cite_12 have also been conducted on social media like Twitter. Studies have also taken an interest in the prediction of mental health status via social media. Hao et al. @cite_1 and De Choudhury et al. generalized this method to depression, anxiety, etc.: they analyzed both behavioral and linguistic features of users on microblogs and employed machine learning methods to predict depression, anxiety and other aspects of individuals' mental health status.
|
{
"cite_N": [
"@cite_1",
"@cite_4",
"@cite_12",
"@cite_3"
],
"mid": [
"95939850",
"1971183512",
"2163561184",
"2153803020"
],
"abstract": [
"The rapid development of social media brings about vast user generated content. Computational cyber-psychology, an interdisciplinary subject area, employs machine learning approaches to explore underlying psychological patterns. Our research aims at identifying users’ mental health status through their social media behavior. We collected both users’ social media data and mental health data from the most popular Chinese microblog service provider, Sina Weibo. By extracting linguistic and behavior features, and applying machine learning algorithms, we made a preliminary exploration to identify users’ mental health status automatically, which previously was mainly measured by well-designed psychological questionnaires. Our classification model achieves an accuracy of 72%, and the continuous prediction model achieved a correlation of 0.3 with the questionnaire-based score.",
"Because of its richness and availability, micro-blogging has become an ideal platform for conducting psychological research. In this paper, we proposed to predict active users' personality traits through micro-blogging behaviors. 547 Chinese active users of micro-blogging participated in this study. Their personality traits were measured by the Big Five Inventory, and digital records of micro-blogging behaviors were collected via web crawlers. After extracting 845 micro-blogging behavioral features, we first trained classification models utilizing Support Vector Machine (SVM), differentiating participants with high and low scores on each dimension of the Big Five Inventory. The classification accuracy ranged from 84% to 92%. We also built regression models utilizing PaceRegression methods, predicting participants' scores on each dimension of the Big Five Inventory. The Pearson correlation coefficients between predicted scores and actual scores ranged from 0.48 to 0.54. Results indicated that active users' personality traits could be predicted by micro-blogging behaviors.",
"We study the relationship between Facebook popularity (number of contacts) and personality traits on a large number of subjects. We test to which extent two prevalent viewpoints hold. That is, popular users (those with many social contacts) are the ones whose personality traits either predict many offline (real world) friends or predict propensity to maintain superficial relationships. We find that the predictor for number of friends in the real world (Extraversion) is also a predictor for number of Facebook contacts. We then test whether people who have many social contacts on Facebook are the ones who are able to adapt themselves to new forms of communication, present themselves in likable ways, and have propensity to maintain superficial relationships. We show that there is no statistical evidence to support such a conjecture.",
"We show that easily accessible digital records of behavior, Facebook Likes, can be used to automatically and accurately predict a range of highly sensitive personal attributes including: sexual orientation, ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age, and gender. The analysis presented is based on a dataset of over 58,000 volunteers who provided their Facebook Likes, detailed demographic profiles, and the results of several psychometric tests. The proposed model uses dimensionality reduction for preprocessing the Likes data, which are then entered into logistic linear regression to predict individual psychodemographic profiles from Likes. The model correctly discriminates between homosexual and heterosexual men in 88% of cases, African Americans and Caucasian Americans in 95% of cases, and between Democrat and Republican in 85% of cases. For the personality trait “Openness,” prediction accuracy is close to the test–retest accuracy of a standard personality test. We give examples of associations between attributes and Likes and discuss implications for online personalization and privacy."
]
}
|
1403.3807
|
162456623
|
Subjective Well-being (SWB), which refers to how people experience the quality of their lives, is of great use to public policy-makers as well as to economic and sociological research. Traditionally, the measurement of SWB relies on time-consuming and costly self-report questionnaires. Nowadays, people are motivated to share their experiences and feelings on social media, so we propose to sense SWB from the vast user-generated data on social media. By utilizing 1785 users’ social media data with SWB labels, we train machine learning models that are able to “sense” individual SWB. Our model, which attains state-of-the-art prediction accuracy, can then be applied to identify a large number of social media users’ SWB in time at low cost.
|
The most recent work of Schwartz et al. generalizes their method to LS prediction @cite_21 . Their work used LS as a single indicator of SWB and established a model to predict the LS of each county in the U.S.A. through Twitter data. In their work, the county, rather than the individual, is the unit of prediction. Their method mainly analyzes linguistic features on social media. Furthermore, their model introduced variables like "median age", "median household income" and "educational attainment", which can only be obtained via a costly census.
|
{
"cite_N": [
"@cite_21"
],
"mid": [
"6274171"
],
"abstract": [
"The language used in tweets from 1,300 different US counties was found to be predictive of the subjective well-being of people living in those counties as measured by representative surveys. Topics, sets of co-occurring words derived from the tweets using LDA, improved accuracy in predicting life satisfaction over and above standard demographic and socio-economic controls (age, gender, ethnicity, income, and education). The LDA topics provide a greater behavioural and conceptual resolution into life satisfaction than the broad socio-economic and demographic variables. For example, tied in with the psychological literature, words relating to outdoor activities, spiritual meaning, exercise, and good jobs correlate with increased life satisfaction, while words signifying disengagement like ’bored’ and ’tired’ show a negative association."
]
}
|
1403.3710
|
1969281448
|
This article proposes a novel energy-efficient multimedia delivery system called EStreamer. First, we study the relationship between buffer size at the client, burst-shaped TCP-based multimedia traffic, and energy consumption of wireless network interfaces in smartphones. Based on the study, we design and implement EStreamer for constant bit rate and rate-adaptive streaming. EStreamer can improve battery lifetime by 3x, 1.5x, and 2x while streaming over Wi-Fi, 3G, and 4G, respectively.
|
Traffic shaping is widely used to save communication energy for wireless multimedia streaming @cite_14 . Wang et al. proposed an adaptive streaming technique for UDP-based multimedia streaming. Their system works at the server or proxy and manipulates the burst interval depending on the packet loss and network conditions experienced by the streaming client. Our focus, on the other hand, is HTTP over TCP-based streaming. We modeled the energy consumption of bursty TCP traffic and then developed EStreamer based on these models. EStreamer relies on standard TCP properties to apply traffic shaping, which greatly simplifies the implementation of energy-aware streaming. It requires neither the support of any secondary protocol such as RTCP nor the modification of TCP or any other protocol. Therefore, EStreamer can be easily integrated with modern TCP-based streaming services.
|
{
"cite_N": [
"@cite_14"
],
"mid": [
"2045617788"
],
"abstract": [
"Energy conservation in battery powered mobile devices that perform wireless multimedia streaming has been a significant research problem since last decade. This is because these mobile devices consume a lot of power while receiving, decoding and ultimately, presenting the multimedia content. What makes things worse is the fact that battery technologies have not evolved enough to keep up with the rapid advancement of mobile devices. This survey examines solutions that have been proposed during the last few years, to improve the energy efficiency of wireless multimedia streaming in mobile hand-held devices. We categorize the research work according to different layers of the Internet protocol stack they utilize. Then, we again regroup these studies based on different traffic scheduling and multimedia content adaptation mechanisms. The traffic scheduling category contains those solutions that optimize the wireless receiving energy without changing the actual multimedia content. The second category on the other hand, specifically modifies the content, in order to reduce the energy consumed by the wireless receiver and to decode and view the content. We compare them and provide evidence of the fact that some of these tactics already exist in modern smartphones and provide energy savings with real measurements. In addition, we discuss some relevant literature on the complementary problem of energy-aware multimedia delivery from mobile devices and contrast with our target approaches for multimedia transmission to mobile devices."
]
}
|