aid: string (lengths 9 to 15)
mid: string (lengths 7 to 10)
abstract: string (lengths 78 to 2.56k)
related_work: string (lengths 92 to 1.77k)
ref_abstract: dict
1603.02776
2953029945
Without discourse connectives, classifying implicit discourse relations is a challenging task and a bottleneck for building a practical discourse parser. Previous research usually makes use of one kind of discourse framework such as PDTB or RST to improve the classification performance on discourse relations. Actually, under different discourse annotation frameworks, there exist multiple corpora which have internal connections. To exploit the combination of different discourse corpora, we design related discourse classification tasks specific to a corpus, and propose a novel Convolutional Neural Network embedded multi-task learning system to synthesize these tasks by learning both unique and shared representations for each task. The experimental results on the PDTB implicit discourse relation classification task demonstrate that our model achieves significant gains over baseline systems.
Supervised methods typically approach discourse analysis as a classification problem over pairs of sentence arguments. The first work to tackle this task on the PDTB was @cite_5 . The authors selected several surface features to train four binary classifiers, one for each of the top-level PDTB relation classes. Although other features proved useful, word pairs were the major contributor to most of these classifiers. Interestingly, they found that training these features on the PDTB was more useful than training them on an external corpus. Extending this work, Lin et al. (2009) further identified four different feature types representing the context, the constituent parse trees, the dependency parse trees and the raw text, respectively. In addition, Park and Cardie (2012) improved performance by optimizing the feature set. More recently, Biran and McKeown (2013) tackled the feature sparsity problem by aggregating features, and Rutherford and Xue (2014) replaced the word-pair features with Brown clusters, achieving state-of-the-art classification performance. Ji and Eisenstein (2015) used two recursive neural networks to represent the arguments and the entity spans, and combined the two representations to predict the discourse relation.
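The word-pair features mentioned above simply cross every token of the first argument with every token of the second. Below is a minimal, illustrative scikit-learn sketch of such a classifier; the argument pairs and labels are invented placeholders, not PDTB data, and the real systems use far richer feature sets.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def word_pair_features(arg1, arg2):
    # One indicator feature per (word from arg1, word from arg2) pair.
    return " ".join(f"{w1}|{w2}" for w1 in arg1.split() for w2 in arg2.split())

# Hypothetical argument pairs with top-level PDTB-style sense labels.
pairs = [("it was raining", "the game was cancelled"),
         ("profits rose sharply", "the stock fell anyway")]
labels = ["Contingency", "Comparison"]

docs = [word_pair_features(a1, a2) for a1, a2 in pairs]
clf = make_pipeline(
    CountVectorizer(binary=True, tokenizer=str.split, token_pattern=None),
    LogisticRegression())
clf.fit(docs, labels)
print(clf.predict([word_pair_features("it snowed", "school was closed")]))
```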
{ "cite_N": [ "@cite_5" ], "mid": [ "2109462987" ], "abstract": [ "We present a series of experiments on automatically identifying the sense of implicit discourse relations, i.e. relations that are not marked with a discourse connective such as \"but\" or \"because\". We work with a corpus of implicit relations present in newspaper text and report results on a test set that is representative of the naturally occurring distribution of senses. We use several linguistically informed features, including polarity tags, Levin verb classes, length of verb phrases, modality, context, and lexical features. In addition, we revisit past approaches using lexical pairs from unannotated text as features, explain some of their shortcomings and propose modifications. Our best combination of features outperforms the baseline from data intensive approaches by 4 for comparison and 16 for contingency." ] }
1603.02776
2953029945
Without discourse connectives, classifying implicit discourse relations is a challenging task and a bottleneck for building a practical discourse parser. Previous research usually makes use of one kind of discourse framework such as PDTB or RST to improve the classification performance on discourse relations. Actually, under different discourse annotation frameworks, there exist multiple corpora which have internal connections. To exploit the combination of different discourse corpora, we design related discourse classification tasks specific to a corpus, and propose a novel Convolutional Neural Network embedded multi-task learning system to synthesize these tasks by learning both unique and shared representations for each task. The experimental results on the PDTB implicit discourse relation classification task demonstrate that our model achieves significant gains over baseline systems.
There also exist semi-supervised approaches that exploit both labeled and unlabeled data for discourse relation classification. 39286423 (2010) proposed a semi-supervised method to exploit the co-occurrence of features in unlabeled data, and found it especially effective for improving accuracy on infrequent relation types. 39260331 presented a method to predict the missing connective based on a language model trained on an unannotated corpus; the predicted connective was then used as a feature to classify the implicit relation. An interesting work is @cite_10 , which designed a multi-task learning method that improves classification performance by leveraging both implicit and explicit discourse data.
{ "cite_N": [ "@cite_10" ], "mid": [ "2250776240" ], "abstract": [ "To overcome the shortage of labeled data for implicit discourse relation recognition, previous works attempted to automatically generate training data by removing explicit discourse connectives from sentences and then built models on these synthetic implicit examples. However, a previous study (Sporleder and Lascarides, 2008) showed that models trained on these synthetic data do not generalize very well to natural (i.e. genuine) implicit discourse data. In this work we revisit this issue and present a multi-task learning based system which can effectively use synthetic data for implicit discourse relation recognition. Results on PDTB data show that under the multi-task learning framework our models with the use of the prediction of explicit discourse connectives as auxiliary learning tasks, can achieve an averaged F1 improvement of 5.86 over baseline models." ] }
1603.02476
2953071126
We consider the problem of data collection from a continental-scale network of energy harvesting sensors, applied to tracking mobile assets in rural environments. Our application constraints favour a highly asymmetric solution, with heavily duty-cycled sensor nodes communicating with powered base stations. We study a novel scheduling optimisation problem for energy harvesting mobile sensor network, that maximises the amount of collected data under the constraints of radio link quality and energy harvesting efficiency, while ensuring a fair data reception. We show that the problem is NP-complete and propose a heuristic algorithm to approximate the optimal scheduling solution in polynomial time. Moreover, our algorithm is flexible in handling progressive energy harvesting events, such as with solar panels, or opportunistic and bursty events, such as with Wireless Power Transfer. We use empirical link quality data, solar energy, and WPT efficiency to evaluate the proposed algorithm in extensive simulations and compare its performance to state-of-the-art. We show that our algorithm achieves high data reception rates, under different fairness and node lifetime constraints.
Extensive studies have been conducted on link scheduling in cellular networks. In @cite_5 , the link quality is predicted by an application framework that tracks the direction of travel of mobile phones at the base station (BS), and energy-aware scheduling algorithms are developed for different application workloads such as syncing or streaming. Scheduling optimisations that consider multicast @cite_40 , quality-of-service assurance @cite_39 and fair relaying with multiple antennas @cite_2 have been proposed to achieve optimal delay, capacity gain or network utility. The majority of related work has focused on the scheduling problem in the context of wireless networks @cite_35 @cite_8 @cite_11 @cite_46 . However, the notion of fairness in wireless networks concerns fair allocation, such as of channels, of tasks among different queues, or of time slots among the links in each super frame, which differs from fairness in the data collection of a mobile sensor network (MSN).
{ "cite_N": [ "@cite_35", "@cite_8", "@cite_39", "@cite_40", "@cite_2", "@cite_5", "@cite_46", "@cite_11" ], "mid": [ "2158510188", "2143669931", "1981742769", "2098340878", "2098857789", "2139614748", "2102996315", "2142876645" ], "abstract": [ "We propose a unified static framework to study the interplay of user association and resource allocation in heterogeneous cellular networks. This framework allows us to compare the performance of three channel allocation strategies: Orthogonal deployment, Co-channel deployment, and Partially Shared deployment. We have formulated joint optimization problems that are non-convex integer programs, are NP-hard, and hence it is difficult to efficiently obtain exact solutions. We have, therefore, developed techniques to obtain upper bounds on the system's performance. We show that these upper bounds are tight by comparing them to feasible solutions. We have used these upper bounds as benchmarks to quantify how well different user association rules and resource allocation schemes perform. Our numerical results indicate that significant gains in throughput are achievable for heterogeneous networks if the right combination of user association and resource allocation is used. Noting the significant impact of the association rule on the performance, we propose a simple association rule that performs much better than all existing user association rules.", "This paper proposes FlashLinQ--a synchronous peer-to-peer wireless PHY MAC network architecture. FlashLinQ leverages the fine-grained parallel channel access offered by OFDM and incorporates an analog energy-level-based signaling scheme that enables signal-to-interference ratio (SIR)-based distributed scheduling. This new signaling mechanism, and the concomitant scheduling algorithm, enables efficient channel-aware spatial resource allocation, leading to significant gains over a CSMA CA system using RTS CTS. FlashLinQ is a complete system architecture including: 1) timing and frequency synchronization derived from cellular spectrum; 2) peer discovery; 3) link management; and 4) channel-aware distributed power, data rate, and link scheduling. FlashLinQ has been implemented for operation over licensed spectrum on a digital signal processor field-programmable gate array (DSP FPGA) platform. In this paper, we present FlashLinQ performance results derived from both measurements and simulations.", "We consider the problem of scheduling data in the downlink of a cellular network, over parallel time-varying channels, while providing quality of service (QoS) guarantees, to multiple users in the network. We design simple and efficient admission control, resource allocation, and scheduling algorithms for guaranteeing requested QoS. Our scheduling algorithms consists of two sets, namely, (what we call) joint KH 2) if the admission control allocates channel resources to the K&H scheduling only, due to loose delay requirements, then there is no need to use the RC scheduler. In designing the RC scheduler, we propose a reference channel approach and formulate the scheduler as a linear program, dispensing with complex dynamic programming approaches, by the use of a resource allocation scheme. 
An advantage of this formulation is that the desired QoS constraints can be explicitly enforced, by allotting sufficient channel resources to users, during call admission.", "Optimized opportunistic multicast scheduling (OMS) is studied for cellular networks, where the problem of efficiently transmitting a common set of fountain-encoded data from a single base station to multiple users over quasi-static fading channels is examined. The proposed OMS scheme better balances the tradeoff between multiuser diversity and multicast gain by transmitting to a subset of users in each time slot using the maximal data rate that ensures successful decoding by these users. We first analyze the system delay in homogeneous networks by capitalizing on extreme value theory and derive the optimal selection ratio (i.e., the portion of users that are selected in each time slot) that minimizes the delay. Then, we extend results to heterogeneous networks where users are subject to different channel statistics. By partitioning users into multiple approximately homogeneous rings, we turn a heterogeneous network into a composite of smaller homogeneous networks and derive the optimal selection ratio for the heterogeneous network. Computer simulations confirm theoretical results and illustrate that the proposed OMS can achieve significant performance gains in both homogeneous and heterogeneous networks as compared with the conventional unicast and broadcast scheduling.", "This paper examines the shared relay architecture for the wireless cellular network, where instead of deploying multiple separate relays within each cell sector, a single relay with multiple antennas is placed at the cell edge and is shared by multiple sectors. The advantage of shared relaying is that the joint processing of signals at the relay enables the mitigation of intercell interference. To maximize the benefit of shared relaying, the resource allocation and the scheduling of users among adjacent cell sectors need to be optimized jointly. Based on this motivation, this paper formulates a network utility maximization problem for the shared relay system that considers the practical wireless backhaul constraint of matching the relay-to-user rate demand with the base-station-to-relay rate supply using a set of pricing variables. In addition, zero-forcing beamforming is used at the shared relay to separate users spatially; multiple users are scheduled in the frequency domain to maximize frequency reuse. A heuristic but efficient scheduling and resource allocation algorithm is proposed accordingly. System-level simulations quantify the effectiveness of the proposed approach, and show that the incorporation of the shared relay can improve the overall network performance and in particular significantly increase the throughput of cell edge users as compared to separate relaying.", "Cellular radios consume more power and suffer reduced data rate when the signal is weak. According to our measurements, the communication energy per bit can be as much as 6x higher when the signal is weak than when it is strong. To realize energy savings, applications must preferentially communicate when the signal is strong, either by deferring non-urgent communication or by advancing anticipated communication to coincide with periods of strong signal. Allowing applications to perform such scheduling requires predicting signal strength, so that opportunities for energy-efficient communication can be anticipated. 
Furthermore, such prediction must be performed at little energy cost. In this paper, we make several contributions towards a practical system for energy-aware cellular data scheduling called Bartendr. First, we establish, via measurements, the relationship between signal strength and power consumption. Second, we show that location alone is not sufficient to predict signal strength and motivate the use of tracks to enable effective prediction. Finally, we develop energy-aware scheduling algorithms for different workloads - syncing and streaming - and evaluate these via simulation driven by traces obtained during actual drives, demonstrating energy savings of up to 60 . Our experiments have been performed on four cellular networks across two large metropolitan areas, one in India and the other in the U.S.", "Scheduling is a critical and challenging resource allocation mechanism for multihop wireless networks. It is well known that scheduling schemes that favor links with larger queue length can achieve high throughput performance. However, these queue-length-based schemes could potentially suffer from large (even infinite) packet delays due to the well-known last packet problem, whereby packets belonging to some flows may be excessively delayed due to lack of subsequent packet arrivals. Delay-based schemes have the potential to resolve this last packet problem by scheduling the link based on the delay the packet has encountered. However, characterizing throughput optimality of these delay-based schemes has largely been an open problem in multihop wireless networks (except in limited cases where the traffic is single-hop). In this paper, we investigate delay-based scheduling schemes for multihop traffic scenarios with fixed routes. We develop a scheduling scheme based on a new delay metric and show that the proposed scheme achieves optimal throughput performance. Furthermore, we conduct simulations to support our analytical results and show that the delay-based scheduler successfully removes excessive packet delays, while it achieves the same throughput region as the queue-length-based scheme.", "An important issue of supporting multi-user video streaming over wireless networks is how to optimize the systematic scheduling by intelligently utilizing the available network resources while, at the same time, to meet each video's Quality of Service (QoS) requirement. In this work, we study the problem of video streaming over multi-channel multi-radio multihop wireless networks, and develop fully distributed scheduling schemes with the goals of minimizing the video distortion and achieving certain fairness. We first construct a general distortion model according to the network?s transmission mechanism, as well as the rate distortion characteristics of the video. Then, we formulate the scheduling as a convex optimization problem, and propose a distributed solution by jointly considering channel assignment, rate allocation, and routing. Specifically, each stream strikes a balance between the selfish motivation of minimizing video distortion and the global performance of minimizing network congestions. Furthermore, we extend the proposed scheduling scheme by addressing the fairness problem. Unlike prior works that target at users' bandwidth or demand fairness, we propose a media-aware distortion-fairness strategy which is aware of the characteristics of video frames and ensures max-min distortion-fairness sharing among multiple video streams. 
We provide extensive simulation results which demonstrate the effectiveness of our proposed schemes." ] }
1603.02476
2953071126
We consider the problem of data collection from a continental-scale network of energy harvesting sensors, applied to tracking mobile assets in rural environments. Our application constraints favour a highly asymmetric solution, with heavily duty-cycled sensor nodes communicating with powered base stations. We study a novel scheduling optimisation problem for energy harvesting mobile sensor network, that maximises the amount of collected data under the constraints of radio link quality and energy harvesting efficiency, while ensuring a fair data reception. We show that the problem is NP-complete and propose a heuristic algorithm to approximate the optimal scheduling solution in polynomial time. Moreover, our algorithm is flexible in handling progressive energy harvesting events, such as with solar panels, or opportunistic and bursty events, such as with Wireless Power Transfer. We use empirical link quality data, solar energy, and WPT efficiency to evaluate the proposed algorithm in extensive simulations and compare its performance to state-of-the-art. We show that our algorithm achieves high data reception rates, under different fairness and node lifetime constraints.
A link scheduling policy for maximising throughput-utility in single-hop networks under network delay constraints is presented in @cite_38 . It establishes a delay-based policy for utility optimisation that provides deterministic worst-case delay bounds together with a total throughput-utility guarantee. The author of @cite_42 proposes an opportunistic scheduling algorithm that guarantees a bounded worst-case delay in single-hop wireless networks. However, those scheduling algorithms are not applicable to MSNs, because they do not consider the constraints of energy and fairness of collection. In @cite_4 , a sensing schedule among sensor nodes is presented that maximises the overall Quality of Monitoring (QoM) utility subject to the energy usage; the schedule is computed with a greedy algorithm over a utility that evaluates the quality of sensor readings. For body sensor networks, the authors of @cite_9 focus on polling-based communication protocols and address the problem of optimising the polling schedule to achieve minimal energy consumption and latency. They formulate the problem as a geometric program and solve it by convex optimisation, in the spirit of the toy sketch below.
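As a toy illustration of such a geometric-program formulation, the sketch below minimises an invented energy-plus-latency posynomial over a hypothetical polling period and transmit power using CVXPY's geometric-programming mode; the variables and constants are illustrative assumptions, not the model of @cite_9 .

```python
import cvxpy as cp

# Hypothetical decision variables; GP variables must be positive.
t = cp.Variable(pos=True)  # polling period
p = cp.Variable(pos=True)  # transmit power

energy = p * t    # monomial: energy spent per polling cycle
latency = 1 / t   # monomial: longer periods mean higher latency

# A posynomial objective with monomial/posynomial constraints keeps
# the problem a valid geometric program, solvable in polynomial time.
prob = cp.Problem(cp.Minimize(energy + latency),
                  [p * t <= 2,    # illustrative energy budget
                   p >= 0.1])     # illustrative minimum power
prob.solve(gp=True)
print(f"t*={t.value:.3f}, p*={p.value:.3f}, cost={prob.value:.3f}")
```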
{ "cite_N": [ "@cite_38", "@cite_9", "@cite_42", "@cite_4" ], "mid": [ "2083715455", "2093456343", "2115537518", "2111815723" ], "abstract": [ "It is well known that max-weight policies based on a queue backlog index can be used to stabilize stochastic networks, and that similar stability results hold if a delay index is used. Using Lyapunov Optimization, we extend this analysis to design a utility maximizing algorithm that uses explicit delay information from the head-of-line packet at each user. The resulting policy is shown to ensure deterministic worst-case delay guarantees, and to yield a throughput-utility that differs from the optimally fair value by an amount that is inversely proportional to the delay guarantee. Our results hold for a general class of 1-hop networks, including packet switches and multi-user wireless systems with time varying reliability.", "Body Sensor Networks (BSNs) consist of miniature sensors deployed on or implanted into the human body for health monitoring. Conserving the energy of these sensors, while guaranteeing a required level of performance, is a key challenge in BSNs. In terms of communication protocols, this translates to minimizing energy consumption while limiting the latency in data transfer. In this paper, we focus on polling-based communication protocols for BSNs, and address the problem of optimizing the polling schedule to achieve minimal energy consumption and latency. We show that this problem can be posed as a geometric program, which belongs to the class of convex optimization problems, solvable in polynomial time. We also introduce a dynamic priority vector for each sensor, based on the observation that relative priorities of sensors in a BSN change over time. This vector is used to develop a decision-tree based approach for resolving scheduling conflicts among devices. The proposed framework is applicable to a broad class of periodic polling-based communication protocols. We design one such protocol in detail and show that it achieves an improvement of approximately 45 over the widely accepted standard IEEE 802.15.4 MAC protocol.", "We first consider a multi-user, single-hop wireless network with arbitrarily varying (and possibly non-ergodic) arrivals and channels. We design an opportunistic scheduling algorithm that guarantees all sessions have a bounded worst case delay. The algorithm has no knowledge of the future, but yields throughput-utility that is close to (or better than) that of a T-slot lookahead policy that makes “ideal” decisions based on perfect knowledge up to T slots into the future. We then extend the algorithm to treat worst case delay guarantees in multi-hop networks. Our analysis uses a sample-path version of Lyapunov optimization together with a novel virtual queue structure.", "Wireless Sensor Networks (WSN) are often densely deployed in the region of interest in order to continuously monitor physical phenomenon. Due to highly deployment density and the nature of the physical phenomenon, nearby sensor readings are often highly correlated in both space domain and time domain. These spatial and temporal correlations bring significant potential advantages as well as challenges for developing efficient sensing scheduling protocols for WSN. In this paper, a theoretical framework is developed to model the Quality of Monitoring (QoM) by exploiting both spatial and temporal correlations. 
The objective of this work is to enable the development of efficient sensing scheduling protocols which exploit these advantageous intrinsic features of the WSN paradigm. Specially, we propose two sensing scheduling schemes in order to maximize the overall QoM subject to resource constraints (e.g., under fixed duty cycle). Extensive experiments validate our theoretical results." ] }
1603.02457
2295741145
In this paper, we define the reoptimization variant of the closest substring problem (CSP) under sequence addition. We show that, even with the additional information we have about the problem instance, the problem of finding a closest substring is still NP-hard. We investigate the combinatorial property of optimization problems called self-reducibility. We show that problems that are polynomial-time reducible to self-reducible problems also exhibits the same property. We illustrate this in the context of CSP. We used the property to show that although we cannot improve the approximability of the problem, we can improve the running time of the existing PTAS for CSP.
Due to the hardness results presented in @cite_12 , several other efforts were made to identify the types of input instances for which the problem becomes easier to solve. A result in @cite_7 shows that even if the set of input instances is defined over the binary alphabet, we still cannot obtain a practical polynomial-time algorithm for small error bounds. Aside from characterizing input instances, one line of research focused on the parameterized complexity of the problem @cite_18 . Based on these studies, it is shown that the problem is fixed-parameter intractable, i.e., fixing a parameter such as the pattern length or alphabet size will not make the problem easier to solve.
{ "cite_N": [ "@cite_18", "@cite_7", "@cite_12" ], "mid": [ "", "2148893137", "2115667895" ], "abstract": [ "", "Many algorithms for motif finding that are commonly used in bioinformatics start by sampling r potential motif occurrences from n input sequences. The motif is derived from these samples and evaluated on all sequences. This approach works extremely well in practice, and is implemented by several programs. Li, Ma and Wang have shown that a simple algorithm of this sort is a polynomial-time approximation scheme. However, in 2005, we showed specific instances of the motif finding problem for which the approximation ratio of a slight variation of this scheme converges to one very slowly as a function of the sample size r, which seemingly contradicts the high performance of sample-based algorithms. Here, we account for the difference by showing that, for a variety of different definitions of “strong” binary motifs, the approximation ratio of sample-based algorithms converges to one exponentially fast in r. We also describe “very strong” motifs, for which the simple sample-based approach always identifies the correct motif, even for modest values of r.", "The longest common subsequence problem is examined from the point of view of parameterized computational complexity. There are several different ways in which parameters enter the problem, such as the number of sequences to be analyzed, the length of the common subsequence, and the size of the alphabet. Lower bounds on the complexity of this basic problem imply lower bounds on a number of other sequence alignment and consensus problems. An issue in the theory of parameterized complexity is whether a problem which takes input (x, k) can be solved in time ƒ(k) · nα where α is independent of k (termed fixed-parameter tractability). It can be argued that this is the appropriate asymptotic model of feasible computability for problems for which a small range of parameter values covers important applications — a situation which certainly holds for many problems in biological sequence analysis. Our main results show that: 1. (1) The longest common subsequence (LCS) parameterized by the number of sequences to be analyzed is hard for W[t] for all t. 2. (2) The LCS problem, parameterized by the length of the common subsequence, belongs to W[P] and is hard for W[2]. 3. (3) The LCS problem parameterized both by the number of sequences and the length of the common subsequence, is complete for W[1]. All of the above results are obtained for unrestricted alphabet sizes. For alphabets of a fixed size, problems (2) and (3) are fixed-parameter tractable. We conjecture that (1) remains hard." ] }
1603.02063
2294769644
Efficient processing of aggregated range queries on two-dimensional grids is a common requirement in information retrieval and data mining systems, for example in Geographic Information Systems and OLAP cubes. We introduce a technique to represent grids supporting aggregated range queries that requires little space when the data points in the grid are clustered, which is common in practice. We show how this general technique can be used to support two important types of aggregated queries, which are ranked range queries and counting range queries. Our experimental evaluation shows that this technique can speed up aggregated queries up to more than an order of magnitude, with a small space overhead. Highlights: Space-efficient representation for two-dimensional grids. Efficient support for aggregated range queries. Proved performance in main memory. Results competitive with the state of the art. Applications to several domains: Geographic Information Systems, OLAP cubes, etc.
@cite_12 introduce compact data structures for various queries on two-dimensional weighted points, including range top- @math queries and range counting queries. Their solutions are based on wavelet trees. For range top- @math queries, the bitmap of each node of the wavelet tree is enhanced as follows: let @math be the points represented at a node, and @math be the weight of point @math . Then, an RMQ data structure built on @math is stored together with the bitmap. Such a structure uses @math bits and finds the position of the maximum weight in any range @math in constant time @cite_30 , without accessing the weights themselves. Therefore, the total space becomes @math bits.
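The succinct constant-time RMQ of @cite_30 is intricate; as a stand-in with the same interface, the sketch below uses a classic sparse table, which answers range-maximum queries in constant time at the cost of O(n log n) space and, like the succinct structure, returns only the position of the maximum.

```python
# Sparse-table RMQ: O(n log n) preprocessing, O(1) queries.
# Returns the *position* of the maximum, so query code never needs
# to touch the weights themselves (mirroring the interface above).
def build_rmq(weights):
    n = len(weights)
    table = [list(range(n))]  # level 0: intervals of length 1
    k = 1
    while (1 << k) <= n:
        prev, cur = table[-1], []
        for i in range(n - (1 << k) + 1):
            a, b = prev[i], prev[i + (1 << (k - 1))]
            cur.append(a if weights[a] >= weights[b] else b)
        table.append(cur)
        k += 1
    return table

def rmq(weights, table, lo, hi):
    # Position of the maximum weight in weights[lo..hi] (inclusive).
    k = (hi - lo + 1).bit_length() - 1
    a, b = table[k][lo], table[k][hi - (1 << k) + 1]
    return a if weights[a] >= weights[b] else b

w = [3, 1, 4, 1, 5, 9, 2, 6]
t = build_rmq(w)
print(rmq(w, t, 2, 6))  # -> 5, the position of weight 9
```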
{ "cite_N": [ "@cite_30", "@cite_12" ], "mid": [ "2007791040", "2080990114" ], "abstract": [ "Given a static array of @math totally ordered objects, the range minimum query problem is to build a data structure that allows us to answer efficiently subsequent on-line queries of the form “what is the position of a minimum element in the subarray ranging from @math to @math ?”. We focus on two settings, where (1) the input array is available at query time, and (2) the input array is available only at construction time. In setting (1), we show new data structures (a) of size @math bits and query time @math for any positive integer function @math for an arbitrary constant @math , or (b) with @math bits and @math query time, where @math denotes the empirical entropy of @math th order of the input array. In setting (2), we give a data structure of size @math bits and query time @math . All data structures can be constructed in linear time and almost in-place.", "We consider various data-analysis queries on two-dimensional points. We give new space time tradeoffs over previous work on geometric queries such as dominance and rectangle visibility, and on semigroup and group queries such as sum, average, variance, minimum and maximum. We also introduce new solutions to queries less frequently considered in the literature such as two-dimensional quantiles, majorities, successor predecessor, mode, and various top-k queries, considering static and dynamic scenarios." ] }
1603.02063
2294769644
Efficient processing of aggregated range queries on two-dimensional grids is a common requirement in information retrieval and data mining systems, for example in Geographic Information Systems and OLAP cubes. We introduce a technique to represent grids supporting aggregated range queries that requires little space when the data points in the grid are clustered, which is common in practice. We show how this general technique can be used to support two important types of aggregated queries, which are ranked range queries and counting range queries. Our experimental evaluation shows that this technique can speed up aggregated queries up to more than an order of magnitude, with a small space overhead. Highlights: Space-efficient representation for two-dimensional grids. Efficient support for aggregated range queries. Proved performance in main memory. Results competitive with the state of the art. Applications to several domains: Geographic Information Systems, OLAP cubes, etc.
To solve top- @math queries on a grid range @math , we first traverse the wavelet tree to identify the @math bitmap intervals where the points in @math lie. The heaviest point within each bitmap interval is obtained with an RMQ, but we then need the actual priorities in order to find the heaviest among the @math candidates. The priorities are stored sorted by @math - or @math -coordinate, so we obtain each one in @math time by tracking the point with maximum weight in each interval. Thus a top-1 query is solved in @math time. For a top- @math query, we must maintain a priority queue of the candidate intervals and, each time the next heaviest element is found, remove it from its interval and reinsert the two resulting subintervals into the queue (see the sketch below). The total query time is @math . It is possible to reduce the time to @math using @math bits, for any constant @math @cite_24 , but the space usage is much higher, even if linear.
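A minimal sketch of that priority-queue procedure, reusing build_rmq and rmq from the sparse-table sketch above over a single array of priorities (in the real structure, each of the identified bitmap intervals carries its own succinct RMQ):

```python
import heapq

def top_k(weights, table, lo, hi, k):
    # Report positions of the k heaviest entries in weights[lo..hi]:
    # pop the current maximum, then split its interval in two.
    out = []
    best = rmq(weights, table, lo, hi)
    heap = [(-weights[best], best, lo, hi)]  # max-heap via negation
    while heap and len(out) < k:
        _, pos, a, b = heapq.heappop(heap)
        out.append(pos)
        for x, y in ((a, pos - 1), (pos + 1, b)):  # the two subintervals
            if x <= y:
                m = rmq(weights, table, x, y)
                heapq.heappush(heap, (-weights[m], m, x, y))
    return out

w = [3, 1, 4, 1, 5, 9, 2, 6]
print(top_k(w, build_rmq(w), 0, 7, 3))  # -> [5, 7, 4] (weights 9, 6, 5)
```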
{ "cite_N": [ "@cite_24" ], "mid": [ "92500321" ], "abstract": [ "We describe a data structure that uses O(n)-word space and reports k most relevant documents that contain a query pattern P in optimal O(|P| + k) time. Our construction supports an ample set of important relevance measures, such as the frequency of P in a document and the minimal distance between two occurrences of P in a document. We show how to reduce the space of the data structure from O(n log n) to O(n (log σ + log D + log log n)) bits, where σ is the alphabet size and D is the total number of documents." ] }
1603.02063
2294769644
Efficient processing of aggregated range queries on two-dimensional grids is a common requirement in information retrieval and data mining systems, for example in Geographic Information Systems and OLAP cubes. We introduce a technique to represent grids supporting aggregated range queries that requires little space when the data points in the grid are clustered, which is common in practice. We show how this general technique can be used to support two important types of aggregated queries, which are ranked range queries and counting range queries. Our experimental evaluation shows that this technique can speed up aggregated queries up to more than an order of magnitude, with a small space overhead. Highlights: Space-efficient representation for two-dimensional grids. Efficient support for aggregated range queries. Proved performance in main memory. Results competitive with the state of the art. Applications to several domains: Geographic Information Systems, OLAP cubes, etc.
A better result, using multi-ary wavelet trees, was introduced by @cite_15 . They match the optimal @math time using just @math bits on an @math grid. @cite_23 extended the results to @math grids. This query time is optimal within space @math @cite_0 .
{ "cite_N": [ "@cite_0", "@cite_15", "@cite_23" ], "mid": [ "2065240689", "1495731517", "2116258248" ], "abstract": [ "Proving lower bounds for range queries has been an active topic of research since the late 70s, but so far nearly all results have been limited to the (rather restrictive) semigroup model. We consider one of the most basic range problem, orthogonal range counting in two dimensions, and show almost optimal bounds in the group model and the (holy grail) cell-probe model. Specifically, we show the following bounds, which were known in the semigroup model, but are major improvements in the more general models:* In the group and cell-probe models, a static data structure of size n lgO(1) n requires Omega(lg n lglg n) time per query. This is an exponential improvement over previous bounds, and matches known upper bounds.* In the group model, a dynamic data structure takes time Omega((lg n lglg n)2) per operation. This is close to the O(lg2 n) upper bound, where as the previous lower bound was Omega(lg n). Proving such (static and dynamic) bounds in the group model has been regarded as an important challenge at least since [Fredman, JACM 1982] and [Chazelle, FOCS 1986].", "We present a succinct representation of a set of n points on an n ×n grid using @math bits to support orthogonal range counting in @math time, and range reporting in @math time, where k is the size of the output. This achieves an improvement on query time by a factor of @math upon the previous result of Makinen and Navarro [1], while using essentially the information-theoretic minimum space. Our data structure not only can be used as a key component in solutions to the general orthogonal range search problem to save storage cost, but also has applications in text indexing. In particular, we apply it to improve two previous space-efficient text indexes that support substring search [2] and position-restricted substring search [1]. We also use it to extend previous results on succinct representations of sequences of small integers, and to design succinct data structures supporting certain types of orthogonal range query in the plane.", "Binary relations are an important abstraction arising in many data representation problems. The data structures proposed so far to represent them support just a few basic operations required to fit one particular application. We identify many of those operations arising in applications and generalize them into a wide set of desirable queries for a binary relation representation. We also identify reductions among those operations. We then introduce several novel binary relation representations, some simple and some quite sophisticated, that not only are space-efficient but also efficiently support a large subset of the desired queries." ] }
1603.01987
2951086924
Automatic quality evaluation of Web information is a task with many fields of applications and of great relevance, especially in critical domains like the medical one. We move from the intuition that the quality of content of medical Web documents is affected by features related with the specific domain. First, the usage of a specific vocabulary (Domain Informativeness); then, the adoption of specific codes (like those used in the infoboxes of Wikipedia articles) and the type of document (e.g., historical and technical ones). In this paper, we propose to leverage specific domain features to improve the results of the evaluation of Wikipedia medical articles. In particular, we evaluate the articles adopting an "actionable" model, whose features are related to the content of the articles, so that the model can also directly suggest strategies for improving a given article quality. We rely on Natural Language Processing (NLP) and dictionaries-based techniques in order to extract the bio-medical concepts in a text. We prove the effectiveness of our approach by classifying the medical articles of the Wikipedia Medicine Portal, which have been previously manually labeled by the Wiki Project team. The results of our experiments confirm that, by considering domain-oriented features, it is possible to obtain sensible improvements with respect to existing solutions, mainly for those articles that other approaches have less correctly classified. Other than being interesting by their own, the results call for further research in the area of domain specific features suitable for Web data quality assessment.
Automatic quality evaluation of Wikipedia articles has been addressed in previous work with both unsupervised and supervised learning approaches. The common idea of most existing work is to identify a feature set, taking the Wikipedia project guidelines as a starting point, with the objective of distinguishing Featured Articles. In @cite_13 , Stvilia et al. identify a relevant set of features, including lingual, structural, historical and reputational aspects of each article. They show the effectiveness of their metrics by applying both clustering and classification; as a result, more than 90% of the articles are correctly identified.
{ "cite_N": [ "@cite_13" ], "mid": [ "9825390" ], "abstract": [ "Effective information quality analysis needs powerful yet easy ways to obtain metrics. The English version of Wikipedia provides an extremely interesting yet challenging case for the study of Information Quality dynamics at both macro and micro levels. We propose seven IQ metrics which can be evaluated automatically and test the set on a representative sample of Wikipedia content. The methodology of the metrics construction and the results of tests, along with a number of statistical characterizations of Wikipedia articles, their content construction, process metadata and social context are reported." ] }
1603.01987
2951086924
Automatic quality evaluation of Web information is a task with many fields of applications and of great relevance, especially in critical domains like the medical one. We move from the intuition that the quality of content of medical Web documents is affected by features related with the specific domain. First, the usage of a specific vocabulary (Domain Informativeness); then, the adoption of specific codes (like those used in the infoboxes of Wikipedia articles) and the type of document (e.g., historical and technical ones). In this paper, we propose to leverage specific domain features to improve the results of the evaluation of Wikipedia medical articles. In particular, we evaluate the articles adopting an "actionable" model, whose features are related to the content of the articles, so that the model can also directly suggest strategies for improving a given article quality. We rely on Natural Language Processing (NLP) and dictionaries-based techniques in order to extract the bio-medical concepts in a text. We prove the effectiveness of our approach by classifying the medical articles of the Wikipedia Medicine Portal, which have been previously manually labeled by the Wiki Project team. The results of our experiments confirm that, by considering domain-oriented features, it is possible to obtain sensible improvements with respect to existing solutions, mainly for those articles that other approaches have less correctly classified. Other than being interesting by their own, the results call for further research in the area of domain specific features suitable for Web data quality assessment.
Blumenstock @cite_27 inspects the relevance of the word count feature at each quality stage, showing that it can play a very important role in the quality assessment of Wikipedia articles. Using only this feature, the author achieves an F-measure of 0.902 in the task of classifying featured articles and of 0.983 in the task of classifying non-featured articles. The best results of the investigation are achieved with a classifier based on a neural network implemented as a multi-layer perceptron.
{ "cite_N": [ "@cite_27" ], "mid": [ "2104290389" ], "abstract": [ "Wikipedia, \"the free encyclopedia\", now contains over two million English articles, and is widely regarded as a high-quality, authoritative encyclopedia. Some Wikipedia articles, however, are of questionable quality, and it is not always apparent to the visitor which articles are good and which are bad. We propose a simple metric -- word count -- for measuring article quality. In spite of its striking simplicity, we show that this metric significantly outperforms the more complex methods described in related work." ] }
1603.01987
2951086924
Automatic quality evaluation of Web information is a task with many fields of applications and of great relevance, especially in critical domains like the medical one. We move from the intuition that the quality of content of medical Web documents is affected by features related with the specific domain. First, the usage of a specific vocabulary (Domain Informativeness); then, the adoption of specific codes (like those used in the infoboxes of Wikipedia articles) and the type of document (e.g., historical and technical ones). In this paper, we propose to leverage specific domain features to improve the results of the evaluation of Wikipedia medical articles. In particular, we evaluate the articles adopting an "actionable" model, whose features are related to the content of the articles, so that the model can also directly suggest strategies for improving a given article quality. We rely on Natural Language Processing (NLP) and dictionaries-based techniques in order to extract the bio-medical concepts in a text. We prove the effectiveness of our approach by classifying the medical articles of the Wikipedia Medicine Portal, which have been previously manually labeled by the Wiki Project team. The results of our experiments confirm that, by considering domain-oriented features, it is possible to obtain sensible improvements with respect to existing solutions, mainly for those articles that other approaches have less correctly classified. Other than being interesting by their own, the results call for further research in the area of domain specific features suitable for Web data quality assessment.
In @cite_7 , the authors deal with the problem of discriminating between two large classes, GoodEnough and not-GoodEnough articles (including in GoodEnough both GA and FA), in order to identify which articles need further revisions before being featured. They also introduce new composite features, those that we have referred to as an "actionable model". They obtain good classification results, with an F-measure of 0.876 in their best configuration. They also try classification over all seven quality classes, as done in this work, using a random forest classifier with 100 trees and a reduced set of features; a sketch of this setup is given below. The poor results (an average F-measure of 0.425) highlight the hardness of this fine-grained classification. In this paper, we address this last task in a novel way, by introducing domain features specially dealing with the medical domain. The results of the investigation are promising.
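A minimal sketch of that seven-class baseline with scikit-learn: a random forest with 100 trees evaluated with a macro-averaged F-measure. The feature matrix and quality labels here are random placeholders standing in for the article features and the seven Wikipedia quality classes.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((700, 10))         # placeholder article feature vectors
y = rng.integers(0, 7, size=700)  # placeholder labels for 7 quality classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("macro F1:", f1_score(y_te, clf.predict(X_te), average="macro"))
```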
{ "cite_N": [ "@cite_7" ], "mid": [ "2065167558" ], "abstract": [ "In this paper we address the problem of developing actionable quality models for Wikipedia, models whose features directly suggest strategies for improving the quality of a given article. We first survey the literature in order to understand the notion of article quality in the context of Wikipedia and existing approaches to automatically assess article quality. We then develop classification models with varying combinations of more or less actionable features, and find that a model that only contains clearly actionable features delivers solid performance. Lastly we discuss the implications of these results in terms of how they can help improve the quality of articles across Wikipedia." ] }
1603.02297
2295543477
We present TTC, an open-source parallel compiler for multidimensional tensor transpositions. In order to generate high-performance C++ code, TTC explores a number of optimizations, including software prefetching, blocking, loop-reordering, and explicit vectorization. To evaluate the performance of multidimensional transpositions across a range of possible use-cases, we also release a benchmark covering arbitrary transpositions of up to six dimensions. Performance results show that the routines generated by TTC achieve close to peak memory bandwidth on both the Intel Haswell and the AMD Steamroller architectures, and yield significant performance gains over modern compilers. By implementing a set of pruning heuristics, TTC allows users to limit the number of potential solutions; this option is especially useful when dealing with high-dimensional tensors, as the search space might become prohibitively large. Experiments indicate that when only 100 potential solutions are considered, the resulting performance is about 99% of that achieved with exhaustive search.
As early as 1995, @cite_14 realized that search is necessary for high-performance 2D transpositions. Their code generator explored the optimization space in an exhaustive fashion.
{ "cite_N": [ "@cite_14" ], "mid": [ "2017345827" ], "abstract": [ "Computationally intensive algorithms must usually be restructured to make the best use of cache memory in current high-performance, hierarchical memory computers. Unfortunately, cache conscious algorithms are sensitive to object sizes and addresses as well as the details of the cache and translation lookaside buffer geometries, and this sensitivity makes both automatic restructuring and hand-turning difficult tasks. An optimization approach is presented in this paper that automatically generates and executes a benchmark program from a concise specification of the algorithm's structure. This technique provides the performance data needed for verification of code generation heuristics or search among the various restructuring options. Matrix transpose and matrix multiplication are examined using this approach for several workstations with restructuring options of loop order, tiling (blocking), and unrolling." ] }
1603.02297
2295543477
We present TTC, an open-source parallel compiler for multidimensional tensor transpositions. In order to generate high-performance C++ code, TTC explores a number of optimizations, including software prefetching, blocking, loop-reordering, and explicit vectorization. To evaluate the performance of multidimensional transpositions across a range of possible use-cases, we also release a benchmark covering arbitrary transpositions of up to six dimensions. Performance results show that the routines generated by TTC achieve close to peak memory bandwidth on both the Intel Haswell and the AMD Steamroller architectures, and yield significant performance gains over modern compilers. By implementing a set of pruning heuristics, TTC allows users to limit the number of potential solutions; this option is especially useful when dealing with high-dimensional tensors, as the search space might become prohibitively large. Experiments indicate that when only 100 potential solutions are considered, the resulting performance is about 99% of that achieved with exhaustive search.
@cite_15 developed a cache model for IBM's POWER7 processor. Their optimizations include blocking, prefetching and data alignment to avoid conflict misses. They also illustrate the effect of large TLB page sizes on performance. (The translation lookaside buffer, or TLB, serves as a cache for the expensive virtual-to-physical memory address translation required to convert software memory addresses to hardware memory addresses.)
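Why page size matters can be seen from a back-of-envelope TLB-reach calculation; the entry count and page sizes below are illustrative assumptions, not POWER7's actual parameters.

```python
# TLB reach = entries x page size: the span of memory whose address
# translations fit in the TLB. A column-wise walk of a large row-major
# matrix touches one page per row, so a small reach forces TLB misses.
entries = 64  # hypothetical number of TLB entries
for page in (4 << 10, 64 << 10, 16 << 20):  # 4 KiB, 64 KiB, 16 MiB
    reach = entries * page
    print(f"page {page >> 10:>6} KiB -> TLB reach {reach / (1 << 20):8.2f} MiB")
```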
{ "cite_N": [ "@cite_15" ], "mid": [ "2176391467" ], "abstract": [ "We consider the problem of efficiently computing matrix transposes on the POWER7 architecture. We develop a matrix transpose algorithm that uses cache blocking, cache prefetching and data alignment. We model the POWER7 data cache and memory concurrency and use the model to predict the memory throughput of the proposed matrix transpose algorithm. The performance of our matrix transpose algorithm is up to five times higher than that of the dgetmo routine of the Engineering and Scientific Subroutine Library and is 2.5 times higher than that of the code generated by compiler-inserted prefetching. Numerical experiments indicate a good agreement between the predicted and the measured memory throughput." ] }
1603.02297
2295543477
We present TTC, an open-source parallel compiler for multidimensional tensor transpositions. In order to generate high-performance C++ code, TTC explores a number of optimizations, including software prefetching, blocking, loop-reordering, and explicit vectorization. To evaluate the performance of multidimensional transpositions across a range of possible use-cases, we also release a benchmark covering arbitrary transpositions of up to six dimensions. Performance results show that the routines generated by TTC achieve close to peak memory bandwidth on both the Intel Haswell and the AMD Steamroller architectures, and yield significant performance gains over modern compilers. By implementing a set of pruning heuristics, TTC allows users to limit the number of potential solutions; this option is especially useful when dealing with high-dimensional tensors, as the search space might become prohibitively large. Experiments indicate that when only 100 potential solutions are considered, the resulting performance is about 99% of that achieved with exhaustive search.
@cite_10 developed a code generator for 2D transpositions using both an analytical model and search. They carried out extensive work covering vectorization and blocking for both the L1 cache and the TLB, while parallelization was not explored. The sketch below conveys the flavor of such a search.
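The following sketch combines blocking with empirical search: it times a tiled transpose for several block sizes and keeps the fastest. It is only illustrative; TTC and the work above generate and tune low-level C/C++ code rather than NumPy.

```python
import time
import numpy as np

def blocked_transpose(A, B, bs):
    # Tiled 2D transpose: each bs-by-bs block is moved as a unit so the
    # working set of a block fits in cache (and few pages are touched).
    n, m = A.shape
    for i in range(0, n, bs):
        for j in range(0, m, bs):
            B[j:j + bs, i:i + bs] = A[i:i + bs, j:j + bs].T

A = np.random.rand(2048, 2048)
B = np.empty((2048, 2048))
best = None
for bs in (16, 32, 64, 128, 256):  # tiny empirical search space
    t0 = time.perf_counter()
    blocked_transpose(A, B, bs)
    dt = time.perf_counter() - t0
    if best is None or dt < best[1]:
        best = (bs, dt)
assert np.array_equal(B, A.T)  # sanity check on the last run
print("fastest block size:", best)
```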
{ "cite_N": [ "@cite_10" ], "mid": [ "2038945443" ], "abstract": [ "Matrix transposition is an important kernel used in many applications. Even though its optimization has been the subject of many studies, an optimization procedure that targets the characteristics of current processor architectures has not been developed. In this paper, we develop an integrated optimization framework that addresses a number of issues, including tiling for the memory hierarchy, effective handling of memory misalignment, utilizing memory subsystem characteristics, and the exploitation of the parallelism provided by the vector instruction sets in current processors. A judicious combination of analytical and empirical approaches is used to determine the most appropriate optimizations. The absence of problem information until execution time is handled by generating multiple versions of the code - the best version is chosen at runtime, with assistance from minimal-overhead inspectors. The approach highlights aspects of empirical optimization that are important for similar computations with little temporal reuse. Experimental results on PowerPC G5 and Intel Pentium 4 demonstrate the effectiveness of the developed framework." ] }
1603.01973
2294464016
Demographics, in particular, gender, age, and race, are a key predictor of human behavior. Despite the significant effect that demographics plays, most scientific studies using online social media do not consider this factor, mainly due to the lack of such information. In this work, we use state-of-the-art face analysis software to infer gender, age, and race from profile images of 350K Twitter users from New York. For the period from November 1, 2014 to October 31, 2015, we study which hashtags are used by different demographic groups. Though we find considerable overlap for the most popular hashtags, there are also many group-specific hashtags.
Hashtags allow users to self-categorize their messages and to join a virtual conversation on a given topic. Users can search for tweets with a particular hashtag to learn about recent events on a topic of their choice. Hashtags are also frequently used in scientific studies as they are easier to obtain and handle than, say, LDA topics. A recent study on classifying hashtags and inferring semantic similarity can be found in @cite_4 .
{ "cite_N": [ "@cite_4" ], "mid": [ "2280557596" ], "abstract": [ "Hashtags, originally introduced in Twitter, are now becoming the most used way to tag short messages in social networks since this facilitates subsequent search, classification and clustering over those messages. However, extracting information from hashtags is difficult because their composition is not constrained by any (linguistic) rule and they usually appear in short and poorly written messages which are difficult to analyze with classic IR techniques. In this paper we address two challenging problems regarding the meaning of hashtags — namely, hashtag relatedness and hashtag classification - and we provide two main contributions. First we build a novel graph upon hashtags and (Wikipedia) entities drawn from the tweets by means of topic annotators (such as TagME); this graph will allow us to model in an efficacious way not only classic co-occurrences but also semantic relatedness among hashtags and entities, or between entities themselves. Based on this graph, we design algorithms that significantly improve state-of-the-art results upon known publicly available datasets. The second contribution is the construction and the public release to the research community of two new datasets: the former is a new dataset for hashtag relatedness, the latter is a dataset for hashtag classification that is up to two orders of magnitude larger than the existing ones. These datasets will be used to show the robustness and efficacy of our approaches, showing improvements in F1 up to two-digits in percentage (absolute)." ] }
1603.02175
2301657143
In this paper, we mine and learn to predict how similar a pair of users' interests towards videos are, based on demographic (age, gender and location) and social (friendship, interaction and group membership) information of these users. We use the video access patterns of active users as ground truth (a form of benchmark). We adopt tag-based user profiling to establish this ground truth, and justify why it is used instead of video-based methods, or many latent topic models such as LDA and Collaborative Filtering approaches. We then show the effectiveness of the different demographic and social features, and their combinations and derivatives, in predicting user interest similarity, based on different machine-learning methods for combining multiple features. We propose a hybrid tree-encoded linear model for combining the features, and show that it outperforms other linear and tree-based models. Our methods can be used to predict user interest similarity when the ground truth is not available, e.g. for new users, or inactive users whose interests may have changed from old access data, and are useful for video recommendation. Our study is based on a rich dataset from Tencent, a popular service provider of social networks, video services, and various other services in China.
With the popularity of OSNs, a better understanding of how much two individuals are alike in their interests, namely, interest similarity, will benefit various applications in OSNs. For example, information about interest similarity can be leveraged to improve friend recommendation, based on the observation that like-minded users are more likely to become friends @cite_1 . Moreover, targeted online advertising can also benefit, because it allows us to largely expand the pool of potential clients by finding users who are similar to the existing clients @cite_11 . In this paper, we apply the inferred interest similarity to video recommendation, where collaborative recommender systems identify users who are similar to a target user in order to recommend items to her.
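As an editorial aside, tag-based interest similarity of the kind described above is, at its simplest, a cosine similarity between tag-weight profiles. The sketch below is a minimal illustration under that assumption; the function name, weighting scheme, and example tags are hypothetical and are not taken from the paper.

```python
import numpy as np

def tag_profile_similarity(profile_a, profile_b, vocab):
    """Toy tag-based interest similarity: each user profile is a
    tag -> weight dict, and similarity is the cosine between the
    induced vectors over a shared tag vocabulary."""
    a = np.array([profile_a.get(t, 0.0) for t in vocab])
    b = np.array([profile_b.get(t, 0.0) for t in vocab])
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

print(tag_profile_similarity({"comedy": 3, "sports": 1},
                             {"comedy": 1, "news": 2},
                             ["comedy", "sports", "news"]))
```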
{ "cite_N": [ "@cite_1", "@cite_11" ], "mid": [ "1969667322", "1909015" ], "abstract": [ "Disentangling the effects of selection and influence is one of social science's greatest unsolved puzzles: Do people befriend others who are similar to them, or do they become more similar to their friends over time? Recent advances in stochastic actor-based modeling, combined with self-reported data on a popular online social network site, allow us to address this question with a greater degree of precision than has heretofore been possible. Using data on the Facebook activity of a cohort of college students over 4 years, we find that students who share certain tastes in music and in movies, but not in books, are significantly likely to befriend one another. Meanwhile, we find little evidence for the diffusion of tastes among Facebook friends—except for tastes in classical jazz music. These findings shed light on the mechanisms responsible for observed network homogeneity; provide a statistically rigorous assessment of the coevolution of cultural tastes and social relationships; and suggest important qualifications to our understanding of both homophily and contagion as generic social processes.", "When you amplify your own content, you are able to activate fans that may have joined the page a long time ago, but haven’t heard from you since they originally liked the page. Plus, you’ll be able to increase the feedback rate on content (likes + comments)." ] }
1603.01895
2294792789
In the voter model, each node of a graph has an opinion, and in every round each node chooses independently a random neighbour and adopts its opinion. We are interested in the consensus time, which is the first point in time where all nodes have the same opinion. We consider dynamic graphs in which the edges are rewired in every round (by an adversary), giving rise to the graph sequence @math , where we assume that @math has conductance at least @math . We assume that the degrees of nodes do not change over time, as one can show that the consensus time can become super-exponential otherwise. In the case of a sequence of @math -regular graphs, we obtain asymptotically tight results. Even for some static graphs, such as the cycle, our results improve the state of the art. Here we show that the expected number of rounds until all nodes have the same opinion is bounded by @math , for any graph with @math edges, conductance @math , and degrees at least @math . In addition, we consider a biased dynamic voter model, where each opinion @math is associated with a probability @math , and when a node chooses a neighbour with that opinion, it adopts opinion @math with probability @math (otherwise the node keeps its current opinion). We show, for any regular dynamic graph, that if there is an @math difference between the highest and second highest opinion probabilities, and at least @math nodes initially have the opinion with the highest probability, then all nodes adopt that opinion w.h.p. We obtain a bound on the convergence time, which becomes @math for static graphs.
The standard voter model was first analysed in @cite_9 . The authors of @cite_9 bound the expected coalescing time (and thus the expected consensus time) in terms of the expected meeting time @math of two random walks and show a bound of @math . Note that the meeting time is an obvious lower bound on the coalescing time, and thus a lower bound on the consensus time when all nodes have distinct opinions initially. The authors of @cite_8 provide an improved upper bound of @math on the expected coalescing time for any graph @math , where @math is the second eigenvalue of the transition matrix of a random walk on @math , and @math is the ratio of the square of the sum of node degrees over the sum of the squared degrees. The value of @math ranges from @math , for the star graph, to @math , for regular graphs.
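As an illustrative aside, the pull-voting dynamics analysed in @cite_9 and @cite_8 are easy to simulate, which is a useful sanity check on the quoted bounds. The sketch below is a minimal simulation, not code from the cited works; the helper name and the all-distinct initial opinions (the coalescing-time setting) are our choices.

```python
import random

def pull_voting_consensus_time(neighbors, seed=0):
    """Synchronous pull voting: in each round, every node adopts the
    opinion of a uniformly random neighbour. `neighbors` maps each node
    to a non-empty list of neighbours. Starts from all-distinct opinions
    and returns the number of rounds until all nodes agree."""
    rng = random.Random(seed)
    opinions = {v: v for v in neighbors}
    rounds = 0
    while len(set(opinions.values())) > 1:
        opinions = {v: opinions[rng.choice(neighbors[v])] for v in neighbors}
        rounds += 1
    return rounds

# Example: the n-node cycle, a static graph where consensus is slow.
n = 20
cycle = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
print(pull_voting_consensus_time(cycle))
```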
{ "cite_N": [ "@cite_9", "@cite_8" ], "mid": [ "2009169450", "2963594409" ], "abstract": [ "This paper considers a probabilistic local polling process, examines its properties, and proposes its use in the context of distributed network protocols for achieving consensus. The resulting consensus algorithm is very simple and lightweight, yet it enjoys some desirable properties, such as proportionate agreement (namely, reaching a consensus value of one with probability proportional to the number of ones in the inputs), resilience against dynamic link failures and recoveries, and (weak) self-stabilization. The paper also investigates the maximum influence of small sets and establishes results analogous to those obtained for the problem in the deterministic polling model. 2001 Elsevier Science", "In a coalescing random walk, a set of particles make independent discrete-time random walks on a graph. Whenever one or more particles meet at a vertex, they unite to form a single particle, which then continues a random walk through the graph. Let @math be an undirected and connected graph with @math vertices and @math edges. The coalescence time, @math , is the expected time for all particles to coalesce, when initially one particle is located at each vertex. We study the problem of bounding the coalescence time for general connected graphs and prove that @math . Here @math is the second eigenvalue of the transition matrix of the random walk. To avoid problems arising from, e.g., lack of coalescence on bipartite graphs, we assume the random walk can be made lazy if required. The value of @math is given by @math , where @math is the degree of vertex @math , and @math is the average degree. The parame..." ] }
1603.01895
2294792789
In the voter model, each node of a graph has an opinion, and in every round each node chooses independently a random neighbour and adopts its opinion. We are interested in the consensus time, which is the first point in time where all nodes have the same opinion. We consider dynamic graphs in which the edges are rewired in every round (by an adversary), giving rise to the graph sequence @math , where we assume that @math has conductance at least @math . We assume that the degrees of nodes do not change over time, as one can show that the consensus time can become super-exponential otherwise. In the case of a sequence of @math -regular graphs, we obtain asymptotically tight results. Even for some static graphs, such as the cycle, our results improve the state of the art. Here we show that the expected number of rounds until all nodes have the same opinion is bounded by @math , for any graph with @math edges, conductance @math , and degrees at least @math . In addition, we consider a biased dynamic voter model, where each opinion @math is associated with a probability @math , and when a node chooses a neighbour with that opinion, it adopts opinion @math with probability @math (otherwise the node keeps its current opinion). We show, for any regular dynamic graph, that if there is an @math difference between the highest and second highest opinion probabilities, and at least @math nodes initially have the opinion with the highest probability, then all nodes adopt that opinion w.h.p. We obtain a bound on the convergence time, which becomes @math for static graphs.
The authors of @cite_4 consider a modification of the standard voter model with two opinions, which they call two-sample voting. In every round, each node chooses two of its neighbours at random and adopts their opinion only if they both agree. For regular graphs and random regular graphs, it is shown that two-sample voting has a consensus time of @math if the initial imbalance between the nodes holding the two opinions is large enough. There are several other works on settings where every node contacts two or more neighbours in every round before updating its opinion @cite_20 @cite_12 @cite_1 @cite_21 .
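For concreteness, the two-sample rule of @cite_4 changes only the per-node update in the previous sketch: a node adopts a sampled opinion only when two independent samples agree. The following is a hedged sketch of that rule under our reading of the model (sampling with replacement); it is not the authors' code.

```python
import random

def two_sample_voting(neighbors, opinions, max_rounds, seed=0):
    """Each round, every node draws two random neighbours (with
    replacement) and adopts their opinion only if both agree;
    otherwise it keeps its own. Stops early on consensus."""
    rng = random.Random(seed)
    opinions = dict(opinions)
    for _ in range(max_rounds):
        nxt = {}
        for v, nbrs in neighbors.items():
            a, b = rng.choice(nbrs), rng.choice(nbrs)
            nxt[v] = opinions[a] if opinions[a] == opinions[b] else opinions[v]
        opinions = nxt
        if len(set(opinions.values())) == 1:
            break
    return opinions

# Example: a biased two-opinion start on a 20-node cycle.
n = 20
cycle = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
start = {i: "A" if i < 15 else "B" for i in range(n)}
print(two_sample_voting(cycle, start, max_rounds=1000))
```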
{ "cite_N": [ "@cite_4", "@cite_21", "@cite_1", "@cite_20", "@cite_12" ], "mid": [ "283401411", "", "2237112965", "2121696574", "1981563978" ], "abstract": [ "Distributed voting is a fundamental topic in distributed computing. In pull voting, in each step every vertex chooses a neighbour uniformly at random, and adopts its opinion. The voting is completed when all vertices hold the same opinion. On many graph classes including regular graphs, pull voting requires Ω(n) expected steps to complete, even if initially there are only two distinct opinions.", "", "Distributed voting is a fundamental topic in distributed computing. In the standard model of pull voting, at each step every vertex chooses a neighbour uniformly at random and adopts its opinion. The voting is completed when all vertices hold the same opinion. In the simplest case, each vertex initially holds one of two different opinions. This partitions the vertices into arbitrary sets A and B. For many graphs, including regular graphs and irrespective of their expansion properties, if both A and B are sufficiently large sets, then pull voting requires @math expected steps, where n is the number of vertices of the graph. In this paper we consider a related class of voting processes based on sampling two opinions. In the simplest case, every vertex v chooses two random neighbours at each step. If both these neighbours have the same opinion, then v adopts this opinion. Otherwise, v keeps its own opinion. Let G be a connected graph with n vertices and m edges. Let P be the transition matrix of a simple random walk on G with second largest eigenvalue @math . We show that if the initial imbalance in degree between the two opinions satisfies @math , then with high probability voting completes in @math steps, and the opinion with the larger initial degree wins. The condition that @math , or only a bound on the conductance of the graph is known, the sampling process can be modified so that voting still provably completes in @math steps with high probability. The modification uses two sampling based on probing to a fixed depth @math from any vertex. In its most general form our voting process allows vertices to bias their sampling of opinions among their neighbours to achieve a desired outcome. This is done by allocating weights to edges.", "Suppose in a graph G vertices can be either red or blue. Let k be odd. At each time step, each vertex v in G polls k random neighbours and takes the majority colour. If it does not have k neighbours, it simply polls all of them, or all less one if the degree of v is even. We study this protocol on graphs of a given degree sequence, in the following setting: initially each vertex of G is red independently with probability α < 1 2 , and is otherwise blue. We show that if α is sufficiently biased, then with high probability consensus is reached on the initial global majority within O ( log k log k n ) steps if 5 ? k ? d , and O ( log d log d n ) steps if k d . Here, d ? 5 is the effective minimum degree, the smallest integer which occurs ? ( n ) times in the degree sequence. We further show that on such graphs, any local protocol in which a vertex does not change colour if all its neighbours have that same colour, takes time at least ? ( log d log d n ) , with high probability. 
Additionally, we demonstrate how the technique for the above sparse graphs can be applied in a straightforward manner to get bounds for the Erd?s-Renyi random graphs in the connected regime.", "In this paper, we consider lightweight decentralised algorithms for achieving consensus in distributed systems. Each member of a distributed group has a private value from a fixed set consisting of, say, two elements, and the goal is for all members to reach consensus on the majority value. We explore variants of the voter model applied to this problem. In the voter model, each node polls a randomly chosen group member and adopts its value. The process is repeated until consensus is reached. We generalise this so that each member polls a (deterministic or random) number of other group members and changes opinion only if a suitably defined super-majority has a different opinion. We show that this modification greatly speeds up the convergence of the algorithm, as well as substantially reducing the probability of it reaching consensus on the incorrect value." ] }
1603.01895
2294792789
In the voter model, each node of a graph has an opinion, and in every round each node chooses independently a random neighbour and adopts its opinion. We are interested in the consensus time, which is the first point in time where all nodes have the same opinion. We consider dynamic graphs in which the edges are rewired in every round (by an adversary), giving rise to the graph sequence @math , where we assume that @math has conductance at least @math . We assume that the degrees of nodes do not change over time, as one can show that the consensus time can become super-exponential otherwise. In the case of a sequence of @math -regular graphs, we obtain asymptotically tight results. Even for some static graphs, such as the cycle, our results improve the state of the art. Here we show that the expected number of rounds until all nodes have the same opinion is bounded by @math , for any graph with @math edges, conductance @math , and degrees at least @math . In addition, we consider a biased dynamic voter model, where each opinion @math is associated with a probability @math , and when a node chooses a neighbour with that opinion, it adopts opinion @math with probability @math (otherwise the node keeps its current opinion). We show, for any regular dynamic graph, that if there is an @math difference between the highest and second highest opinion probabilities, and at least @math nodes initially have the opinion with the highest probability, then all nodes adopt that opinion w.h.p. We obtain a bound on the convergence time, which becomes @math for static graphs.
There are several other models which are related to the voter model, most notably the Moran process and rumor spreading in the phone call model. In the case of the Moran process, a population resides on the vertices of a graph. The initial population consists of one mutant with fitness @math and the rest of the nodes are non-mutants with fitness 1. In every round, a node is chosen at random with probability proportional to its fitness. This node then reproduces by placing a copy of itself on a randomly chosen neighbour, replacing the individual that was there. The main quantities of interest are the probability that the mutant occupies the whole graph (fixation) or vanishes (extinction), together with the time before either of the two states is reached (absorption time). There are several publications considering the fixation probabilities @cite_14 @cite_10 @cite_7 .
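To make the process concrete, the sketch below simulates one run of the generalized Moran process described above and reports whether the mutant fixates and after how many steps. It is a plain illustration of the definition, not the machinery used in @cite_14 @cite_10 @cite_7 to bound fixation probabilities.

```python
import random

def moran_process(neighbors, r=1.5, mutant=0, seed=0, max_steps=10**6):
    """Generalized Moran process: a reproducing node is chosen with
    probability proportional to fitness (r for mutants, 1 otherwise)
    and copies itself onto a uniformly random neighbour. Returns
    (True, steps) on fixation, (False, steps) on extinction, or
    (None, max_steps) if absorption is not reached in time."""
    rng = random.Random(seed)
    is_mutant = {v: v == mutant for v in neighbors}
    for step in range(1, max_steps + 1):
        nodes = list(neighbors)
        weights = [r if is_mutant[v] else 1.0 for v in nodes]
        parent = rng.choices(nodes, weights=weights, k=1)[0]
        child = rng.choice(neighbors[parent])
        is_mutant[child] = is_mutant[parent]
        kinds = set(is_mutant.values())
        if len(kinds) == 1:
            return kinds.pop(), step
    return None, max_steps

# Example: complete graph on 10 nodes with an advantageous mutant.
n = 10
clique = {v: [u for u in range(n) if u != v] for v in range(n)}
print(moran_process(clique, r=2.0))
```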
{ "cite_N": [ "@cite_14", "@cite_10", "@cite_7" ], "mid": [ "", "2205121374", "1789332325" ], "abstract": [ "", "This work extends what is known so far for a basic model of evolutionary antagonism in undirected networks (graphs). More specifically, this work studies the generalized Moran process, as introduced by Lieberman, Hauert, and Nowak [Nature, 433:312-316, 2005], where the individuals of a population reside on the vertices of an undirected connected graph. The initial population has a single mutant of a fitness value r (typically r>1), residing at some vertex v of the graph, while every other vertex is initially occupied by an individual of fitness 1. At every step of this process, an individual (i.e. vertex) is randomly chosen for reproduction with probability proportional to its fitness, and then it places a copy of itself on a random neighbor, thus replacing the individual that was residing there. The main quantity of interest is the fixation probability, i.e. the probability that eventually the whole graph is occupied by descendants of the mutant. In this work we concentrate on the fixation probability when the mutant is initially on a specific vertex v, thus refining the older notion of which studied the fixation probability when the initial mutant is placed at a random vertex. We then aim at finding graphs that have many \"strong starts\" (or many \"weak starts\") for the mutant. Thus we introduce a parameterized notion of selective amplifiers (resp. selective suppressors) of evolution. We prove the existence of strong selective amplifiers (i.e. for h(n)=Θ(n) vertices v the fixation probability of v is at least @math for a function c(r) that depends only on r), and the existence of quite strong selective suppressors. Regarding the traditional notion of fixation probability from a random start, we provide strong upper and lower bounds: first we demonstrate the non-existence of \"strong universal\" amplifiers, and second we prove the Thermal Theorem which states that for any undirected graph, when the mutant starts at vertex v, the fixation probability at least @math . This theorem (which extends the \"Isothermal Theorem\" of for regular graphs) implies an almost tight lower bound for the usual notion of fixation probability. Our proof techniques are original and are based on new domination arguments which may be of general interest in Markov Processes that are of the general birth-death type.", "The Moran process models the spread of mutations in populations on graphs. We investigate the absorption time of the process, which is the time taken for a mutation introduced at a randomly chosen vertex to either spread to the whole population, or to become extinct. It is known that the expected absorption time for an advantageous mutation is O(n^4) on an n-vertex undirected graph, which allows the behaviour of the process on undirected graphs to be analysed using the Markov chain Monte Carlo method. We show that this does not extend to directed graphs by exhibiting an infinite family of directed graphs for which the expected absorption time is exponential in the number of vertices. However, for regular directed graphs, we show that the expected absorption time is Omega(n log n) and O(n^2). We exhibit families of graphs matching these bounds and give improved bounds for other families of graphs, based on isoperimetric number. Our results are obtained via stochastic dominations which we demonstrate by establishing a coupling in a related continuous-time model. 
The coupling also implies several natural domination results regarding the fixation probability of the original (discrete-time) process, resolving a conjecture of Shakarian, Roos and Johnson." ] }
1603.00968
2291623749
We introduce a novel, simple convolution neural network (CNN) architecture - multi-group norm constraint CNN (MGNC-CNN) that capitalizes on multiple sets of word embeddings for sentence classification. MGNC-CNN extracts features from input embedding sets independently and then joins these at the penultimate layer in the network to form a final feature vector. We then adopt a group regularization strategy that differentially penalizes weights associated with the subcomponents generated from the respective embedding sets. This model is much simpler than comparable alternative architectures and requires substantially less training time. Furthermore, it is flexible in that it does not require input word embeddings to be of the same dimensionality. We show that MGNC-CNN consistently outperforms baseline models.
Prior work has considered combining latent representations of words that capture syntactic and semantic properties @cite_8 , and inducing multi-modal embeddings @cite_2 for general NLP tasks. Recently, a framework was also proposed that combines multiple word embeddings to measure text similarity; however, its focus was not on classification.
{ "cite_N": [ "@cite_2", "@cite_8" ], "mid": [ "2137735870", "2143645432" ], "abstract": [ "Our research aims at building computational models of word meaning that are perceptually grounded. Using computer vision techniques, we build visual and multimodal distributional models and compare them to standard textual models. Our results show that, while visual models with state-of-the-art computer vision techniques perform worse than textual models in general tasks (accounting for semantic relatedness), they are as good or better models of the meaning of words with visual correlates such as color terms, even in a nontrivial task that involves nonliteral uses of such words. Moreover, we show that visual and textual information are tapping on different aspects of meaning, and indeed combining them in multimodal models often improves performance.", "This paper presents a novel method for the computation of word meaning in context. We make use of a factorization model in which words, together with their window-based context words and their dependency relations, are linked to latent dimensions. The factorization model allows us to determine which dimensions are important for a particular context, and adapt the dependency-based feature vector of the word accordingly. The evaluation on a lexical substitution task -- carried out for both English and French -- indicates that our approach is able to reach better results than state-of-the-art methods in lexical substitution, while at the same time providing more accurate meaning representations." ] }
1603.01354
2295030615
State-of-the-art sequence labeling systems traditionally require large amounts of task-specific knowledge in the form of hand-crafted features and data pre-processing. In this paper, we introduce a novel neural network architecture that benefits from both word- and character-level representations automatically, by using a combination of bidirectional LSTM, CNN and CRF. Our system is truly end-to-end, requiring no feature engineering or data pre-processing, thus making it applicable to a wide range of sequence labeling tasks. We evaluate our system on two data sets for two sequence labeling tasks: the Penn Treebank WSJ corpus for part-of-speech (POS) tagging and the CoNLL 2003 corpus for named entity recognition (NER). We obtain state-of-the-art performance on both data sets: 97.55 accuracy for POS tagging and 91.21 F1 for NER.
There are several other neural networks previously proposed for sequence labeling. One is an RNN-CNNs model for German POS tagging; it is similar to the LSTM-CNNs model, with the difference of using a vanilla RNN instead of an LSTM. Another neural architecture employing a CNN to model character-level information is the "CharWNN" architecture @cite_29 , which is inspired by the feed-forward network of @cite_42 . CharWNN obtained near state-of-the-art accuracy on English POS tagging. A similar model has also been applied to Spanish and Portuguese NER @cite_23 . Other work has used a BLSTM to compose character embeddings into word representations, which is similar to the approach taken in "Improved NER for Chinese Social Media with Word Segmentation".
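To illustrate the character-convolution idea shared by these architectures, the toy numpy sketch below embeds characters, slides a width-3 convolution over a word, and max-pools over positions to produce a fixed-size word vector, in the style of CharWNN-like encoders. All parameters here are random and untrained, and the dimensions are arbitrary; a real model learns them jointly with the tagger.

```python
import numpy as np

rng = np.random.default_rng(0)
CHAR_EMB = rng.standard_normal((128, 4))    # char embedding table (ASCII x 4)
FILTERS = rng.standard_normal((5, 3 * 4))   # 5 filters over width-3 windows

def char_word_repr(word):
    """Embed each character, convolve over width-3 character windows
    (with zero padding), and max-pool over positions to get a vector
    whose size (5) is independent of word length."""
    x = CHAR_EMB[[ord(c) % 128 for c in word]]        # (len, 4)
    pad = np.zeros((1, 4))
    x = np.vstack([pad, x, pad])                      # (len + 2, 4)
    convs = np.stack([FILTERS @ x[i:i + 3].reshape(-1)
                      for i in range(len(word))])     # (len, 5)
    return convs.max(axis=0)                          # (5,)

print(char_word_repr("tagging").shape)   # -> (5,)
print(char_word_repr("a").shape)         # same size for any word length
```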
{ "cite_N": [ "@cite_29", "@cite_42", "@cite_23" ], "mid": [ "2101609803", "2158899491", "1951325712" ], "abstract": [ "Distributed word representations have recently been proven to be an invaluable resource for NLP. These representations are normally learned using neural networks and capture syntactic and semantic information about words. Information about word morphology and shape is normally ignored when learning word representations. However, for tasks like part-of-speech tagging, intra-word information is extremely useful, specially when dealing with morphologically rich languages. In this paper, we propose a deep neural network that learns character-level representation of words and associate them with usual word representations to perform POS tagging. Using the proposed approach, while avoiding the use of any handcrafted feature, we produce state-of-the-art POS taggers for two languages: English, with 97.32 accuracy on the Penn Treebank WSJ corpus; and Portuguese, with 97.47 accuracy on the Mac-Morpho corpus, where the latter represents an error reduction of 12.2 on the best previous known result.", "We propose a unified neural network architecture and learning algorithm that can be applied to various natural language processing tasks including part-of-speech tagging, chunking, named entity recognition, and semantic role labeling. This versatility is achieved by trying to avoid task-specific engineering and therefore disregarding a lot of prior knowledge. Instead of exploiting man-made input features carefully optimized for each task, our system learns internal representations on the basis of vast amounts of mostly unlabeled training data. This work is then used as a basis for building a freely available tagging system with good performance and minimal computational requirements.", "Most state-of-the-art named entity recognition (NER) systems rely on handcrafted features and on the output of other NLP tasks such as part-of-speech (POS) tagging and text chunking. In this work we propose a language-independent NER system that uses automatically learned features only. Our approach is based on the CharWNN deep neural network, which uses word-level and character-level representations (embeddings) to perform sequential classification. We perform an extensive number of experiments using two annotated corpora in two different languages: HAREM I corpus, which contains texts in Portuguese; and the SPA CoNLL-2002 corpus, which contains texts in Spanish. Our experimental results shade light on the contribution of neural character embeddings for NER. Moreover, we demonstrate that the same neural network which has been successfully applied to POS tagging can also achieve state-of-the-art results for language-independet NER, using the same hyperparameters, and without any handcrafted features. For the HAREM I corpus, CharWNN outperforms the state-of-the-art system by 7.9 points in the F1-score for the total scenario (ten NE classes), and by 7.2 points in the F1 for the selective scenario (five NE classes)." ] }
1603.01003
2292019648
We introduce the so-called naive tests and give a brief review of the new developments. Naive testing methods are easy to understand and perform robustly, especially when the dimension is large. We focus mainly on reviewing some naive testing methods for the mean vectors and covariance matrices of high-dimensional populations, and we believe that this naive testing approach can be used widely in many other testing problems.
@cite_10 proposed another statistic, called the regularized Hotelling @math test. The idea is to employ the technique of ridge regression to stabilize the inverse of the sample covariance matrix. Assuming that the underlying distribution is normal, it is proven that under the null hypothesis, for any @math , a suitably standardized version of the statistic is asymptotically normal as @math , where @math and @math . They also give an asymptotic approximation method for selecting the tuning parameter @math in the regularization. Recently, based on a supervised-learning strategy, Shen and Lin @cite_5 proposed a statistic that selects an optimal subset of features to maximize the asymptotic power of the Hotelling @math test.
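As a worked illustration of the ridge idea, the sketch below computes a regularized one-sample Hotelling-type statistic n * (xbar - mu0)' (S + lam*I)^{-1} (xbar - mu0). It deliberately omits the centering and scaling that @cite_10 apply to obtain the asymptotic null distribution, as well as the tuning of lam; those details should be taken from the paper.

```python
import numpy as np

def regularized_hotelling_T2(X, mu0, lam):
    """Ridge-regularized Hotelling-type statistic: the sample
    covariance S may be singular when p >= n, but S + lam*I is
    always invertible for lam > 0."""
    n, p = X.shape
    d = X.mean(axis=0) - mu0
    S = np.cov(X, rowvar=False)
    return n * d @ np.linalg.solve(S + lam * np.eye(p), d)

# High-dimensional toy example where the plain T2 would fail (p > n).
rng = np.random.default_rng(1)
X = rng.standard_normal((50, 80))
print(regularized_hotelling_T2(X, np.zeros(80), lam=1.0))
```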
{ "cite_N": [ "@cite_5", "@cite_10" ], "mid": [ "2120633692", "2022771765" ], "abstract": [ "The problem of testing the mean vector in a high-dimensional setting is considered. Up to date, most high-dimensional tests for the mean vector only make use of the marginal information from the variables, and do not incorporate the correlation information into the test statistics. A new testing procedure is proposed, which makes use of the covariance information between the variables. The new approach is novel in that it can select important variables that contain evidence against the null hypothesis and reduce the impact of noise accumulation. Simulations and real data analysis demonstrate that the new test has higher power than some competing methods proposed in the literature.", "Recent proteomic studies have identified proteins related to specific phenotypes. In addition to marginal association analysis for individual proteins, analyzing pathways (functionally related sets of proteins) may yield additional valuable insights. Identifying pathways that differ between phenotypes can be conceptualized as a multivariate hypothesis testing problem: whether the mean vector μ of a p-dimensional random vector X is μ0. Proteins within the same biological pathway may correlate with one another in a complicated way, and Type I error rates can be inflated if such correlations are incorrectly assumed to be absent. The inflation tends to be more pronounced when the sample size is very small or there is a large amount of missingness in the data, as is frequently the case in proteomic discovery studies. To tackle these challenges, we propose a regularized Hotelling’s T2 (RHT) statistic together with a nonparametric testing procedure, which effectively controls the Type I error rate and maintains ..." ] }
1603.01003
2292019648
We introduce the so-called naive tests and give a brief review of the new developments. Naive testing methods are easy to understand and perform robustly, especially when the dimension is large. We focus mainly on reviewing some naive testing methods for the mean vectors and covariance matrices of high-dimensional populations, and we believe that this naive testing approach can be used widely in many other testing problems.
The random projection approach was first proposed in @cite_40 and was further discussed in later studies @cite_39 @cite_34 @cite_18 @cite_41 . For Gaussian data, the procedure projects high-dimensional data onto random subspaces of relatively low dimension so that the traditional Hotelling @math statistic works well. This method can be viewed as a two-step procedure. First, a single random projection is drawn and used to map the samples from the high-dimensional space to a low-dimensional space. Second, the Hotelling @math test is applied to the new hypothesis-testing problem in the projected space. A decision is then returned to the original problem by simply rejecting @math whenever the Hotelling test rejects it in the projected space.
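The two-step procedure lends itself to a compact sketch: draw one Gaussian projection, then run the classical two-sample Hotelling test in the projected space via its F transformation. This is a minimal reading of the approach in @cite_40, not its exact construction (the choice of projection distribution and of k is simplified here), and it assumes k < n1 + n2 - 1.

```python
import numpy as np
from scipy import stats

def random_projection_T2_pvalue(X, Y, k, seed=0):
    """Project both samples to k dimensions with one random matrix,
    then apply the classical two-sample Hotelling T2 test there."""
    rng = np.random.default_rng(seed)
    P = rng.standard_normal((X.shape[1], k))
    Xk, Yk = X @ P, Y @ P
    n1, n2 = len(Xk), len(Yk)
    d = Xk.mean(axis=0) - Yk.mean(axis=0)
    Sp = ((n1 - 1) * np.cov(Xk, rowvar=False)
          + (n2 - 1) * np.cov(Yk, rowvar=False)) / (n1 + n2 - 2)
    T2 = (n1 * n2 / (n1 + n2)) * d @ np.linalg.solve(Sp, d)
    F = T2 * (n1 + n2 - k - 1) / (k * (n1 + n2 - 2))
    return stats.f.sf(F, k, n1 + n2 - k - 1)

# Toy example with p = 200 variables and k = 5 projected dimensions.
rng = np.random.default_rng(2)
X = rng.standard_normal((30, 200))
Y = rng.standard_normal((40, 200)) + 0.3
print(random_projection_T2_pvalue(X, Y, k=5))
```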
{ "cite_N": [ "@cite_18", "@cite_41", "@cite_39", "@cite_40", "@cite_34" ], "mid": [ "", "2205780429", "2050811968", "2950371686", "2951635992" ], "abstract": [ "", "A common problem in modern genetic research is that of comparing the mean vectors of two populations-typically in settings in which the data dimension is larger than the sample size-where Hotelling's test cannot be applied.Recently, a test using random subspaces was proposed, in which the data are randomly projected into several lower-dimensional subspaces, and Hotelling's test is well defined. Superior performance with competing tests was demonstrated when the variables were correlated.Following the research of random subspaces, a modified test was proposed that might make more efficient use of covariance structure at high dimension. Hierarchical clustering is performed first such that highly correlated variables are clustered together. Next, Hotelling's statistics are computed for every cluster-subspace and summed as the new test statistic. High performance was demonstrated via simulations and real data analysis. A two-sample test using hierarchical clustering was proposed.Hotelling's statistics are computed in cluster-subspaces and summed as the statistic.Highly correlated variables take priority for being processed.A cutoff distance is used to restrain the effect of statistical fluctuations.High performance was demonstrated in simulations and real data analysis.", "A common problem in genetics is that of testing whether a set of highly dependent gene expressions differ between two populations, typically in a high-dimensional setting where the data dimension is larger than the sample size. Most high-dimensional tests for the equality of two mean vectors rely on naive diagonal or trace estimators of the covariance matrix, ignoring dependences between variables. A test using random subspaces is proposed, which offers higher power when the variables are dependent and is invariant under linear transformations of the marginal distributions. The p-values for the test are obtained using permutations. The test does not rely on assumptions about normality or the structure of the covariance matrix. It is shown by simulation that the new test has higher power than competing tests in realistic settings motivated by microarray gene expression data. Computational aspects of high-dimensional permutation tests are also discussed and an efficient R implementation of the proposed test is provided.", "We consider the hypothesis testing problem of detecting a shift between the means of two multivariate normal distributions in the high-dimensional setting, allowing for the data dimension p to exceed the sample size n. Specifically, we propose a new test statistic for the two-sample test of means that integrates a random projection with the classical Hotelling T^2 statistic. Working under a high-dimensional framework with (p,n) tending to infinity, we first derive an asymptotic power function for our test, and then provide sufficient conditions for it to achieve greater power than other state-of-the-art tests. Using ROC curves generated from synthetic data, we demonstrate superior performance against competing tests in the parameter regimes anticipated by our theoretical results. 
Lastly, we illustrate an advantage of our procedure's false positive rate with comparisons on high-dimensional gene expression data involving the discrimination of different types of cancer.", "Motivated by the prevalence of high dimensional low sample size datasets in modern statistical applications, we propose a general nonparametric framework, Direction-Projection-Permutation (DiProPerm), for testing high dimensional hypotheses. The method is aimed at rigorous testing of whether lower dimensional visual differences are statistically significant. Theoretical analysis under the non-classical asymptotic regime of dimension going to infinity for fixed sample size reveals that certain natural variations of DiProPerm can have very different behaviors. An empirical power study both confirms the theoretical results and suggests DiProPerm is a powerful test in many settings. Finally DiProPerm is applied to a high dimensional gene expression dataset." ] }
1603.01003
2292019648
We introduce the so-called naive tests and give a brief review of the new developments. Naive testing methods are easy to understand and perform robustly, especially when the dimension is large. We focus mainly on reviewing some naive testing methods for the mean vectors and covariance matrices of high-dimensional populations, and we believe that this naive testing approach can be used widely in many other testing problems.
Some other related work on tests of high-dimensional locations can be found in @cite_12 @cite_47 @cite_33 @cite_43 @cite_52 @cite_37 @cite_16 , which we do not discuss at length in this paper.
{ "cite_N": [ "@cite_37", "@cite_33", "@cite_52", "@cite_43", "@cite_47", "@cite_16", "@cite_12" ], "mid": [ "1560916211", "1892744769", "987533453", "2016547524", "1986818650", "863497847", "1499910526" ], "abstract": [ "This article is concerned with simultaneous tests on linear regression coefficients in high-dimensional settings. When the dimensionality is larger than the sample size, the classic @math -test is not applicable since the sample covariance matrix is not invertible. In order to overcome this issue, both Goeman, Finos and van Houwelingen (2011) and Zhong and Chen (2011) proposed their test procedures after excluding the @math term in @math -statistics. However, both these two test are not invariant under the group of scalar transformations. In order to treat those variables in a fair' way, we proposed a new test statistic and establish its asymptotically normal under certain mild conditions. Simulation studies showed that our test procedure performs very well in many cases.", "Summary The structural information in high-dimensional transposable data allows us to write the data recorded for each subject in a matrix such that both the rows and the columns correspond to variables of interest. One important problem is to test the null hypothesis that the mean matrix has a particular structure without ignoring the dependence structure among and or between the row and column variables. To address this, we develop a generic and computationally inexpensive nonparametric testing procedure to assess the hypothesis that, in each predefined subset of columns (rows), the column (row) mean vector remains constant. In simulation studies, the proposed testing procedure seems to have good performance and, unlike simple practical approaches, it preserves the nominal size and remains powerful even if the row and or column variables are not independent. Finally, we illustrate the use of the proposed methodology via two empirical examples from gene expression microarrays.", "In this article, we propose new multivariate two-sample tests based on nearest neighbor type coincidences. While several existing tests for the multivariate two-sample problem perform poorly for high dimensional data, and many of them are not applicable when the dimension exceeds the sample size, these proposed tests can be conveniently used in the high dimension low sample size (HDLSS) situations. Unlike Schilling (1986) [26] and Henze’s (1988) test based on nearest neighbors, under fairly general conditions, these new tests are found to be consistent in HDLSS asymptotic regime, where the sample size remains fixed and the dimension grows to infinity. Several high dimensional simulated and real data sets are analyzed to compare their empirical performance with some popular two-sample tests available in the literature. We further investigate the behavior of these proposed tests in classical asymptotic regime, where the dimension of the data remains fixed and the sample size tends to infinity. In such cases, they turn out to be asymptotically distribution-free and consistent under general alternatives.", "The Wilcoxon–Mann–Whitney test is a robust competitor of the @math test in the univariate setting. For finite-dimensional multivariate non-Gaussian data, several extensions of the Wilcoxon–Mann–Whitney test have been shown to outperform Hotelling's @math test. 
In this paper, we study a Wilcoxon–Mann–Whitney-type test based on spatial ranks in infinite-dimensional spaces, we investigate its asymptotic properties and compare it with several existing tests. The proposed test is shown to be robust with respect to outliers and to have better power than some competitors for certain distributions with heavy tails. We study its performance using real and simulated data.", "The multivariate two-sample testing problem has been well investigated in the literature, and several parametric and nonparametric methods are available for it. However, most of these two-sample tests perform poorly for high dimensional data, and many of them are not applicable when the dimension of the data exceeds the sample size. In this article, we propose a multivariate two-sample test that can be conveniently used in the high dimension low sample size setup. Asymptotic results on the power properties of our proposed test are derived when the sample size remains fixed, and the dimension of the data grows to infinity. We investigate the performance of this test on several high-dimensional simulated and real data sets, and demonstrate its superiority over several other existing two-sample tests. We also study some theoretical properties of the proposed test for situations when the dimension of the data remains fixed and the sample size tends to infinity. In such cases, it turns out to be asymptotically distribution-free and consistent under general alternatives.", "We propose a new scalar and shift transform invariant test statistic for the high-dimensional two-sample location problem. Theoretical results and simulation studies show the good performance of our test under certain circumstances.", "We discuss a one-sample location test that can be used in the case of highdimensional data. For high-dimensional data, the power of Hotelling’s test decreases when the dimension is close to the sample size. To address this loss of power, some non-exact approaches were proposed, e.g., Dempster (1958, 1960), Bai and Saranadasa (1996) and Srivastava and Du (2006). In this paper, we focus on Hotelling’s test and Dempster’s test. The comparative merits and demerits of these two tests vary according to the local parameters. In particular, we consider the situation where it is difficult to determine which test should be used, that is, where the two tests are asymptotically equivalent in terms of local power. We propose a new statistic based on the weighted averaging of Hotelling’s T 2 statistic and Dempster’s statistic that can be applied in such a situation. Our weight is determined on the basis of the maximum local asymptotic power on a restricted parameter space that induces local asymptotic equivalence between Hotelling’s test and Dempster’s test. In addition, some good asymptotic properties with respect to the local power are shown. Numerical results show that our test is more stable than Hotelling’s T 2 statistic and Dempster’s statistic in most parameter settings." ] }
1603.01112
2288661800
Control-flow dependence is an intrinsic limiting factor for program acceleration. With the availability of instruction-level parallel architectures, if-conversion optimization has, therefore, become pivotal for extracting parallelism from serial programs. While many if-conversion optimization heuristics have been proposed in the literature, most of them consider rigid criteria regardless of the underlying hardware and input programs. In this paper, we propose a novel if-conversion scheme that performs an efficient if-conversion transformation using a machine learning technique (NEAT). This method enables if-conversion customization over all branches within a program, unlike the literature that considered individual branches. Our technique also provides the flexibility required when compiling for heterogeneous systems. The efficacy of our approach is shown by experiments and reported results, which illustrate that programs can be accelerated on the same architecture without modifying the original code. Our technique applies to general-purpose programming languages (e.g. C/C++) and is transparent to the programmer. We implemented our technique in the LLVM 3.6.1 compilation infrastructure and experimented on the kernels of the SPEC-CPU2006 v1.1 benchmark suite running on a multicore system of Intel(R) Xeon(R) 3.50GHz processors. Our findings show a performance gain of up to 8.6 over the standard optimized code (LLVM -O2 with if-conversion included), indicating the need for if-conversion compilation optimization that can adapt to the unique characteristics of every individual branch.
Static if-conversion depends principally on information such as the misprediction penalty and the number of cycles within the branch body, which can be collected in an offline analysis (profiling) before runtime. There are many techniques in the literature that adopt static if-conversion; they are described below. A compilation framework that delays if-conversion until schedule time is designed in @cite_26 to allow the compiler to minimize runtime by balancing control flow and predication. The authors in @cite_15 present an algorithm to perform if-conversion selectively on out-of-order processors that support dynamic speculation and guarded execution. They identified three criteria to measure the profitability of the conversion, based respectively on size, predictability, and profile information. The effect of their technique on net cycles, mispredictions and mis-speculations is exhibited in their paper. Another algorithm is reported in @cite_17 for the Itanium architecture; it initially operates on unpredicated code, and the if-conversion optimization is performed late in the compilation process. This generates faster (lower runtime) and denser (fewer instructions) code.
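To see what if-conversion does at a high level, the sketch below contrasts a data-dependent branch with its predicated (branch-free) form. This is a Python-level analogue only; real if-conversion rewrites compiler IR into predicated machine instructions, which none of the cited heuristics express at this level.

```python
import numpy as np

def clip_branchy(xs, lo, hi):
    """Data-dependent branches: hard to predict when xs is unsorted,
    which is exactly the case if-conversion targets."""
    out = []
    for x in xs:
        if x < lo:
            out.append(lo)
        elif x > hi:
            out.append(hi)
        else:
            out.append(x)
    return out

def clip_predicated(xs, lo, hi):
    """Same computation as branch-free selects: both arms are
    evaluated and a predicate picks the result, the high-level
    analogue of predicated instructions after if-conversion."""
    xs = np.asarray(xs)
    return np.where(xs < lo, lo, np.where(xs > hi, hi, xs))

data = [5, -3, 12, 7, 100]
print(clip_branchy(data, 0, 10))
print(clip_predicated(data, 0, 10))   # same result, no branches
```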
{ "cite_N": [ "@cite_15", "@cite_26", "@cite_17" ], "mid": [ "2072469765", "2163599246", "" ], "abstract": [ "Modern dynamically scheduled processors use branch prediction hardware to speculatively fetch and execute most likely executed paths in a program. Complex branch predictors have been proposed which attempt to identify these paths accurately such that the hardware can benefit from out-of-order (OOO) execution. Recent studies have shown that inspite of such complex prediction schemes, there still exist many frequently executed branches which are difficult to predict. Predicated execution has been proposed as an alternative technique to eliminate some of these branches in various forms ranging from a restrictive support to a full-blown support. We call the restrictive form of predicated execution as guarded execution. In this paper, we propose a new algorithm which uses profiling and selectively performs if-conversion for architectures with guarded execution support. Branch profiling is used to gather the taken, non-taken and misprediction counts for every branch. This combined with block profiling is used to select paths which suffer from heavy mispredictions and are profitable to if-convert. Effects of three different selection criterias, namely size-based, predictability-based and profiled-based, on net cycle improvements, branch mispredictions and mis-speculated instructions are then studied. We also propose new mechanisms to convert unsafe instructions to safe form to enhance the applicability of the technique. Finally, we explain numerous adjustments that were made to the selection criterias to better reflect the OOO processor behavior.", "Predicated execution is a promising architectural feature for exploiting instruction-level parallelism in the presence of control flow. Compiling for predicated execution involves converting program control flow into conditional, or predicated, instructions. This process is known as if-conversion. In order to effectively apply if-conversion, one must address two major issues: what should be if-converted and when the if-conversion should be applied. A compiler's use of predication as a representation is most effective when large amounts of code are if-converted and if-conversion is performed early in the compilation procedure. On the other hand the final code generated for a processor with predicated execution requires a delicate balance between control flow and predication to achieve efficient execution. The appropriate balance is tightly coupled with scheduling decisions and detailed processor characteristics. This paper presents an effective compilation framework that allows the compiler to maximize the benefits of predication as a compiler representation while delaying the final balancing of control flow and predication to schedule time.", "" ] }
1603.01112
2288661800
Control-flow dependence is an intrinsic limiting factor for program acceleration. With the availability of instruction-level parallel architectures, if-conversion optimization has, therefore, become pivotal for extracting parallelism from serial programs. While many if-conversion optimization heuristics have been proposed in the literature, most of them consider rigid criteria regardless of the underlying hardware and input programs. In this paper, we propose a novel if-conversion scheme that performs an efficient if-conversion transformation using a machine learning technique (NEAT). This method enables if-conversion customization over all branches within a program, unlike the literature that considered individual branches. Our technique also provides the flexibility required when compiling for heterogeneous systems. The efficacy of our approach is shown by experiments and reported results, which illustrate that programs can be accelerated on the same architecture without modifying the original code. Our technique applies to general-purpose programming languages (e.g. C/C++) and is transparent to the programmer. We implemented our technique in the LLVM 3.6.1 compilation infrastructure and experimented on the kernels of the SPEC-CPU2006 v1.1 benchmark suite running on a multicore system of Intel(R) Xeon(R) 3.50GHz processors. Our findings show a performance gain of up to 8.6 over the standard optimized code (LLVM -O2 with if-conversion included), indicating the need for if-conversion compilation optimization that can adapt to the unique characteristics of every individual branch.
Further, an algorithm that minimizes the number of predicates assigned to basic blocks is shown in @cite_31 ; predicates are assigned as early as possible, using dominance relations to relax dependence constraints. Moreover, in @cite_19 the program control flow is represented as a graph (called a program decision logic network), modeled as a Boolean equation, which is then minimized and used to regenerate predicated code.
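The decision-logic minimization of @cite_19 can be mimicked in miniature with a Boolean-algebra package: write the full-path predicate of each block, OR the predicates of the paths that reach it, and simplify. The example below uses sympy and hypothetical predicate names; it shows the flavour of the transformation, not the paper's algorithm.

```python
from sympy import symbols
from sympy.logic.boolalg import simplify_logic

# Predicates guarding blocks in a nested region:
#   if (p) { if (q) B1; else B2; }   (B3 runs on either inner path)
p, q = symbols("p q")
pred_B1 = p & q
pred_B2 = p & ~q
pred_B3 = pred_B1 | pred_B2          # decision-logic expression for B3

print(simplify_logic(pred_B3))       # -> p: one predicate suffices
```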
{ "cite_N": [ "@cite_19", "@cite_31" ], "mid": [ "2171299717", "1522608505" ], "abstract": [ "Modern compilers must expose sufficient amounts of Instruction-Level Parallelism (ILP) to achieve the promised performance increases of superscalar and VLIW processors. One of the major impediments to achieving this goal has been inefficient programmatic control flow. Historically, the compiler has translated the programmer's original control structure directly into assembly code with conditional branch instructions. Eliminating inefficiencies in handling branch instructions and exploiting ILP has been the subject of much research. However, traditional branch handling techniques cannot significantly alter the program's inherent control structure. The advent of predication as a program control representation has enabled compilers to manipulate control in a form more closely related to the underlying program logic. This work takes full advantage of the predication paradigm by abstracting the program control flow into a logical form referred to as a program decision logic network. This network is modeled as a Boolean equation and minimized using modified versions of logic synthesis techniques. After minimization, the more efficient version of the program's original control flow is re-expressed in predicated code. Furthermore, this paper proposes extensions to the HPL PlayDoh predication model in support of more effective predicate decision logic network minimization. Finally, this paper shows the ability of the mechanisms presented to overcome limits on ILP previously imposed by rigid program control structure.", "Instruction level parallelism has become more and more important in today's microprocessor design. These microprocessors have multiple function units, which can execute more than one instruction at the same machine cycle to enhance the uniprocessor performance. Since the function units are usually pipelined in such microprocessors, branch misprediction penalty tremendously degrades the CPU performance. In order to reduce the branch misprediction penalty, predicated operation has been introduced in such microprocessor design as one of the new architectural features, which allows compilers to remove branches from programs." ] }
1603.01112
2288661800
Control-flow dependence is an intrinsic limiting factor for pro- gram acceleration. With the availability of instruction-level par- allel architectures, if-conversion optimization has, therefore, be- come pivotal for extracting parallelism from serial programs. While many if-conversion optimization heuristics have been proposed in the literature, most of them consider rigid criteria regardless of the underlying hardware and input programs. In this paper, we propose a novel if-conversion scheme that preforms an efficient if-conversion transformation using a machine learning technique (NEAT). This method enables if-conversion customization overall branches within a program unlike the literature that considered in- dividual branches. Our technique also provides flexibility required when compiling for heterogeneous systems. The efficacy of our approach is shown by experiments and reported results which il- lustrate that the programs can be accelerated on the same archi- tecture and without modifying the original code. Our technique applies for general purpose programming languages (e.g. C C++) and is transparent for the programmer. We implemented our tech- nique in LLVM 3.6.1 compilation infrastructure and experimented on the kernels of SPEC-CPU2006 v1.1 benchmarks suite running on a multicore system of Intel(R) Xeon(R) 3.50GHz processors. Our findings show a performance gain up to 8.6 over the stan- dard optimized code (LLVM -O2 with if-conversion included), in- dicating the need for If-conversion compilation optimization that can adapt to the unique characteristics of every individual branch.
An algorithm that uses dynamic programming to generate code for different target architectures that support predicated execution is discussed in @cite_27 . Another approach handcrafts well-known algorithms (e.g. sorting and searching) into a constant number of branch-free loops, such that a branch predictor incurs only O(1) mispredictions @cite_20 .
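As a concrete taste of the branch-free style in @cite_20, the toy search below fixes the loop trip count by the array length and replaces the taken/not-taken comparison branch with an arithmetic select, so a static predictor mispredicts O(1) times per call. This is our own illustration in Python; the cited work hand-tailors heapsort and mergesort at the assembly level.

```python
def branchless_lower_bound(a, key):
    """Return the first index i with a[i] >= key in sorted list a.
    The loop runs a data-independent number of iterations, and the
    comparison feeds an arithmetic select instead of an if/else."""
    base, n = 0, len(a)
    while n > 1:
        half = n // 2
        base += half * (a[base + half - 1] < key)  # select, not a branch
        n -= half
    if n == 1:
        base += a[base] < key
    return base

xs = [1, 3, 3, 5, 8, 13]
print(branchless_lower_bound(xs, 3))   # -> 1
print(branchless_lower_bound(xs, 9))   # -> 5
```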
{ "cite_N": [ "@cite_27", "@cite_20" ], "mid": [ "2130359642", "1889773849" ], "abstract": [ "Retargetable C compilers are key components of today's embedded processor design platforms for quickly obtaining compiler support and performing early processor architecture exploration. The inherent problem of the retargetable compilation approach, though, is the well known trade-off between the compiler's flexibility and the quality of generated code. However, it can be circumvented by designing flexible, configurable code optimization techniques applicable to a certain range of target architectures. This paper focuses on target machines with predicated execution support which is wide-spread in deeply pipelined and highly parallel embedded processors used in next generation high-end video, multimedia and wireless devices. We present an efficient and quickly retargetable code optimization technique for predicated execution that is integrated into an industrial retargetable C compiler. Experimental results for several embedded processors demonstrate that the proposed technique is applicable to real-life target machines and that it produces significant code quality improvements for control intensive applications.", "According to a folk theorem, every program can be transformed into a program that produces the same output and only has one loop. We generalize this to a form where the resulting program has one loop and no other branches than the one associated with the loop control. For this branch, branch prediction is easy even for a static branch predictor. If the original program is of length κ, measured in the number of assembly-language instructions, and runs in t(n) time for an input of size n, the transformed program is of length O(κ) and runs in O(κt(n)) time. Normally sorting programs are short, but still κ may be too large for practical purposes. Therefore, we provide more efficient hand-tailored heapsort and mergesort programs. Our programs retain most features of the original programs--e.g. they perform the same number of element comparisons--and they induce O(1) branch mispredictions. On computers where branch mispredictions were expensive, some of our programs were, for integer data and small instances, faster than the counterparts in the GNU implementation of the C++ standard library." ] }
1603.01112
2288661800
Control-flow dependence is an intrinsic limiting factor for program acceleration. With the availability of instruction-level parallel architectures, if-conversion optimization has, therefore, become pivotal for extracting parallelism from serial programs. While many if-conversion optimization heuristics have been proposed in the literature, most of them consider rigid criteria regardless of the underlying hardware and input programs. In this paper, we propose a novel if-conversion scheme that performs an efficient if-conversion transformation using a machine learning technique (NEAT). This method enables if-conversion customization over all branches within a program, unlike the literature that considered individual branches. Our technique also provides the flexibility required when compiling for heterogeneous systems. The efficacy of our approach is shown by experiments and reported results which illustrate that the programs can be accelerated on the same architecture and without modifying the original code. Our technique applies to general-purpose programming languages (e.g. C/C++) and is transparent for the programmer. We implemented our technique in the LLVM 3.6.1 compilation infrastructure and experimented on the kernels of the SPEC-CPU2006 v1.1 benchmark suite running on a multicore system of Intel(R) Xeon(R) 3.50GHz processors. Our findings show a performance gain of up to 8.6 over the standard optimized code (LLVM -O2 with if-conversion included), indicating the need for an if-conversion compilation optimization that can adapt to the unique characteristics of every individual branch.
Other methods use dynamic if-conversion, in which a profiling process at runtime captures characteristics (e.g. the misprediction rate) to be used in optimization. In @cite_24 , runtime information is used to construct a dynamic optimizer that complements the static one in a previously presented algorithm. It can convert branches, or reverse their conversion, targeting higher performance by profiling the program to discover the highly mispredicted branches. Although this algorithm chooses the conversions that improve performance, it does not consider any correlation between the different branches, which are most probably related to each other, especially within the same function.
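A rough sketch of such a profile-driven policy is shown below; the 15% threshold and the profile format are assumptions made here for illustration, not values from @cite_24:

    MISPREDICTION_THRESHOLD = 0.15   # illustrative cutoff, not from the paper

    def choose_conversions(branch_profile):
        """branch_profile maps a branch id to (executions, mispredictions)."""
        decisions = {}
        for branch, (execs, mispreds) in branch_profile.items():
            rate = mispreds / execs if execs else 0.0
            # Each branch is decided in isolation, which is exactly the
            # per-branch view criticized in the paragraph above.
            decisions[branch] = ("if-convert" if rate > MISPREDICTION_THRESHOLD
                                 else "keep-branch")
        return decisions

    print(choose_conversions({"b0": (1000, 310), "b1": (1000, 20)}))
    # {'b0': 'if-convert', 'b1': 'keep-branch'}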
{ "cite_N": [ "@cite_24" ], "mid": [ "2142362286" ], "abstract": [ "Dynamic Optimization is an umbrella term that refers to any optimization of software that is performed after the initial compiles time. It is a complementary optimization opportunity that may greatly improve performance on any computer system, but plays an especially important role in statically scheduled code. Several groups are working on developing dynamic optimization systems, yet the area of dynamic optimization algorithms can still benefit from further research. We introduce a lightweight algorithm that can be used in any modern dynamic optimizer to balance control flow and predication based on actual runtime behavior. In addition, we study the effectiveness of predicting overall runtime behavior based on a small sample size. Preliminary results show that if we skip the warm-up period of programs, profiles based on a small sample size of a particular run can be quite representative of overall runtime behavior (up to 98 correlation). This profile information can be used effectively in a number of dynamic optimizations. We found that our dynamic if-conversion algorithm can use this collated profile data to incorporate actual branch misprediction rates into the if-conversion decision process. This method acts as an effective means for balancing the results of static if-conversion, achieving speedup values of up to 14.7 , and can be easily incorporated into modern dynamic optimizers." ] }
1603.01112
2288661800
Control-flow dependence is an intrinsic limiting factor for program acceleration. With the availability of instruction-level parallel architectures, if-conversion optimization has, therefore, become pivotal for extracting parallelism from serial programs. While many if-conversion optimization heuristics have been proposed in the literature, most of them consider rigid criteria regardless of the underlying hardware and input programs. In this paper, we propose a novel if-conversion scheme that performs an efficient if-conversion transformation using a machine learning technique (NEAT). This method enables if-conversion customization over all branches within a program, unlike the literature that considered individual branches. Our technique also provides the flexibility required when compiling for heterogeneous systems. The efficacy of our approach is shown by experiments and reported results which illustrate that the programs can be accelerated on the same architecture and without modifying the original code. Our technique applies to general-purpose programming languages (e.g. C/C++) and is transparent for the programmer. We implemented our technique in the LLVM 3.6.1 compilation infrastructure and experimented on the kernels of the SPEC-CPU2006 v1.1 benchmark suite running on a multicore system of Intel(R) Xeon(R) 3.50GHz processors. Our findings show a performance gain of up to 8.6 over the standard optimized code (LLVM -O2 with if-conversion included), indicating the need for an if-conversion compilation optimization that can adapt to the unique characteristics of every individual branch.
Also, a hardware mechanism that uses runtime information to convert only the hard-to-predict branches is presented in @cite_1 . The presented system provides two versions of the code: one that can be executed with predication and one that relies on a branch predictor.
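The following is only a loose software analogy of the wish-branch idea (the real mechanism is a hardware choice over compiler-generated predicated code, and the confidence cutoff here is invented for illustration): both forms of the conditional are available, and a confidence estimate chooses between them.

    def run_wish_branch(cond, then_val, else_val, predictor_confidence):
        # Easy-to-predict branch: rely on the branch predictor (branching form).
        if predictor_confidence > 0.9:   # illustrative confidence cutoff
            if cond:
                return then_val
            return else_val
        # Hard-to-predict branch: the predicated form selects arithmetically,
        # paying for both values but avoiding the misprediction penalty.
        p = int(cond)
        return p * then_val + (1 - p) * else_val

    print(run_wish_branch(True, 10, 20, predictor_confidence=0.5))   # 10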
{ "cite_N": [ "@cite_1" ], "mid": [ "2078584104" ], "abstract": [ "We propose a mechanism in which the compiler generates code that can be executed either as predicated code or nonpredicated code. The compiler-generated code is the same as predicated code, except the predicated conditional branches are not removed - they are left intact in the program code. These conditional branches are called wish branches. The goal of wish branches is to use predicated execution for hard-to-predict dynamic branches, and branch prediction for easy-to-predict dynamic branches, thereby obtaining the best of both worlds. Wish loops, one class of wish branches, use predication to reduce the misprediction penalty for hard-to-predict backward (loop) branches" ] }
1603.01112
2288661800
Control-flow dependence is an intrinsic limiting factor for program acceleration. With the availability of instruction-level parallel architectures, if-conversion optimization has, therefore, become pivotal for extracting parallelism from serial programs. While many if-conversion optimization heuristics have been proposed in the literature, most of them consider rigid criteria regardless of the underlying hardware and input programs. In this paper, we propose a novel if-conversion scheme that performs an efficient if-conversion transformation using a machine learning technique (NEAT). This method enables if-conversion customization over all branches within a program, unlike the literature that considered individual branches. Our technique also provides the flexibility required when compiling for heterogeneous systems. The efficacy of our approach is shown by experiments and reported results which illustrate that the programs can be accelerated on the same architecture and without modifying the original code. Our technique applies to general-purpose programming languages (e.g. C/C++) and is transparent for the programmer. We implemented our technique in the LLVM 3.6.1 compilation infrastructure and experimented on the kernels of the SPEC-CPU2006 v1.1 benchmark suite running on a multicore system of Intel(R) Xeon(R) 3.50GHz processors. Our findings show a performance gain of up to 8.6 over the standard optimized code (LLVM -O2 with if-conversion included), indicating the need for an if-conversion compilation optimization that can adapt to the unique characteristics of every individual branch.
Finally, there are methods that combine both static and dynamic techniques. A tree-based model that makes predictions using predication and vectorization techniques is presented in @cite_23 ; it introduces runtime performance ranking, assuming that a trained model already exists. Meanwhile, data is laid out in memory in an architecture-conscious manner, and random feature IDs, thresholds, and regression values are used to generate the feature values in a feature vector. Another contribution is proposed in @cite_29 , where a simple neural network (perceptron) hardware implementation is provided to improve branch prediction. A further method @cite_30 uses program-based static branch prediction, based on neural networks and decision trees, to map the static features associated with each branch to a prediction.
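A minimal sketch of predicated decision-tree traversal in the spirit of @cite_23 is given below; the flat-array layout and the toy tree are illustrative assumptions. The next node index is computed arithmetically from the comparison outcome instead of branching on it:

    feature_ids = [0, 1, 1]         # node i tests features[feature_ids[i]]
    thresholds  = [0.5, 0.3, 0.7]   # node i compares against thresholds[i]
    left_child  = [1, -1, -1]       # -1 marks a leaf slot
    right_child = [2, -1, -1]
    leaf_values = {1: 0.2, 2: 0.9}  # regression values at the leaves

    def predict(features):
        node = 0
        while node not in leaf_values:
            go_right = int(features[feature_ids[node]] > thresholds[node])
            # Predicated select of the next node: no data-dependent branch.
            node = (go_right * right_child[node]
                    + (1 - go_right) * left_child[node])
        return leaf_values[node]

    print(predict([0.6, 0.8]))   # the root sends it right, so 0.9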
{ "cite_N": [ "@cite_30", "@cite_29", "@cite_23" ], "mid": [ "2081211681", "2156484396", "2953233585" ], "abstract": [ "Correctly predicting the direction that branches will take is increasingly important in today's wide-issue computer architectures. The name program-based branch prediction is given to static branch prediction techniques that base their prediction on a program's structure. In this article, we investigate a new approach to program-based branch prediction that uses a body of existing programs to predict the branch behavior in a new program. We call this approach to program-based branch prediction evidence-based static prediction , or ESP. The main idea of ESP is that the behavior of a corpus of programs can be used to infer the behavior of new programs. In this article, we use neural networks and decision trees to map static features associated with each branch to a prediction that the branch will be taken. ESP shows significant advantages over other prediction mechanisms. Specifically, it is a program-based technique; it is effective across a range of programming languages and programming styles; and it does not rely on the use of expert-defined heuristics. In this article, we describe the application of ESP to the problem of static branch prediction and compare our results to existing program-based branch predictors. We also investigate the applicability of ESP across computer architectures, programming languages, compilers, and run-time systems. We provide results showing how sensitive ESP is to the number and type of static features and programs included in the ESP training sets, and we compare the efficacy of static branch prediction for subroutine libraries. Averaging over a body of 43 C and Fortran programs, ESP branch prediction results in a miss rate of 20 , as compared with the 25 miss rate obtained using the best existing program-based heuristics.", "This paper presents a new method for branch prediction. The key idea is to use one of the simplest possible neural networks, the perceptron, as an alternative to the commonly used two-bit counters. Our predictor achieves increased accuracy by making use of long branch histories, which are possible becasue the hardware resources for our method scale linearly with the history length. By contrast, other purely dynamic schemes require exponential resources. We describe our design and evaluate it with respect to two well known predictors. We show that for a 4K byte hardware budget our method improves misprediction rates for the SPEC 2000 benchmarks by 10.1 over the gshare predictor. Our experiments also provide a better understanding of the situations in which traditional predictors do and do not perform well. Finally, we describe techniques that allow our complex predictor to operate in one cycle.", "Tree-based models have proven to be an effective solution for web ranking as well as other problems in diverse domains. This paper focuses on optimizing the runtime performance of applying such models to make predictions, given an already-trained model. Although exceedingly simple conceptually, most implementations of tree-based models do not efficiently utilize modern superscalar processor architectures. 
By laying out data structures in memory in a more cache-conscious fashion, removing branches from the execution flow using a technique called predication, and micro-batching predictions using a technique called vectorization, we are able to better exploit modern processor architectures and significantly improve the speed of tree-based models over hard-coded if-else blocks. Our work contributes to the exploration of architecture-conscious runtime implementations of machine learning algorithms." ] }
1603.01112
2288661800
Control-flow dependence is an intrinsic limiting factor for program acceleration. With the availability of instruction-level parallel architectures, if-conversion optimization has, therefore, become pivotal for extracting parallelism from serial programs. While many if-conversion optimization heuristics have been proposed in the literature, most of them consider rigid criteria regardless of the underlying hardware and input programs. In this paper, we propose a novel if-conversion scheme that performs an efficient if-conversion transformation using a machine learning technique (NEAT). This method enables if-conversion customization over all branches within a program, unlike the literature that considered individual branches. Our technique also provides the flexibility required when compiling for heterogeneous systems. The efficacy of our approach is shown by experiments and reported results which illustrate that the programs can be accelerated on the same architecture and without modifying the original code. Our technique applies to general-purpose programming languages (e.g. C/C++) and is transparent for the programmer. We implemented our technique in the LLVM 3.6.1 compilation infrastructure and experimented on the kernels of the SPEC-CPU2006 v1.1 benchmark suite running on a multicore system of Intel(R) Xeon(R) 3.50GHz processors. Our findings show a performance gain of up to 8.6 over the standard optimized code (LLVM -O2 with if-conversion included), indicating the need for an if-conversion compilation optimization that can adapt to the unique characteristics of every individual branch.
Moreover, an attempt to construct an online ensemble learning framework consisting of small trees to solve the problem of hardware conditional branch prediction is discussed in @cite_21 . The learning-based techniques (the third category) may be the only ones that consider the effect of if-conversion on the whole program, as they train their systems over iterations to converge to the least possible runtime. The other approaches make a separate decision for each individual branch regardless of its correlation with the others. The open problem, then, is how fast such techniques can evolve the selection space towards the best-performing configuration, and that is what we handle in our system.
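To illustrate whole-program selection, the sketch below uses a generic genetic search over per-branch conversion bitmasks (this is not the authors' NEAT implementation; the fitness stub and all constants are assumptions), so correlated branches are evaluated jointly by a single runtime measurement:

    import random

    N_BRANCHES = 8

    def measure_runtime(mask):
        # Placeholder fitness: a real system would apply the conversions in
        # `mask`, compile, and time the program; this stub is illustrative.
        return sum(mask) * 0.9 + random.random()

    def evolve(generations=20, pop_size=10):
        pop = [[random.randint(0, 1) for _ in range(N_BRANCHES)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=measure_runtime)        # lower runtime is better
            survivors = pop[: pop_size // 2]
            children = []
            for parent in survivors:
                child = parent[:]
                child[random.randrange(N_BRANCHES)] ^= 1  # flip one decision
                children.append(child)
            pop = survivors + children
        return min(pop, key=measure_runtime)

    print(evolve())   # best-found per-branch conversion mask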
{ "cite_N": [ "@cite_21" ], "mid": [ "2491694318" ], "abstract": [ "We study resource-limited online learning, motivated by the problem of conditional-branch outcome prediction in computer architecture. In particular, we consider (parallel) time and space-efficient ensemble learners for online settings, empirically demonstrating benefits similar to those shown previously for offline ensembles. Our learning algorithms are inspired by the previously published “boosting by filtering” framework as well as the offline Arc-x4 boosting-style algorithm. We train ensembles of online decision trees using a novel variant of the ID4 online decision-tree algorithm as the base learner, and show empirical results for both boosting and bagging-style online ensemble methods. Our results evaluate these methods on both our branch prediction domain and online variants of three familiar machine-learning benchmarks. Our data justifies three key claims. First, we show empirically that our extensions to ID4 significantly improve performance for single trees and additionally are critical to achieving performance gains in tree ensembles. Second, our results indicate significant improvements in predictive accuracy with ensemble size for the boosting-style algorithm. The bagging algorithms we tried showed poor performance relative to the boosting-style algorithm (but still improve upon individual base learners). Third, we show that ensembles of small trees are often able to outperform large single trees with the same number of nodes (and similarly outperform smaller ensembles of larger trees that use the same total number of nodes). This makes online boosting particularly useful in domains such as branch prediction with tight space restrictions (i.e., the available real-estate on a microprocessor chip)." ] }
1603.01112
2288661800
Control-flow dependence is an intrinsic limiting factor for program acceleration. With the availability of instruction-level parallel architectures, if-conversion optimization has, therefore, become pivotal for extracting parallelism from serial programs. While many if-conversion optimization heuristics have been proposed in the literature, most of them consider rigid criteria regardless of the underlying hardware and input programs. In this paper, we propose a novel if-conversion scheme that performs an efficient if-conversion transformation using a machine learning technique (NEAT). This method enables if-conversion customization over all branches within a program, unlike the literature that considered individual branches. Our technique also provides the flexibility required when compiling for heterogeneous systems. The efficacy of our approach is shown by experiments and reported results which illustrate that the programs can be accelerated on the same architecture and without modifying the original code. Our technique applies to general-purpose programming languages (e.g. C/C++) and is transparent for the programmer. We implemented our technique in the LLVM 3.6.1 compilation infrastructure and experimented on the kernels of the SPEC-CPU2006 v1.1 benchmark suite running on a multicore system of Intel(R) Xeon(R) 3.50GHz processors. Our findings show a performance gain of up to 8.6 over the standard optimized code (LLVM -O2 with if-conversion included), indicating the need for an if-conversion compilation optimization that can adapt to the unique characteristics of every individual branch.
As examples of research that considered using machine learning algorithms for efficient compiler optimization, optimization ordering, or transformation parameter tuning, we discuss some works below. A compiler-based approach for mapping parallelism to multicore processors is proposed in @cite_33 ; this technique applies an off-line trained model to predict the number of threads and the scheduling policy for parallel programs. An attempt at speeding up iterative compilation, by building models of the program features using machine learning techniques, was presented in @cite_25 . The authors in @cite_13 introduced a decision-tree based technique that generates compiler heuristics for the target processor, considering the loop unrolling optimization as a source of performance features. In @cite_5 , a model-driven engine that uses detailed models to estimate optimization parameter values was shown to generate code whose performance is comparable to that of empirically tuned code. A per-method logistic regression technique is used in @cite_22 to select proper optimizations for each method in the program depending on its features.
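A toy version of the per-method idea from @cite_22 follows; the method features, labels, and the availability of scikit-learn are assumptions, and only the overall recipe of fitting a logistic regression over method features mirrors the cited approach:

    from sklearn.linear_model import LogisticRegression

    # One row per method: [instruction count, loop count, branch count]
    X = [[120, 1, 4], [3400, 12, 90], [45, 0, 1], [900, 6, 30]]
    y = [0, 1, 0, 1]   # 1 = apply a given optimization to this method

    model = LogisticRegression().fit(X, y)
    print(model.predict([[1500, 8, 40]]))   # decision for an unseen method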
{ "cite_N": [ "@cite_22", "@cite_33", "@cite_5", "@cite_13", "@cite_25" ], "mid": [ "2091994380", "2166536280", "2153637321", "2143124065", "2168519934" ], "abstract": [ "Determining the best set of optimizations to apply to a program has been a long standing problem for compiler writers. To reduce the complexity of this task, existing approaches typically apply the same set of optimizations to all procedures within a program, without regard to their particular structure. This paper develops a new method-specific approach that automatically selects the best optimizations on a per method basis within a dynamic compiler. Our approach uses the machine learning technique of logistic regression to automatically derive a predictive model that determines which optimizations to apply based on the features of a method. This technique is implemented in the Jikes RVM Java JIT compiler. Using this approach we reduce the average total execution time of the SPECjvm98 benchmarks by 29 . When the same heuristic is applied to the DaCapo+ benchmark suite, we obtain an average 33 reduction over the default level O2 setting.", "The efficient mapping of program parallelism to multi-core processors is highly dependent on the underlying architecture. This paper proposes a portable and automatic compiler-based approach to mapping such parallelism using machine learning. It develops two predictors: a data sensitive and a data insensitive predictor to select the best mapping for parallel programs. They predict the number of threads and the scheduling policy for any given program using a model learnt off-line. By using low-cost profiling runs, they predict the mapping for a new unseen program across multiple input data sets. We evaluate our approach by selecting parallelism mapping configurations for OpenMP programs on two representative but different multi-core platforms (the Intel Xeon and the Cell processors). Performance of our technique is stable across programs and architectures. On average, it delivers above 96 performance of the maximum available on both platforms. It achieve, on average, a 37 (up to 17.5 times) performance improvement over the OpenMP runtime default scheme on the Cell platform. Compared to two recent prediction models, our predictors achieve better performance with a significant lower profiling cost.", "Empirical program optimizers estimate the values of key optimization parameters by generating different program versions and running them on the actual hardware to determine which values give the best performance. In contrast, conventional compilers use models of programs and machines to choose these parameters. It is widely believed that model-driven optimization does not compete with empirical optimization, but few quantitative comparisons have been done to date. To make such a comparison, we replaced the empirical optimization engine in ATLAS (a system for generating a dense numerical linear algebra library called the BLAS) with a model-driven optimization engine that used detailed models to estimate values for optimization parameters, and then measured the relative performance of the two systems on three different hardware platforms. Our experiments show that model-driven optimization can be surprisingly effective, and can generate code whose performance is comparable to that of code generated by empirical optimizers for the BLAS.", "Achieving high performance on modern processors heavily relies on the compiler optimizations to exploit the microprocessor architecture. 
The efficiency of optimization directly depends on the compiler heuristics. These heuristics must be target-specific and each new processor generation requires heuristics reengineering.In this paper, we address the automatic generation of optimization heuristics for a target processor by machine learning. We evaluate the potential of this method on an always legal and simple transformation: loop unrolling. Though simple to implement, this transformation may have strong effects on program execution (good or bad). However deciding to perform the transformation or not is difficult since many interacting parameters must be taken into account. So we propose a machine learning approach.We try to answer the following questions: is it possible to devise a learning process that captures the relevant parameters involved in loop unrolling performance? Does the Machine Learning Based Heuristics achieve better performance than existing ones?", "Iterative compiler optimization has been shown to outperform static approaches. This, however, is at the cost of large numbers of evaluations of the program. This paper develops a new methodology to reduce this number and hence speed up iterative optimization. It uses predictive modelling from the domain of machine learning to automatically focus search on those areas likely to give greatest performance. This approach is independent of search algorithm, search space or compiler infrastructure and scales gracefully with the compiler optimization space size. Off-line, a training set of programs is iteratively evaluated and the shape of the spaces and program features are modelled. These models are learnt and used to focus the iterative optimization of a new program. We evaluate two learnt models, an independent and Markov model, and evaluate their worth on two embedded platforms, the Texas Instrument C67I3 and the AMD Au1500. We show that such learnt models can speed up iterative search on large spaces by an order of magnitude. This translates into an average speedup of 1.22 on the TI C6713 and 1.27 on the AMD Au1500 in just 2 evaluations." ] }
1603.01076
2290061578
In this article we study the problem of document image representation based on visual features. We propose a comprehensive experimental study that compares three types of visual document image representations: (1) traditional so-called shallow features, such as the RunLength and the Fisher-Vector descriptors, (2) deep features based on Convolutional Neural Networks, and (3) features extracted from hybrid architectures that take inspiration from the two previous ones. We evaluate these features in several tasks (i.e. classification, clustering, and retrieval) and in different setups (e.g. domain transfer) using several public and in-house datasets. Our results show that deep features generally outperform other types of features when there is no domain shift and the new task is closely related to the one used to train the model. However, when a large domain or task shift is present, the Fisher-Vector shallow features generalize better and often obtain the best results.
Some more elaborate representations, such as RunLength histograms @cite_15 @cite_19 @cite_22 , have been shown to be more generic and hence better suited for document image representation. Many of these representations can be combined with spatial pyramids @cite_3 to explicitly add a coarse structure, leading to higher accuracies at the cost of higher-dimensional representations. However, in general, all these traditional features contain a relatively limited amount of information, and while they might perform well on the specific dataset and task for which they were designed, they are not generic enough to handle a variety of document class types, datasets, and tasks.
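As a concrete sketch of the RunLength descriptor family, the snippet below computes a horizontal run-length histogram for a binarized image; the bin edges and the toy image are illustrative, and real systems typically accumulate runs over several directions and for both pixel values:

    import itertools

    BINS = [1, 2, 4, 8, 16]   # a run of length <= edge falls into that bin

    def runlength_histogram(binary_rows, value=1):
        """Horizontal run-length histogram over rows of 0/1 pixels."""
        hist = [0] * (len(BINS) + 1)
        for row in binary_rows:
            for pixel, run in itertools.groupby(row):
                if pixel != value:
                    continue
                length = len(list(run))
                for i, edge in enumerate(BINS):
                    if length <= edge:
                        hist[i] += 1
                        break
                else:
                    hist[-1] += 1   # run longer than the largest edge
        return hist

    image = [[0, 1, 1, 0, 1],
             [1, 1, 1, 1, 0]]
    print(runlength_histogram(image))   # [1, 1, 1, 0, 0, 0]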
{ "cite_N": [ "@cite_19", "@cite_15", "@cite_22", "@cite_3" ], "mid": [ "171088530", "2091822342", "", "2162915993" ], "abstract": [ "We describe a simple, fast, and accurate system for document image zone classification — an important subproblem of document image analysis — that results from a detailed analysis of different features. Using a novel combination of known algorithms, we achieve a very competitive error rate of 1.46 (n = 13811) in comparison to (, 2006) who report an error rate of 1.55 (n = 24177) using more complicated techniques. The experiments were performed on zones extracted from the widely used UW-III database, which is representative of images of scanned journal pages and contains ground-truthed real-world data.", "Abstract Color histogram is the most commonly used color feature in image retrieval systems. However, this feature cannot effectively characterize an image, since it only captures the global properties. To make the retrieval more accurate, this paper introduces a run-length (RL) feature. The feature integrates the information of color and shape of the objects in an image. It can effectively discriminate the directions, areas and geometrical shapes of the objects. Yet, extracting the RL feature is time-consuming. For that reason, this paper also provides one revised representation of the RL, called semi-run-length (SRL). Based on the SRL feature, this paper develops an image retrieval system, and the experimental results show that the system gives an impressive performance.", "", "This paper presents a method for recognizing scene categories based on approximate global geometric correspondence. This technique works by partitioning the image into increasingly fine sub-regions and computing histograms of local features found inside each sub-region. The resulting \"spatial pyramid\" is a simple and computationally efficient extension of an orderless bag-of-features image representation, and it shows significantly improved performance on challenging scene categorization tasks. Specifically, our proposed method exceeds the state of the art on the Caltech-101 database and achieves high accuracy on a large database of fifteen natural scene categories. The spatial pyramid framework also offers insights into the success of several recently proposed image descriptions, including Torralba’s \"gist\" and Lowe’s SIFT descriptors." ] }
1603.01076
2290061578
In this article we study the problem of document image representation based on visual features. We propose a comprehensive experimental study that compares three types of visual document image representations: (1) traditional so-called shallow features, such as the RunLength and the Fisher-Vector descriptors, (2) deep features based on Convolutional Neural Networks, and (3) features extracted from hybrid architectures that take inspiration from the two previous ones. We evaluate these features in several tasks (i.e. classification, clustering, and retrieval) and in different setups (e.g. domain transfer) using several public and in-house datasets. Our results show that deep features generally outperform other types of features when there is no domain shift and the new task is closely related to the one used to train the model. However, when a large domain or task shift is present, the Fisher-Vector shallow features generalize better and often obtain the best results.
In a different direction, some more recent works @cite_11 @cite_8 @cite_13 @cite_30 have drawn inspiration from representations typically used for natural images, and have shown that popular natural image representations such as the bag-of-visual-words (BoV) @cite_40 or the Fisher Vector @cite_31 , built on top of densely extracted local descriptors such as SIFT @cite_48 or SURF @cite_7 , lead to notable improvements. All the latter representations are in general task-agnostic: they are combined with the right algorithm, such as a classifier or a clustering method, in order to produce the right prediction depending on the target application. These shallow features were shown to generalize very well across tasks @cite_30 .
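A compact sketch of the BoV encoding follows; the toy codebook and descriptors stand in for real k-means codewords and SIFT/SURF descriptors. Each local descriptor votes for its nearest visual word, and the normalized histogram represents the image:

    import numpy as np

    codebook = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]])  # 3 visual words

    def bov_encode(descriptors, codebook):
        # Pairwise distances, shape (n_descriptors, n_words).
        d = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
        assignments = d.argmin(axis=1)
        hist = np.bincount(assignments, minlength=len(codebook)).astype(float)
        return hist / hist.sum()

    descriptors = np.array([[0.1, 0.2], [0.9, 1.1], [0.2, 0.9], [0.0, 0.1]])
    print(bov_encode(descriptors, codebook))   # -> [0.5, 0.25, 0.25]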
{ "cite_N": [ "@cite_30", "@cite_7", "@cite_8", "@cite_48", "@cite_40", "@cite_31", "@cite_13", "@cite_11" ], "mid": [ "2234144652", "", "", "2151103935", "1625255723", "2147238549", "", "2083122709" ], "abstract": [ "The main focus of this chapter is document image classification and retrieval, where we analyse and compare different parameters for the run-length histogram and Fisher vector-based image representations. We do an exhaustive experimental study using different document image data sets, including the MARG benchmarks, two data sets built on customer data and the images from the patent image classification task of the CLEF-IP 2011. The aim of the study is to give guidelines on how to best choose the parameters such that the same features perform well on different tasks. As an example of such need, we describe the image-based patent retrieval tasks of CLEF-IP 2011, where we used the same image representation to predict the image type and retrieve relevant patents.", "", "", "This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.", "We present a novel method for generic visual categorization: the problem of identifying the object content of natural images while generalizing across variations inherent to the object class. This bag of keypoints method is based on vector quantization of affine invariant descriptors of image patches. We propose and compare two alternative implementations using different classifiers: Naive Bayes and SVM. The main advantages of the method are that it is simple, computationally efficient and intrinsically invariant. We present results for simultaneously classifying seven semantic visual categories. These results clearly demonstrate that the method is robust to background clutter and produces good categorization accuracy even without exploiting geometric information.", "Within the field of pattern classification, the Fisher kernel is a powerful framework which combines the strengths of generative and discriminative approaches. The idea is to characterize a signal with a gradient vector derived from a generative probability model and to subsequently feed this representation to a discriminative classifier. We propose to apply this framework to image categorization where the input signals are images and where the underlying generative model is a visual vocabulary: a Gaussian mixture model which approximates the distribution of low-level features in images. 
We show that Fisher kernels can actually be understood as an extension of the popular bag-of-visterms. Our approach demonstrates excellent performance on two challenging databases: an in-house database of 19 object scene categories and the recently released VOC 2006 database. It is also very practical: it has low computational needs both at training and test time and vocabularies trained on one set of categories can be applied to another set without any significant loss in performance.", "", "In this paper we present a method for the segmentation of continuous page streams into multipage documents and the simultaneous classification of the resulting documents. We first present an approach to combine the multiple pages of a document into a single feature vector that represents the whole document. Despite its simplicity and low computational cost, the proposed representation yields results comparable to more complex methods in multipage document classification tasks. We then exploit this representation in the context of page stream segmentation. The most plausible segmentation of a page stream into a sequence of multipage documents is obtained by optimizing a statistical model that represents the probability of each segmented multipage document belonging to a particular class. Experimental results are reported on a large sample of real administrative multipage documents." ] }
1603.01025
2291160084
Recent advances in convolutional neural networks have considered model complexity and hardware efficiency to enable deployment onto embedded systems and mobile devices. For example, it is now well-known that the arithmetic operations of deep networks can be encoded down to 8-bit fixed-point without significant deterioration in performance. However, further reduction in precision down to as low as 3-bit fixed-point results in significant losses in performance. In this paper we propose a new data representation that enables state-of-the-art networks to be encoded to 3 bits with negligible loss in classification performance. To perform this, we take advantage of the fact that the weights and activations in a trained network naturally have non-uniform distributions. Using non-uniform, base-2 logarithmic representation to encode weights, communicate activations, and perform dot-products enables networks to 1) achieve higher classification accuracies than fixed-point at the same resolution and 2) eliminate bulky digital multipliers. Finally, we propose an end-to-end training procedure that uses log representation at 5-bits, which achieves higher final test accuracy than linear at 5-bits.
@cite_1 @cite_10 @cite_3 @cite_6 analyzed the effects of quantizing the trained weights for inference. For example, @cite_13 shows that convolutional layers in AlexNet @cite_8 can be encoded with as little as 5 bits without a significant accuracy penalty. There has also been recent work on training using low-precision arithmetic. @cite_20 propose a stochastic rounding scheme to help train networks using 16-bit fixed-point. @cite_21 propose quantized back-propagation and ternary connect. This method reduces the number of floating-point multiplications by casting these operations into powers-of-two multiplies, which are easily realized with bitshifts in digital hardware. They apply this technique on MNIST and CIFAR10 with little loss in performance. However, their method does not completely eliminate all multiplications end-to-end: during test time the network uses the learned full-resolution weights for forward propagation. Training with reduced precision is motivated by the idea that high-precision gradient updates are unnecessary for the stochastic optimization of networks @cite_17 @cite_25 @cite_4 . In fact, some studies show that gradient noise helps convergence. For example, @cite_14 empirically finds that gradient noise can encourage faster exploration and annealing of the optimization space, which can help network generalization performance.
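A minimal sketch of the stochastic rounding idea is given below (the 8-fractional-bit grid is an illustrative choice): a value is rounded up with probability equal to its fractional remainder, so the rounding is unbiased in expectation.

    import math
    import random

    def stochastic_round(x, step=2 ** -8):   # step models a fixed-point grid
        scaled = x / step
        low = math.floor(scaled)
        frac = scaled - low
        return (low + (1 if random.random() < frac else 0)) * step

    samples = [stochastic_round(0.01) for _ in range(10000)]
    print(sum(samples) / len(samples))   # close to 0.01 on average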
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_8", "@cite_10", "@cite_21", "@cite_1", "@cite_6", "@cite_3", "@cite_13", "@cite_25", "@cite_20", "@cite_17" ], "mid": [ "2263490141", "", "2618530766", "", "2198190323", "2962735857", "", "", "2963674932", "", "2963374099", "2113651538" ], "abstract": [ "Deep feedforward and recurrent networks have achieved impressive results in many perception and language processing applications. Recently, more complex architectures such as Neural Turing Machines and Memory Networks have been proposed for tasks including question answering and general computation, creating a new set of optimization challenges. In this paper, we explore the low-overhead and easy-to-implement optimization technique of adding annealed Gaussian noise to the gradient, which we find surprisingly effective when training these very deep architectures. Unlike classical weight noise, gradient noise injection is complementary to advanced stochastic optimization algorithms such as Adam and AdaGrad. The technique not only helps to avoid overfitting, but also can result in lower training loss. We see consistent improvements in performance across an array of complex models, including state-of-the-art deep networks for question answering and algorithm learning. We observe that this optimization strategy allows a fully-connected 20-layer deep network to escape a bad initialization with standard stochastic gradient descent. We encourage further application of this technique to additional modern neural architectures.", "", "We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5 and 17.0 , respectively, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully connected layers we employed a recently developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3 , compared to 26.2 achieved by the second-best entry.", "", "For most deep learning algorithms training is notoriously time consuming. Since most of the computation in training neural networks is typically spent on floating point multiplications, we investigate an approach to training that eliminates the need for most of these. Our method consists of two parts: First we stochastically binarize weights to convert multiplications involved in computing hidden states to sign changes. Second, while back-propagating error derivatives, in addition to binarizing the weights, we quantize the representations at each layer to convert the remaining multiplications into binary shifts. 
Experimental results across 3 popular datasets (MNIST, CIFAR10, SVHN) show that this approach not only does not hurt classification performance but can result in even better performance than standard stochastic gradient descent training, paving the way to fast, hardware-friendly training of neural networks.", "Recurrent neural networks have shown excellent performance in many applications; however they require increased complexity in hardware or software based implementations. The hardware complexity can be much lowered by minimizing the word-length of weights and signals. This work analyzes the fixed-point performance of recurrent neural networks using a retrain based quantization method. The quantization sensitivity of each layer in RNNs is studied, and the overall fixed-point optimization results minimizing the capacity of weights while not sacrificing the performance are presented. A language model and a phoneme recognition examples are used.", "", "", "Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the total number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.", "", "Training of large-scale deep neural networks is often constrained by the available computational resources. We study the effect of limited precision data representation and computation on neural network training. Within the context of lowprecision fixed-point computations, we observe the rounding scheme to play a crucial role in determining the network's behavior during training. Our results show that deep networks can be trained using only 16-bit wide fixed-point number representation when using stochastic rounding, and incur little to no degradation in the classification accuracy. We also demonstrate an energy-efficient hardware accelerator that implements low-precision fixed-point arithmetic with stochastic rounding.", "This contribution develops a theoretical framework that takes into account the effect of approximate optimization on learning algorithms. The analysis shows distinct tradeoffs for the case of small-scale and large-scale learning problems. Small-scale learning problems are subject to the usual approximation-estimation tradeoff. Large-scale learning problems are subject to a qualitatively different tradeoff involving the computational complexity of the underlying optimization algorithms in non-trivial ways." ] }
1603.01025
2291160084
Recent advances in convolutional neural networks have considered model complexity and hardware efficiency to enable deployment onto embedded systems and mobile devices. For example, it is now well-known that the arithmetic operations of deep networks can be encoded down to 8-bit fixed-point without significant deterioration in performance. However, further reduction in precision down to as low as 3-bit fixed-point results in significant losses in performance. In this paper we propose a new data representation that enables state-of-the-art networks to be encoded to 3 bits with negligible loss in classification performance. To perform this, we take advantage of the fact that the weights and activations in a trained network naturally have non-uniform distributions. Using non-uniform, base-2 logarithmic representation to encode weights, communicate activations, and perform dot-products enables networks to 1) achieve higher classification accuracies than fixed-point at the same resolution and 2) eliminate bulky digital multipliers. Finally, we propose an end-to-end training procedure that uses log representation at 5-bits, which achieves higher final test accuracy than linear at 5-bits.
There have been a few but significant advances in the development of specialized hardware for large networks. For example, @cite_2 developed a Field-Programmable Gate Array (FPGA) implementation to perform real-time forward propagation. These works have also performed comprehensive studies of classification performance and energy efficiency as a function of resolution. @cite_24 have also explored the design of convolutions in the context of memory versus compute management under the roofline model. Other works focus on specialized, optimized kernels for general-purpose GPUs @cite_18 .
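For reference, the roofline bound used in such design-space studies reduces to a one-line computation; the peak and bandwidth figures below are illustrative, not measurements from the cited works:

    def roofline_bound(peak_gflops, bandwidth_gbs, flops, bytes_moved):
        intensity = flops / bytes_moved          # FLOPs per byte of traffic
        return min(peak_gflops, bandwidth_gbs * intensity)

    # e.g. a layer doing 2 GFLOPs over 0.1 GB of memory traffic:
    print(roofline_bound(peak_gflops=100.0, bandwidth_gbs=10.0,
                         flops=2e9, bytes_moved=1e8))   # 100.0 (compute-bound)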
{ "cite_N": [ "@cite_24", "@cite_18", "@cite_2" ], "mid": [ "2094756095", "1667652561", "1968422655" ], "abstract": [ "Convolutional neural network (CNN) has been widely employed for image recognition because it can achieve high accuracy by emulating behavior of optic nerves in living creatures. Recently, rapid growth of modern applications based on deep learning algorithms has further improved research and implementations. Especially, various accelerators for deep CNN have been proposed based on FPGA platform because it has advantages of high performance, reconfigurability, and fast development round, etc. Although current FPGA accelerators have demonstrated better performance over generic processors, the accelerator design space has not been well exploited. One critical problem is that the computation throughput may not well match the memory bandwidth provided an FPGA platform. Consequently, existing approaches cannot achieve best performance due to under-utilization of either logic resource or memory bandwidth. At the same time, the increasing complexity and scalability of deep learning applications aggravate this problem. In order to overcome this problem, we propose an analytical design scheme using the roofline model. For any solution of a CNN design, we quantitatively analyze its computing throughput and required memory bandwidth using various optimization techniques, such as loop tiling and transformation. Then, with the help of rooine model, we can identify the solution with best performance and lowest FPGA resource requirement. As a case study, we implement a CNN accelerator on a VC707 FPGA board and compare it to previous approaches. Our implementation achieves a peak performance of 61.62 GFLOPS under 100MHz working frequency, which outperform previous approaches significantly.", "We present a library that provides optimized implementations for deep learning primitives. Deep learning workloads are computationally intensive, and optimizing the kernels of deep learning workloads is difficult and time-consuming. As parallel architectures evolve, kernels must be reoptimized for new processors, which makes maintaining codebases difficult over time. Similar issues have long been addressed in the HPC community by libraries such as the Basic Linear Algebra Subroutines (BLAS) [2]. However, there is no analogous library for deep learning. Without such a library, researchers implementing deep learning workloads on parallel processors must create and optimize their own implementations of the main computational kernels, and this work must be repeated as new parallel processors emerge. To address this problem, we have created a library similar in intent to BLAS, with optimized routines for deep learning workloads. Our implementation contains routines for GPUs, and similarly to the BLAS library, could be implemented for other platforms. The library is easy to integrate into existing frameworks, and provides optimized performance and memory usage. For example, integrating cuDNN into Caffe, a popular framework for convolutional networks, improves performance by 36 on a standard model while also reducing memory consumption.", "In this paper we present a scalable hardware architecture to implement large-scale convolutional neural networks and state-of-the-art multi-layered artificial vision systems. This system is fully digital and is a modular vision engine with the goal of performing real-time detection, recognition and segmentation of mega-pixel images. 
We present a performance comparison between a software, FPGA and ASIC implementation that shows a speed up in custom hardware implementations." ] }
1603.00562
2951642328
We consider the problem of designing revenue-optimal auctions for selling two items and bidders' valuations are independent among bidders but negatively correlated among items. In this paper, we obtain the closed-form optimal auction for this setting, by directly addressing the two difficulties above. In particular, the first difficulty is that when pointwise maximizing virtual surplus under multi-dimensional feasibility (i.e., the Border feasibility), (1) neither the optimal interim allocation is trivially monotone in the virtual value, (2) nor the virtual value is monotone in the bidder's type. As a result, the optimal interim allocations resulting from virtual surplus maximization no longer guarantees BIC. To address (1), we prove a generalization of Border's theorem and show that optimal interim allocation is indeed monotone in the virtual value. To address (2), we adapt Myerson's ironing procedure to this setting by redefining the (ironed) virtual value as a function of the lowest utility point. The second difficulty, perhaps a more challenging one, is that the lowest utility type in general is no longer at the endpoints of the type interval. To address this difficulty, we show by construction that there exist an allocation rule and an induced lowest utility type such that they form a solution of the virtual surplus maximization and in the meanwhile guarantees IIR. In the single bidder case, the optimal auction consists of a randomized bundle menu and a deterministic bundle menu; while in the multiple bidder case, the optimal auction is a randomization between two extreme mechanisms. The optimal solutions of our setting can be implemented by a Bayesian IC and IR auction, however, perhaps surprisingly, the revenue of this auction cannot be achieved by any (dominant-strategy) IC and IR auction.
Negatively correlated valuations are not uncommon in the literature. For example, consider an instance of the well-known facility location game with two facilities @cite_18 : agents are interested in services from the two facilities, located at the endpoints of a street. Each agent's type is his or her location on the street, between the two facilities. The cost (negative utility) to either facility, the same as in @cite_2 , is simply one's distance to that facility, so the overall costs (as well as valuations) sum up to a constant, namely the length of the street.
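Written out with notation introduced here only for illustration: an agent at location $t$ on a street of length $L$ has costs $c_1(t) = t$ and $c_2(t) = L - t$ to the two facilities, so $c_1(t) + c_2(t) = L$ for every type $t \in [0, L]$; the two values are perfectly negatively correlated.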
{ "cite_N": [ "@cite_18", "@cite_2" ], "mid": [ "2154843747", "2108957189" ], "abstract": [ "We consider the problem of locating facilities in a metric space to serve a set of selfish agents. The cost of an agent is the distance between her own location and the nearest facility. The social cost is the total cost of the agents. We are interested in designing strategy-proof mechanisms without payment that have a small approximation ratio for social cost. A mechanism is a (possibly randomized) algorithm which maps the locations reported by the agents to the locations of the facilities. A mechanism is strategy-proof if no agent can benefit from misreporting her location in any configuration. This setting was first studied by Procaccia and Tennenholtz [21]. They focused on the facility game where agents and facilities are located on the real line. studied the mechanisms for the facility games in a general metric space [1]. However, they focused on the games with only one facility. In this paper, we study the two-facility game in a general metric space, which extends both previous models. We first prove an Ω(n) lower bound of the social cost approximation ratio for deterministic strategy-proof mechanisms. Our lower bound even holds for the line metric space. This significantly improves the previous constant lower bounds [21, 17]. Notice that there is a matching linear upper bound in the line metric space [21]. Next, we provide the first randomized strategy-proof mechanism with a constant approximation ratio of 4. Our mechanism works in general metric spaces. For randomized strategy-proof mechanisms, the previous best upper bound is O(n) which works only in the line metric space.", "The literature on algorithmic mechanism design is mostly concerned with game-theoretic versions of optimization problems to which standard economic money-based mechanisms cannot be applied efficiently. Recent years have seen the design of various truthful approximation mechanisms that rely on payments. In this article, we advocate the reconsideration of highly structured optimization problems in the context of mechanism design. We explicitly argue for the first time that, in such domains, approximation can be leveraged to obtain truthfulness without resorting to payments. This stands in contrast to previous work where payments are almost ubiquitous and (more often than not) approximation is a necessary evil that is required to circumvent computational complexity. We present a case study in approximate mechanism design without money. In our basic setting, agents are located on the real line and the mechanism must select the location of a public facility; the cost of an agent is its distance to the facility. We establish tight upper and lower bounds for the approximation ratio given by strategyproof mechanisms without payments, with respect to both deterministic and randomized mechanisms, under two objective functions: the social cost and the maximum cost. We then extend our results in two natural directions: a domain where two facilities must be located and a domain where each agent controls multiple locations." ] }
1603.00562
2951642328
We consider the problem of designing revenue-optimal auctions for selling two items and bidders' valuations are independent among bidders but negatively correlated among items. In this paper, we obtain the closed-form optimal auction for this setting, by directly addressing the two difficulties above. In particular, the first difficulty is that when pointwise maximizing virtual surplus under multi-dimensional feasibility (i.e., the Border feasibility), (1) neither the optimal interim allocation is trivially monotone in the virtual value, (2) nor the virtual value is monotone in the bidder's type. As a result, the optimal interim allocations resulting from virtual surplus maximization no longer guarantees BIC. To address (1), we prove a generalization of Border's theorem and show that optimal interim allocation is indeed monotone in the virtual value. To address (2), we adapt Myerson's ironing procedure to this setting by redefining the (ironed) virtual value as a function of the lowest utility point. The second difficulty, perhaps a more challenging one, is that the lowest utility type in general is no longer at the endpoints of the type interval. To address this difficulty, we show by construction that there exist an allocation rule and an induced lowest utility type such that they form a solution of the virtual surplus maximization and in the meanwhile guarantees IIR. In the single bidder case, the optimal auction consists of a randomized bundle menu and a deterministic bundle menu; while in the multiple bidder case, the optimal auction is a randomization between two extreme mechanisms. The optimal solutions of our setting can be implemented by a Bayesian IC and IR auction, however, perhaps surprisingly, the revenue of this auction cannot be achieved by any (dominant-strategy) IC and IR auction.
@cite_7 considers a setting where the bidders' valuations are positively correlated between the items and are both weakly increasing with respect to a one-dimensional type. Levin shows that, under a regularity condition, this case can be solved using Myerson's method directly. Our work can be seen as complementing Levin's positive-correlation setting with the negative-correlation one.
{ "cite_N": [ "@cite_7" ], "mid": [ "1966404657" ], "abstract": [ "Abstract This paper considers the optimal selling mechanism for complementary items. When buyers are perfectly symmetric, the optimal procedure is to bundle the items and run a standard auction. In general, however, bundling the items is not necessarily desirable, and the standard auctions do not maximize revenue. Moreover, the optimal auction allocation may not be socially efficient since the auction must discriminate against bidders who have strong incentives to misrepresent their true preferences. Journal of Economic Literature Classification Number: D44." ] }
1603.00562
2951642328
We consider the problem of designing revenue-optimal auctions for selling two items, where bidders' valuations are independent among bidders but negatively correlated among items. In this paper, we obtain the closed-form optimal auction for this setting by directly addressing the two difficulties above. In particular, the first difficulty is that when pointwise maximizing virtual surplus under multi-dimensional feasibility (i.e., the Border feasibility), (1) the optimal interim allocation is not trivially monotone in the virtual value, and (2) the virtual value is not monotone in the bidder's type. As a result, the optimal interim allocations resulting from virtual surplus maximization no longer guarantee BIC. To address (1), we prove a generalization of Border's theorem and show that the optimal interim allocation is indeed monotone in the virtual value. To address (2), we adapt Myerson's ironing procedure to this setting by redefining the (ironed) virtual value as a function of the lowest utility point. The second difficulty, perhaps a more challenging one, is that the lowest utility type in general is no longer at the endpoints of the type interval. To address this difficulty, we show by construction that there exist an allocation rule and an induced lowest utility type such that they form a solution of the virtual surplus maximization while guaranteeing IIR. In the single-bidder case, the optimal auction consists of a randomized bundle menu and a deterministic bundle menu; in the multiple-bidder case, the optimal auction is a randomization between two extreme mechanisms. The optimal solutions of our setting can be implemented by a Bayesian IC and IR auction; however, perhaps surprisingly, the revenue of this auction cannot be achieved by any (dominant-strategy) IC and IR auction.
At a higher level, our work is within the agenda of multidimensional revenue maximization. In particular, our work is in the direction of exactly optimal mechanism design @cite_14 @cite_10 @cite_27 @cite_1 , orthogonal to those aiming for approximate optimality @cite_8 @cite_4 @cite_23 @cite_25 @cite_24 @cite_11 .
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_8", "@cite_1", "@cite_24", "@cite_27", "@cite_23", "@cite_10", "@cite_25", "@cite_11" ], "mid": [ "2275043201", "203000141", "2118442327", "2133106342", "2088970293", "2038821244", "2169604085", "", "1718157382", "1495768673" ], "abstract": [ "Optimal mechanisms have been provided in quite general multi-item settings [ 2012b, as long as each bidder's type distribution is given explicitly by listing every type in the support along with its associated probability. In the implicit setting, e.g. when the bidders have additive valuations with independent and or continuous values for the items, these results do not apply, and it was recently shown that exact revenue optimization is intractable, even when there is only one bidder [ 2013]. Even for item distributions with special structure, optimal mechanisms have been surprisingly rare [Manelli and Vincent 2006] and the problem is challenging even in the two-item case [Hart and Nisan 2012]. In this paper, we provide a framework for designing optimal mechanisms using optimal transport theory and duality theory. We instantiate our framework to obtain conditions under which only pricing the grand bundle is optimal in multi-item settings (complementing the work of [Manelli and Vincent 2006]), as well as to characterize optimal two-item mechanisms. We use our results to derive closed-form descriptions of the optimal mechanism in several two-item settings, exhibiting also a setting where a continuum of lotteries is necessary for revenue optimization but a closed-form representation of the mechanism can still be found efficiently using our framework.", "The VCG mechanism is the gold standard for combinatorial auctions (CAs), and it maximizes social welfare. In contrast, the revenue-maximizing (aka optimal) CA is unknown, and designing one is NP-hard. Therefore, research on optimal CAs has progressed into special settings. Notably, Levin [1997] derived the optimal CA for complements when each agent's private type is one-dimensional. (This does not fall inside the well-studied \"single-parameter environment\".) We introduce a new research avenue for increasing revenue where we poke holes in the allocation space--based on the bids--and then use a welfare-maximizing allocation rule within the remaining allocation set. In this paper, the first step down this avenue, we introduce a new form of \"reserve pricing\" into CAs. We show that Levin's optimal revenue can be 2-approximated by using \"monopoly reserve prices\" to curtail the allocation set, followed by welfare-maximizing allocation and Levin's payment rule. A key lemma of potential independent interest is that the expected revenue from any truthful allocation-monotonic mechanism equals the expected virtual valuation; this generalizes Myerson's lemma [1981] from the single-parameter environment. Our mechanism is close to the gold standard and thus easier to adopt than Levin's. It also requires less information about the prior over the bidders' types, and is always more efficient. Finally, we show that the optimal revenue can be 6- approximated even if the \"reserve pricing\" is required to be symmetric across bidders.", "The monopolist's theory of optimal single-item auctions for agents with independent private values can be summarized by two statements. The first is from Myerson [8]: the optimal auction is Vickrey with a reserve price. 
The second is from Bulow and Klemperer [1]: it is better to recruit one more bidder and run the Vickrey auction than to run the optimal auction. These results hold for single-item auctions under the assumption that the agents' valuations are independently and identically drawn from a distribution that satisfies a natural (and prevalent) regularity condition. These fundamental guarantees for the Vickrey auction fail to hold in general single-parameter agent mechanism design problems. We give precise (and weak) conditions under which approximate analogs of these two results hold, thereby demonstrating that simple mechanisms remain almost optimal in quite general single-parameter agent settings.", "Optimal mechanisms for agents with multi-dimensional preferences are generally complex. This complexity makes them challenging to solve for and impractical to run. In a typical mechanism design approach, a model is posited and then the optimal mechanism is designed for the model. Successful mechanism design gives mechanisms that one could at least imagine running. By this measure, multi-dimensional mechanism design has had only limited success. In this paper we take the opposite approach, which we term reverse mechanism design. We start by hypothesizing the optimality of a particular form of mechanism that is simple and reasonable to run, then we solve for sufficient conditions for the mechanism to be optimal (among all mechanisms). This paper has two main contributions. The first is in codifying the method of virtual values from single-dimensional auction theory and extending it to agents with multidimensional preferences. The second is in applying this method to two paradigmatic classes of multi-dimensional preferences. The first class is unit-demand preferences (e.g., a homebuyer who wishes to buy at most one house); for this class we give sufficient conditions under which posting a uniform price for each item is optimal. This result generalizes one of [2013] for a consumer with values uniform on interval [0; 1], and contrasts with an example of Thanassoulis [2004] for a consumer with values uniform on interval [5; 6] where uniform pricing is not optimal. The second class is additive preferences, for this class we give sufficient conditions under which posting a price for the grand bundle is optimal. This result generalizes a recent result of Hart and Nisan [2012] and relates to work of Armstrong [1999]. Similarly to an approach of [2013], these results for single-agent pricing problems can be generalized naturally to multi-agent auction problems.", "Abstract Consider the revenue-maximizing problem in which a single seller wants to sell k different items to a single buyer, who has independently distributed values for the items with additive valuation. The case was completely resolved by Myerson’s classical work in 1981, whereas for larger k the problem has been the subject of much research efforts ever since. Recently, Hart and Nisan analyzed two simple mechanisms: selling the items separately, or selling them as a single bundle. They showed that selling separately guarantees at least a fraction of the optimal revenue; and for identically distributed items, bundling yields at least a fraction of the optimal revenue. In this paper, we prove that selling separately guarantees at least fraction of the optimal revenue, whereas for identically distributed items, bundling yields at least a constant fraction of the optimal revenue. 
These bounds are tight (up to a constant factor), settling the open questions raised by Hart and Nisan. The results are valid for arbitrary probability distributions without restrictions. Our results also have implications on other interesting issues, such as monotonicity and randomization of selling mechanisms.", "We solve the optimal multi-dimensional mechanism design problem when either the number of bidders is a constant or the number of items is a constant. In the first setting, we need that the values of each bidder for the items are i.i.d., but allow different distributions for each bidder. In the second setting, we allow the values of each bidder for the items to be arbitrarily correlated, but assume that the bidders are i.i.d. For all ε > 0, we obtain an efficient additive ε-approximation, when the value distributions are bounded, or a multiplicative (1--ε)-approximation when the value distributions are unbounded, but satisfy the Monotone Hazard Rate condition. When there is a single bidder, we generalize these results to independent but not necessarily identically distributed value distributions, and to independent regular distributions.", "Myerson's classic result provides a full description of how a seller can maximize revenue when selling a single item. We address the question of revenue maximization in the simplest possible multi-item setting: two items and a single buyer who has independently distributed values for the items, and an additive valuation. In general, the revenue achievable from selling two independent items may be strictly higher than the sum of the revenues obtainable by selling each of them separately. In fact, the structure of optimal (i.e., revenue-maximizing) mechanisms for two items even in this simple setting is not understood. In this paper we obtain approximate revenue optimization results using two simple auctions: that of selling the items separately, and that of selling them as a single bundle. Our main results (which are of a "direct sum" variety, and apply to any distributions) are as follows. Selling the items separately guarantees at least half the revenue of the optimal auction; for identically distributed items, this becomes at least 73% of the optimal revenue. For the case of k > 2 items, we show that selling separately guarantees at least a c/log^2(k) fraction of the optimal revenue; for identically distributed items, the bundling auction yields at least a c/log(k) fraction of the optimal revenue.", "", "Revenue maximization in multi-item settings is notoriously elusive. This paper studies a class of two-item auctions which we call a mixed-bundling auction with reserve prices (MBARP). It calls VCG on an enlarged set of agents by adding the seller---who has reserve valuations for each bundle of items---and a fake agent who receives nothing nor has valuations for any item or bundle, but has a valuation for pure bundling allocations, i.e., allocations where the two items are allocated to a single agent. This is a strict subclass of several known classes of auctions, including the affine maximizer auction (AMA), λ-auction, and the virtual valuations combinatorial auction (VVCA). As we show, a striking feature of MBARP is that its revenue can be represented in a simple closed form as a function of the parameters. Thus, we can solve first-order conditions on the parameters and obtain the optimal MBARP. 
The optimal MBARP yields significantly higher revenue than prior auctions for which the revenue-maximizing parameters could be solved for in closed form: separate Myerson auctions, pure-bundling Myerson auction, VCG, and mixed-bundling auction without reserve prices. Its revenue even exceeds that obtained via simulation within broader classes: VVCA and AMA.", "In this paper, we introduce a novel approach for reducing the k-item n-bidder auction with additive valuation to k-item 1-bidder auctions. This approach, called the Best-Guess reduction, can be applied to address several central questions in optimal revenue auction theory such as the relative strength of simple versus complex mechanisms, the power of randomization, and Bayesian versus dominant-strategy implementations. First, when the items have independent valuation distributions, we present a deterministic mechanism called Deterministic Best-Guess that yields at least a constant fraction of the optimal revenue by any randomized mechanism. This also gives the first simple mechanism that achieves constant fraction optimal revenue for such multi-buyer multi-item auctions. Second, if all the nk valuation random variables are independent, the optimal revenue achievable in dominant strategy incentive compatibility (DSIC) is shown to be at least a constant fraction of that achievable in Bayesian incentive compatibility (BIC). Third, when all the nk values are identically distributed according to a common one-dimensional distribution F, the optimal revenue is shown to be expressible in the closed form $\Theta\big(k\,(r + \int_0^{mr}(1 - F(x)^n)\,dx)\big)$ where $r = \sup_{x \ge 0} x\,(1 - F(x)^n)$ and $m = \lfloor k/(nr) \rfloor$; this revenue is achievable by a simple mechanism called 2nd-Price Bundling. All our results apply to arbitrary distributions, regular or irregular." ] }
1603.00562
2951642328
We consider the problem of designing revenue-optimal auctions for selling two items, where bidders' valuations are independent among bidders but negatively correlated among items. In this paper, we obtain the closed-form optimal auction for this setting by directly addressing the two difficulties above. In particular, the first difficulty is that when pointwise maximizing virtual surplus under multi-dimensional feasibility (i.e., the Border feasibility), (1) the optimal interim allocation is not trivially monotone in the virtual value, and (2) the virtual value is not monotone in the bidder's type. As a result, the optimal interim allocations resulting from virtual surplus maximization no longer guarantee BIC. To address (1), we prove a generalization of Border's theorem and show that the optimal interim allocation is indeed monotone in the virtual value. To address (2), we adapt Myerson's ironing procedure to this setting by redefining the (ironed) virtual value as a function of the lowest utility point. The second difficulty, perhaps a more challenging one, is that the lowest utility type in general is no longer at the endpoints of the type interval. To address this difficulty, we show by construction that there exist an allocation rule and an induced lowest utility type such that they form a solution of the virtual surplus maximization while guaranteeing IIR. In the single-bidder case, the optimal auction consists of a randomized bundle menu and a deterministic bundle menu; in the multiple-bidder case, the optimal auction is a randomization between two extreme mechanisms. The optimal solutions of our setting can be implemented by a Bayesian IC and IR auction; however, perhaps surprisingly, the revenue of this auction cannot be achieved by any (dominant-strategy) IC and IR auction.
There are a number of papers that consider the equivalence between BIC and DIC with respect to the objective of social welfare @cite_16 @cite_22 . For the objective of revenue maximization, @cite_26 proves that in the independent private values model with linear utility, the outcome of any BIC mechanism, in terms of its interim allocation rule and interim payment rule, can also be obtained with a DIC mechanism (without any restriction on IR). Since the interim allocation and payment rules are the same under the two IC notions, their work implies that any interim IR and BIC auction can be implemented as an interim IR and DIC auction. Note, however, that their result does not imply the equivalence between ex post IR and BIC mechanisms and ex post IR and DIC mechanisms, because equivalence in interim utilities does not imply equivalence in ex post utilities under the two notions of IC.
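For concreteness, the interim quantities referred to here are the standard ones (textbook definitions, not specific to @cite_26): for an allocation rule $X_i$ and payment rule $P_i$,

$$x_i(v_i) = \mathbb{E}_{v_{-i}}\big[X_i(v_i, v_{-i})\big], \qquad p_i(v_i) = \mathbb{E}_{v_{-i}}\big[P_i(v_i, v_{-i})\big],$$

so that, with linear utility, BIC requires $v_i\,x_i(v_i) - p_i(v_i) \ge v_i\,x_i(v_i') - p_i(v_i')$ for all $v_i, v_i'$, and interim IR requires $v_i\,x_i(v_i) - p_i(v_i) \ge 0$. Two mechanisms sharing the same $(x_i, p_i)$ give every type the same interim utility, yet their ex post utilities $v_i\,X_i(v) - P_i(v)$ can still differ, which is exactly why the interim equivalence above does not settle the ex post IR question.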
{ "cite_N": [ "@cite_16", "@cite_22", "@cite_26" ], "mid": [ "2950082028", "2067758411", "1488995906" ], "abstract": [ "Border's theorem gives an intuitive linear characterization of the feasible interim allocation rules of a Bayesian single-item environment, and it has several applications in economic and algorithmic mechanism design. All known generalizations of Border's theorem either restrict attention to relatively simple settings, or resort to approximation. This paper identifies a complexity-theoretic barrier that indicates, assuming standard complexity class separations, that Border's theorem cannot be extended significantly beyond the state-of-the-art. We also identify a surprisingly tight connection between Myerson's optimal auction theory, when applied to public project settings, and some fundamental results in the analysis of Boolean functions.", "The authors consider auctions for a single indivisible object when bidders have information about each other that is unavailable to the seller. They show that the seller can use this information to his own benefit, and they characterize th e environments in which a well-chosen auction gives him the same expected payoff as that obtainable were he able to see the object und er full information. This hinges on the possibility of constructing lotteries with the correct properties. The authors study the problem for auctions where the bidders have dominant strategies and those where the relevant equilibrium concept is Bayesian-Nash. Copyright 1988 by The Econometric Society.", "We prove—in the standard independent private-values model—that the outcome, in terms of interim expected probabilities of trade and interim expected transfers, of any Bayesian mechanism can also be obtained with a dominant-strategy mechanism." ] }
1603.00562
2951642328
We consider the problem of designing revenue-optimal auctions for selling two items, where bidders' valuations are independent among bidders but negatively correlated among items. In this paper, we obtain the closed-form optimal auction for this setting by directly addressing the two difficulties above. In particular, the first difficulty is that when pointwise maximizing virtual surplus under multi-dimensional feasibility (i.e., the Border feasibility), (1) the optimal interim allocation is not trivially monotone in the virtual value, and (2) the virtual value is not monotone in the bidder's type. As a result, the optimal interim allocations resulting from virtual surplus maximization no longer guarantee BIC. To address (1), we prove a generalization of Border's theorem and show that the optimal interim allocation is indeed monotone in the virtual value. To address (2), we adapt Myerson's ironing procedure to this setting by redefining the (ironed) virtual value as a function of the lowest utility point. The second difficulty, perhaps a more challenging one, is that the lowest utility type in general is no longer at the endpoints of the type interval. To address this difficulty, we show by construction that there exist an allocation rule and an induced lowest utility type such that they form a solution of the virtual surplus maximization while guaranteeing IIR. In the single-bidder case, the optimal auction consists of a randomized bundle menu and a deterministic bundle menu; in the multiple-bidder case, the optimal auction is a randomization between two extreme mechanisms. The optimal solutions of our setting can be implemented by a Bayesian IC and IR auction; however, perhaps surprisingly, the revenue of this auction cannot be achieved by any (dominant-strategy) IC and IR auction.
@cite_20 extend the above work to general multi-dimensional, possibly non-linear settings. They provide a setting where, under a restricted definition of mechanism, the ``optimal DIC mechanism'' produces strictly less revenue than the ``optimal BIC mechanism''.
{ "cite_N": [ "@cite_20" ], "mid": [ "1719407338" ], "abstract": [ "We consider a standard social choice environment with linear utilities and independent, one-dimensional, private types. We prove that for any Bayesian incentive compatible mechanism there exists an equivalent dominant strategy incentive compatible mechanism that delivers the same interim expected utilities for all agents and the same ex ante expected social surplus. The short proof is based on an extension of an elegant result due to Gutmann, Kemperman, Reeds, and Shepp (1991). We also show that the equivalence between Bayesian and dominant strategy implementation generally breaks down when the main assumptions underlying the social choice model are relaxed or when the equivalence concept is strengthened to apply to interim expected allocations." ] }
1603.00656
2288533078
The software of robotic assistants needs to be verified, to ensure its safety and functional correctness. Testing in simulation allows a high degree of realism in the verification. However, generating tests that cover both interesting foreseen and unforeseen scenarios in human-robot interaction (HRI) tasks, while executing most of the code, remains a challenge. We propose the use of belief-desire-intention (BDI) agents in the test environment, to increase the level of realism and human-like stimulation of simulated robots. Artificial intelligence, such as agent theory, can be exploited for more intelligent test generation. An automated testbench was implemented for a simulation in Robot Operating System (ROS) and Gazebo, of a cooperative table assembly task between a humanoid robot and a person. Requirements were verified for this task, and some unexpected design issues were discovered, leading to possible code improvements. Our results highlight the practicality of BDI agents to automatically generate valid and human-like tests to get high code coverage, compared to hand-written directed tests, pseudorandom generation, and other variants of model-based test generation. Also, BDI agents allow the coverage of combined behaviours of the HRI system with more ease than writing temporal logic properties for model checking.
Testing of robotic systems can be performed in a real-life setting @cite_1 , completely in simulation @cite_9 , or in combinations of simulation and real components @cite_4 , i.e., hardware-in-the-loop. Our BDI approach offers a novel solution for the latter two cases. In our approach, we explore the code mainly to find and eliminate functional bugs, i.e., for safety and functional soundness, although runtime bugs can also be found by instrumenting the code with relevant assertion monitors.
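As an illustration of the assertion-monitor idea mentioned above, here is a minimal sketch in Python, assuming a ROS-based simulation; the topic name, message type, and safety threshold are invented for the example and are not taken from the paper's testbench.

```python
#!/usr/bin/env python
# Minimal sketch of an assertion monitor for a simulated HRI test.
# The topic '/human_robot_distance' and the threshold are assumptions
# made for illustration only.
import rospy
from std_msgs.msg import Float32

SAFETY_DISTANCE = 0.1  # metres; assumed "keep a safe hand distance" requirement

class AssertionMonitor:
    def __init__(self):
        self.violations = 0
        rospy.Subscriber('/human_robot_distance', Float32, self.check)

    def check(self, msg):
        # The assertion: the distance must never drop below the
        # threshold while the test runs; violations are counted.
        if msg.data < SAFETY_DISTANCE:
            self.violations += 1
            rospy.logwarn('Safety assertion violated: d=%.3f m', msg.data)

if __name__ == '__main__':
    rospy.init_node('assertion_monitor')
    monitor = AssertionMonitor()
    rospy.spin()
```

Running such a node alongside the simulation turns a requirement into a runtime check, without modifying the robot code itself.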
{ "cite_N": [ "@cite_9", "@cite_1", "@cite_4" ], "mid": [ "308585474", "2145257696", "1774952121" ], "abstract": [ "In robotics, a reliable simulation tool is an important design and test resource because the performance of algorithms is evaluated before being implemented in real mobile robots. The virtual environment makes it possible to conduct extensive experiments in controlled scenarios, without the dependence of a physical platform, in a faster and inexpensive way. Although, simulators should be able to represent all the relevant characteristics that are present in the real environment, like dynamic (shape, mass, surface friction, etc.), impact simulation, realistic noise, among other factors, in order to guarantee the accuracy and reliability of the results.", "Abstract Context Testing complex industrial robots (CIRs) requires testing several interacting control systems. This is challenging, especially for robots performing process-intensive tasks such as painting or gluing, since their dedicated process control systems can be loosely coupled with the robot’s motion control. Objective Current practices for validating CIRs involve manual test case design and execution. To reduce testing costs and improve quality assurance, a trend is to automate the generation of test cases. Our work aims to define a cost-effective automated testing technique to validate CIR control systems in an industrial context. Method This paper reports on a methodology, developed at ABB Robotics in collaboration with SIMULA, for the fully automated testing of CIRs control systems. Our approach draws on continuous integration principles and well-established constraint-based testing techniques. It is based on a novel constraint-based model for automatically generating test sequences where test sequences are both generated and executed as part of a continuous integration process. Results By performing a detailed analysis of experimental results over a simplified version of our constraint model, we determine the most appropriate parameterization of the operational version of the constraint model. This version is now being deployed at ABB Robotics’s CIR testing facilities and used on a permanent basis. This paper presents the empirical results obtained when automatically generating test sequences for CIRs at ABB Robotics. In a real industrial setting, the results show that our methodology is not only able to detect reintroduced known faults, but also to spot completely new faults. Conclusion Our empirical evaluation shows that constraint-based testing is appropriate for automatically generating test sequences for CIRs and can be faithfully deployed in an industrial context.", "Developing control software for teams of autonomous mobile robots is a challenging task, which can be facilitated using frameworks with ready to use components. But testing and debugging the resulting system as teached in modern software engineering to be free of errors and tolerant to sensor noise in a real world scenario is to a large extend beyond the scope of current approaches. In this paper multilevel testing strategies using the developed frameworks RoboFrame and MuRoSimF are presented. Testing incorporating automated tests, online and offline analysis and software-in-the-loop (SIL) tests in combination with real robot hardware or an adequate simulation are highly facilitated by the two frameworks. Thus the efficiency of validation of complex real world applications is improved. 
In this way potential errors can be identified early in the development process and error situations in real world operations can be reduced significantly." ] }
1603.00656
2288533078
The software of robotic assistants needs to be verified, to ensure its safety and functional correctness. Testing in simulation allows a high degree of realism in the verification. However, generating tests that cover both interesting foreseen and unforeseen scenarios in human-robot interaction (HRI) tasks, while executing most of the code, remains a challenge. We propose the use of belief-desire-intention (BDI) agents in the test environment, to increase the level of realism and human-like stimulation of simulated robots. Artificial intelligence, such as agent theory, can be exploited for more intelligent test generation. An automated testbench was implemented for a simulation in Robot Operating System (ROS) and Gazebo, of a cooperative table assembly task between a humanoid robot and a person. Requirements were verified for this task, and some unexpected design issues were discovered, leading to possible code improvements. Our results highlight the practicality of BDI agents to automatically generate valid and human-like tests to get high code coverage, compared to hand-written directed tests, pseudorandom generation, and other variants of model-based test generation. Also, BDI agents allow the coverage of combined behaviours of the HRI system with more ease than writing temporal logic properties for model checking.
Test generation research has focused on applications where the tests draw on relatively small sets of data types, e.g., a timing sequence for controllers @cite_1 , images produced to verify image processing software (http://development.objectvideo.com/index.html), or a set of state-space inputs for a controller @cite_24 . In our approach, the inputs to the simulator are combinations of these and several other types. Our generation problem is thus much more complex, which is why we use a two-tiered approach, from abstract to concrete tests.
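A minimal sketch of what such two-tiered, abstract-to-concrete generation could look like; the action names, timing profiles, and ranges below are illustrative assumptions, not the paper's actual model.

```python
# Sketch of two-tiered test generation: an abstract test constrains
# high-level choices; concretisation then samples low-level parameters.
# All field names and value ranges are invented for illustration.
import random

ABSTRACT_TESTS = [
    # (human action sequence, timing profile)
    (["send_piece", "wait", "send_piece"], "slow"),
    (["wait", "send_piece", "send_piece"], "fast"),
]

def concretise(abstract_test, seed=None):
    """Expand one abstract test into concrete simulator stimuli."""
    rng = random.Random(seed)
    actions, profile = abstract_test
    delay_range = (2.0, 5.0) if profile == "slow" else (0.2, 1.0)
    return [
        {"action": a, "delay_s": round(rng.uniform(*delay_range), 2)}
        for a in actions
    ]

if __name__ == "__main__":
    for t in ABSTRACT_TESTS:
        print(concretise(t, seed=42))
```

The point of the split is that solving or enumerating at the abstract tier stays cheap, while the concrete tier supplies the many low-level values a simulator actually needs.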
{ "cite_N": [ "@cite_24", "@cite_1" ], "mid": [ "2082784420", "2145257696" ], "abstract": [ "The problem of testing complex reactive control systems and validating the effectiveness of multi-agent controllers is addressed. Testing and validation involve searching for conditions that lead to system failure by exploring all adversarial inputs and disturbances for errant trajectories. This problem of testing is related to motion planning. In both cases, there is a goal or specification set consisting of a set of points in state space that is of interest, either for finding a plan, demonstrating failure or for validation. Unlike motion planning problems, the problem of testing generally involves systems that are not controllable with respect to disturbances or adversarial inputs and therefore, the reachable set of states is a small subset of the entire state space. In this work, sampling-based algorithms based on the Rapidly-exploring Random Trees (RRT) algorithm are applied to the testing and validation problem. First, some of the factors that govern the exploration rate of the RRT algorithm are analysed, this analysis serving to motivate some enhancements. Then, three modifications to the original RRT algorithm are proposed, suited for use on uncontrollable systems. First, a new distance function is introduced which incorporates information about the system's dynamics to select nodes for extension. Second, a weighting is introduced to penalize nodes which are repeatedly selected but fail to extend.Third, a scheme for adaptively modifying the sampling probability distribution is proposed, based on tree growth. Application of the algorithm is demonstrated using several examples, and computational statistics are provided to illustrate the effect of each modification. The final algorithm is demonstrated on a 25 state example and results in nearly an order of magnitude reduction in computation time when compared with the traditional RRT. The proposed algorithms are also applicable to motion planning for systems that are not small time locally controllable.", "Abstract Context Testing complex industrial robots (CIRs) requires testing several interacting control systems. This is challenging, especially for robots performing process-intensive tasks such as painting or gluing, since their dedicated process control systems can be loosely coupled with the robot’s motion control. Objective Current practices for validating CIRs involve manual test case design and execution. To reduce testing costs and improve quality assurance, a trend is to automate the generation of test cases. Our work aims to define a cost-effective automated testing technique to validate CIR control systems in an industrial context. Method This paper reports on a methodology, developed at ABB Robotics in collaboration with SIMULA, for the fully automated testing of CIRs control systems. Our approach draws on continuous integration principles and well-established constraint-based testing techniques. It is based on a novel constraint-based model for automatically generating test sequences where test sequences are both generated and executed as part of a continuous integration process. Results By performing a detailed analysis of experimental results over a simplified version of our constraint model, we determine the most appropriate parameterization of the operational version of the constraint model. This version is now being deployed at ABB Robotics’s CIR testing facilities and used on a permanent basis. 
This paper presents the empirical results obtained when automatically generating test sequences for CIRs at ABB Robotics. In a real industrial setting, the results show that our methodology is not only able to detect reintroduced known faults, but also to spot completely new faults. Conclusion Our empirical evaluation shows that constraint-based testing is appropriate for automatically generating test sequences for CIRs and can be faithfully deployed in an industrial context." ] }
1603.00656
2288533078
The software of robotic assistants needs to be verified, to ensure its safety and functional correctness. Testing in simulation allows a high degree of realism in the verification. However, generating tests that cover both interesting foreseen and unforeseen scenarios in human-robot interaction (HRI) tasks, while executing most of the code, remains a challenge. We propose the use of belief-desire-intention (BDI) agents in the test environment, to increase the level of realism and human-like stimulation of simulated robots. Artificial intelligence, such as agent theory, can be exploited for more intelligent test generation. An automated testbench was implemented for a simulation in Robot Operating System (ROS) and Gazebo, of a cooperative table assembly task between a humanoid robot and a person. Requirements were verified for this task, and some unexpected design issues were discovered, leading to possible code improvements. Our results highlight the practicality of BDI agents to automatically generate valid and human-like tests to get high code coverage, compared to hand-written directed tests, pseudorandom generation, and other variants of model-based test generation. Also, BDI agents allow the coverage of combined behaviours of the HRI system with more ease than writing temporal logic properties for model checking.
Constraint solving requires mathematical models of the inputs that stimulate the system (code), in the form of constraint programs or optimization programs to solve @cite_1 . Heuristics are needed to help the solvers, e.g., variable orderings. Search methods are an alternative for solving constraint programs or optimization problems @cite_16 @cite_24 @cite_20 @cite_29 . Nevertheless, heuristics to guide the search are needed, e.g., cost functions. Hybrid-systems approaches require formulating the test generation problem as a hybrid model (e.g., hybrid automata) @cite_7 @cite_20 , which in practice demands a great deal of abstraction and manual effort.
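To make the role of cost functions concrete, here is a toy sketch of search-based test generation; the system under test, its defect, and the distance heuristic are stand-ins invented for the example, not the robot code discussed here.

```python
# Sketch of search-based test generation: a cost function guides a
# simple (1+1)-style stochastic search towards inputs that violate a
# property. Everything below is a toy stand-in for illustration.
import random

def system_under_test(x):
    # Toy controller output with a defect for inputs near 7.3
    return 0.0 if abs(x - 7.3) < 0.05 else 1.0

def cost(x):
    # Distance-to-failure heuristic: lower means closer to the defect.
    # In practice this would be derived from requirements, not from
    # knowledge of the bug's location.
    return abs(x - 7.3)

def search(budget=1000, lo=0.0, hi=10.0):
    best = random.uniform(lo, hi)
    for _ in range(budget):
        cand = min(hi, max(lo, best + random.gauss(0, 0.5)))
        if cost(cand) < cost(best):
            best = cand
        if system_under_test(best) == 0.0:
            return best  # failing test input found
    return None

if __name__ == "__main__":
    print("failing input:", search())
```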
{ "cite_N": [ "@cite_7", "@cite_29", "@cite_1", "@cite_24", "@cite_16", "@cite_20" ], "mid": [ "1508536183", "", "2145257696", "2082784420", "2114869486", "2080932760" ], "abstract": [ "Testing is an important tool for validation of the system design and its implementation. Model-based test generation allows to systematically ascertain whether the system meets its design requirements, particularly the safety and correctness requirements of the system. In this paper, we develop a framework for generating tests from hybrid systems' models. The core idea of the framework is to develop a notion of robust test, where one nominal test can be guaranteed to yield the same qualitative behavior with any other test that is close to it. Our approach offers three distinct advantages. 1) It allows for computing and formally quantifying the robustness of some properties, 2) it establishes a method to quantify the test coverage for every test case, and 3) the procedure is parallelizable and therefore, very scalable. We demonstrate our framework by generating tests for a navigation benchmark application.", "", "Abstract Context Testing complex industrial robots (CIRs) requires testing several interacting control systems. This is challenging, especially for robots performing process-intensive tasks such as painting or gluing, since their dedicated process control systems can be loosely coupled with the robot’s motion control. Objective Current practices for validating CIRs involve manual test case design and execution. To reduce testing costs and improve quality assurance, a trend is to automate the generation of test cases. Our work aims to define a cost-effective automated testing technique to validate CIR control systems in an industrial context. Method This paper reports on a methodology, developed at ABB Robotics in collaboration with SIMULA, for the fully automated testing of CIRs control systems. Our approach draws on continuous integration principles and well-established constraint-based testing techniques. It is based on a novel constraint-based model for automatically generating test sequences where test sequences are both generated and executed as part of a continuous integration process. Results By performing a detailed analysis of experimental results over a simplified version of our constraint model, we determine the most appropriate parameterization of the operational version of the constraint model. This version is now being deployed at ABB Robotics’s CIR testing facilities and used on a permanent basis. This paper presents the empirical results obtained when automatically generating test sequences for CIRs at ABB Robotics. In a real industrial setting, the results show that our methodology is not only able to detect reintroduced known faults, but also to spot completely new faults. Conclusion Our empirical evaluation shows that constraint-based testing is appropriate for automatically generating test sequences for CIRs and can be faithfully deployed in an industrial context.", "The problem of testing complex reactive control systems and validating the effectiveness of multi-agent controllers is addressed. Testing and validation involve searching for conditions that lead to system failure by exploring all adversarial inputs and disturbances for errant trajectories. This problem of testing is related to motion planning. 
In both cases, there is a goal or specification set consisting of a set of points in state space that is of interest, either for finding a plan, demonstrating failure or for validation. Unlike motion planning problems, the problem of testing generally involves systems that are not controllable with respect to disturbances or adversarial inputs and therefore, the reachable set of states is a small subset of the entire state space. In this work, sampling-based algorithms based on the Rapidly-exploring Random Trees (RRT) algorithm are applied to the testing and validation problem. First, some of the factors that govern the exploration rate of the RRT algorithm are analysed, this analysis serving to motivate some enhancements. Then, three modifications to the original RRT algorithm are proposed, suited for use on uncontrollable systems. First, a new distance function is introduced which incorporates information about the system's dynamics to select nodes for extension. Second, a weighting is introduced to penalize nodes which are repeatedly selected but fail to extend. Third, a scheme for adaptively modifying the sampling probability distribution is proposed, based on tree growth. Application of the algorithm is demonstrated using several examples, and computational statistics are provided to illustrate the effect of each modification. The final algorithm is demonstrated on a 25 state example and results in nearly an order of magnitude reduction in computation time when compared with the traditional RRT. The proposed algorithms are also applicable to motion planning for systems that are not small time locally controllable.", "The use of metaheuristic search techniques for the automatic generation of test data has been a burgeoning interest for many researchers in recent years. Previous attempts to automate the test generation process have been limited, having been constrained by the size and complexity of software, and the basic fact that in general, test data generation is an undecidable problem. Metaheuristic search techniques offer much promise in regard to these problems. Metaheuristic search techniques are high-level frameworks, which utilise heuristics to seek solutions for combinatorial problems at a reasonable computational cost. To date, metaheuristic search techniques have been applied to automate test data generation for structural and functional testing; the testing of grey-box properties, for example safety constraints; and also non-functional properties, such as worst-case execution time. This paper surveys some of the work undertaken in this field, discussing possible new future directions of research for each of its different individual areas.", "In this paper, we describe a formal framework for conformance testing of continuous and hybrid systems, using the international standard "Formal Methods in Conformance Testing" (FMCT). We propose a novel test coverage measure for these systems, which is defined using the star discrepancy notion. This coverage measure is used to quantify the "validation completeness". It is also used to guide input stimulus generation by identifying the portions of the system behaviors that are not adequately examined. We then propose a test generation method, which is based on a robotic motion planning algorithm and is guided by the coverage measure. This method was implemented in a prototype tool that can handle high dimensional systems (up to 100 dimensions)." ] }
1603.00656
2288533078
The software of robotic assistants needs to be verified, to ensure its safety and functional correctness. Testing in simulation allows a high degree of realism in the verification. However, generating tests that cover both interesting foreseen and unforeseen scenarios in human-robot interaction (HRI) tasks, while executing most of the code, remains a challenge. We propose the use of belief-desire-intention (BDI) agents in the test environment, to increase the level of realism and human-like stimulation of simulated robots. Artificial intelligence, such as agent theory, can be exploited for more intelligent test generation. An automated testbench was implemented for a simulation in Robot Operating System (ROS) and Gazebo, of a cooperative table assembly task between a humanoid robot and a person. Requirements were verified for this task, and some unexpected design issues were discovered, leading to possible code improvements. Our results highlight the practicality of BDI agents to automatically generate valid and human-like tests to get high code coverage, compared to hand-written directed tests, pseudorandom generation, and other variants of model-based test generation. Also, BDI agents allow the coverage of combined behaviours of the HRI system with more ease than writing temporal logic properties for model checking.
Other model-based approaches seek to test models at the same level of abstraction as the model-based test generation @cite_5 , or focus on testing high-level functionality @cite_23 @cite_28 @cite_13 . For these, test generation is much easier to implement than for our testing problem, which targets the real robotics code in realistic HRI scenarios. Our model-based test generation approach is based on divide-and-conquer, simplifying the constraint-solving or search problem. A similar abstract-to-concrete process has been proposed for the synthesis of hybrid controllers that satisfy given properties @cite_17 @cite_27 .
{ "cite_N": [ "@cite_28", "@cite_27", "@cite_23", "@cite_5", "@cite_13", "@cite_17" ], "mid": [ "", "2112398472", "1594775721", "2107360681", "", "2134312936" ], "abstract": [ "", "This paper proposes a method for automatic generation of time-optimal robot motion trajectories for the task of collecting and moving a finite number of objects to particular spots in space, while maintaining predefined temporal logic constraints. The continuous robot dynamics change upon an object pick-up or drop-off. The temporal constraints are expressed as syntactically co-safe Linear Temporal Logic (scLTL) formulas over the set of object and drop-off sites. We propose an approach based on constructing a discrete abstraction of the hybrid system modeling the robot in the form of a finite weighted transition system. Then, by employing tools from automata-based model checking, we obtain an automaton containing only paths that satisfy the specification. The shortest path in this automaton is found by graph search and corresponds directly to the time-optimal hybrid trajectory. The method is applied to a case study with a mobile ground robot and a case study involving a quadrotor moving in an environment with obstacles, thus reflecting its computational advantage over a direct optimization approach.", "The use of autonomous systems, including cooperating agents, is indispensable in certain fields of application. Nevertheless, the verification of autonomous systems still represents a challenge due to lack of suitable modelling languages and verification techniques. To address these difficulties, different modelling languages allowing concurrency are compared. Coloured Petri Nets (CPNs) are further analysed and illustrated by means of an example modelling autonomous systems. Finally, some existing structural coverage concepts for Petri Nets are presented and extended by further criteria tailored to the characteristics of CPNs.", "This chapter presents principles and techniques for modelbased black-box conformance testing of real-time systems using the Uppaal model-checking tool-suite. The basis for testing is given as a network of concurrent timed automata specified by the test engineer. Relativized input output conformance serves as the notion of implementation correctness, essentially timed trace inclusion taking environment assumptions into account. Test cases can be generated offline and later executed, or they can be generated and executed online. For both approaches this chapter discusses how to specify test objectives, derive test sequences, apply these to the system under test, and assign a verdict.", "", "Robot motion planning algorithms have focused on low-level reachability goals taking into account robot kinematics, or on high level task planning while ignoring low-level dynamics. In this paper, we present an integrated approach to the design of closed–loop hybrid controllers that guarantee by construction that the resulting continuous robot trajectories satisfy sophisticated specifications expressed in the so–called Linear Temporal Logic. In addition, our framework ensures that the temporal logic specification is satisfied even in the presence of an adversary that may instantaneously reposition the robot within the environment a finite number of times. This is achieved by obtaining a Buchi automaton realization of the temporal logic specification, which supervises a finite family of continuous feedback controllers, ensuring consistency between the discrete plan and the continuous execution." ] }
1603.00656
2288533078
The software of robotic assistants needs to be verified, to ensure its safety and functional correctness. Testing in simulation allows a high degree of realism in the verification. However, generating tests that cover both interesting foreseen and unforeseen scenarios in human-robot interaction (HRI) tasks, while executing most of the code, remains a challenge. We propose the use of belief-desire-intention (BDI) agents in the test environment, to increase the level of realism and human-like stimulation of simulated robots. Artificial intelligence, such as agent theory, can be exploited for more intelligent test generation. An automated testbench was implemented for a simulation in Robot Operating System (ROS) and Gazebo, of a cooperative table assembly task between a humanoid robot and a person. Requirements were verified for this task, and some unexpected design issues were discovered, leading to possible code improvements. Our results highlight the practicality of BDI agents to automatically generate valid and human-like tests to get high code coverage, compared to hand-written directed tests, pseudorandom generation, and other variants of model-based test generation. Also, BDI agents allow the coverage of combined behaviours of the HRI system with more ease than writing temporal logic properties for model checking.
To the best of our knowledge, BDI agents have not previously been used as the modelling formalism for model-based test generation. A multi-agent framework was proposed in @cite_31 for model-based test generation in software testing, with agent programs in charge of exploring a UML model of the code and generating all the scenarios of the if-then-else conditions and branches. BDI agents have themselves been tested, with respect to the interaction behaviours of multi-agent systems @cite_30 , or as single agents (units) in terms of the correctness of their beliefs (e.g., value combinations), plans (e.g., triggering the correct plan according to the context), and events or messages (e.g., sending them at the right time) @cite_12 @cite_3 @cite_26 . In this paper we turn the tables and introduce BDI agents into the test environment, for intuitive and effective test generation.
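A minimal sketch of the kind of BDI reasoning cycle (beliefs, desires, intentions) such an environment agent runs when driving a simulated robot; the beliefs and the single plan are invented for a table-assembly-style scenario and do not reproduce any cited agent code.

```python
# Toy BDI-style agent used as a test-environment driver. Beliefs are
# updated from percepts, deliberation selects an applicable plan, and
# the resulting action becomes a stimulus for the simulated robot.
class BDIAgent:
    def __init__(self):
        self.beliefs = {"robot_requested_piece": False, "pieces_left": 3}
        self.desires = ["hand_over_all_pieces"]
        self.intentions = []

    def perceive(self, percept):
        self.beliefs.update(percept)

    def deliberate(self):
        # One hand-over plan suffices in this toy example; a real
        # agent would choose among many context-dependent plans.
        if (self.beliefs["robot_requested_piece"]
                and self.beliefs["pieces_left"] > 0):
            self.intentions.append("give_piece")

    def act(self):
        if self.intentions:
            action = self.intentions.pop(0)
            if action == "give_piece":
                self.beliefs["pieces_left"] -= 1
                self.beliefs["robot_requested_piece"] = False
            return action
        return "idle"

agent = BDIAgent()
agent.perceive({"robot_requested_piece": True})
agent.deliberate()
print(agent.act())  # -> "give_piece"
```

Because the agent reacts to what it believes about the robot, sequences of stimuli emerge from the interaction itself rather than from a fixed script, which is what makes the generated tests human-like.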
{ "cite_N": [ "@cite_30", "@cite_26", "@cite_3", "@cite_31", "@cite_12" ], "mid": [ "2048949155", "1983537749", "2388593457", "2005083693", "2174334857" ], "abstract": [ "Autonomous agents perform on behalf of the user to achieve defined goals or objectives. They are situated in dynamic environment and are able to operate autonomously to achieve their goals. In a multiagent system, agents cooperate with each other to achieve a common goal. Testing of multiagent systems is a challenging task due to the autonomous and proactive behavior of agents. However, testing is required to build confidence into the working of a multiagent system. Prometheus methodology is a commonly used approach to design multiagents systems. Systematic and thorough testing of each interaction is necessary. This paper proposes a novel approach to testing of multiagent systems based on Prometheus design artifacts. In the proposed approach, different interactions between the agent and actors are considered to test the multiagent system. These interactions include percepts and actions along with messages between the agents which can be modeled in a protocol diagram. The protocol diagram is converted into a protocol graph, on which different coverage criteria are applied to generate test paths that cover interactions between the agents. A prototype tool has been developed to generate test paths from protocol graph according to the specified coverage criterion.", "Software testing remains the most widely used approach to verification in industry today, consuming between 30-50 percent of the entire development cost. Test input selection for intelligent agents presents a problem due to the very fact that the agents are intended to operate robustly under conditions which developers did not consider and would therefore be unlikely to test. Using methods to automatically generate and execute tests is one way to provide coverage of many conditions without significantly increasing cost. However, one problem using automatic generation and execution of tests is the oracle problem: How can we automatically decide if observed program behavior is correct with respect to its specification? In this paper, we present a model-based oracle generation method for unit testing belief-desire-intention agents. We develop a fault model based on the features of the core units to capture the types of faults that may be encountered and define how to automatically generate a partial, passive oracle from the agent design models. We evaluate both the fault model and the oracle generation by testing 14 agent systems. Over 400 issues were raised, and these were analyzed to ascertain whether they represented genuine faults or were false positives. We found that over 70 percent of issues raised were indicative of problems in either the design or the code. Of the 19 checks performed by our oracle, faults were found by all but 5 of these checks. We also found that 8 out the 11 fault types identified in our fault model exhibited at least one fault. The evaluation indicates that the fault model is a productive conceptualization of the problems to be expected in agent unit testing and that the oracle is able to find a substantial number of such faults with relatively small overhead in terms of false positives.", "", "Abstract Software testing is done to find errors and fix them to improve quality. Software testing is a laborious and time–consuming work which spends almost 50 of the software system development resources. 
The number of test cases required to develop error-free software is high and when done manually the process of test case generation is time consuming and error prone. As systems are increasing in complexity, more systems perform mission-critical functions and the dependability requirements such as safety, reliability and security are vital to the users of the system. The competitive market place forces companies to define new approaches to reduce time-to-market as well as development cost of the systems. To increase the effectiveness and efficiency of the testing process and to reduce the overall development cost for software system, new approaches are required for test automation. The automation of test case generation is the most important aspect of test automation. No powerful test data generation tools combining structural and functional testing are commercially available today. The objective of the proposed system is to automate the process of test case generation for both structural and model based testing by deployment of an agent based framework. This framework automatically generates an optimized Test Suite for system under test that can assist the testers and developers and enable them to correct the errors in earlier stages of software development. The deployment of agent based framework reduces considerable amount of execution time when compared with the system without agents.", "Although agent technology is gaining world wide popularity, a hindrance to its uptake is the lack of proper testing mechanisms for agent based systems. While many traditional software testing methods can be generalized to agent systems, there are many aspects that are different and which require an understanding of the underlying agent paradigm. In this paper we present certain aspects of a testing framework that we have developed for agent based systems. The testing framework is a model based approach using the design models of the Prometheus agent development methodology. In this paper we focus on unit testing and identify the appropriate units, present mechanisms for generating suitable test cases and for determining the order in which the units are to be tested, present a brief overview of the unit testing process and an example. Although we use the design artefacts from Prometheus the approach is suitable for any plan and event based agent system." ] }
1603.00260
2257965379
In this article, I present the questions that I seek to answer in my PhD research. I propose to analyze natural language text with the help of semantic annotations and to mine important events for navigating large text corpora. Semantic annotations such as named entities, geographic locations, and temporal expressions can help us mine events from the given corpora. These events then provide useful means to discover the knowledge locked in the corpora. I pose three problems that can help unlock this knowledge vault in semantically annotated text corpora: i. identifying important events; ii. semantic search; and iii. event analytics.
Researchers have considered only temporal annotations in text corpora to improve retrieval effectiveness, by analyzing the time sensitivity of keyword queries and incorporating the time dimension into retrieval models. Some methods of analyzing time-sensitive queries rely on the publication dates of documents @cite_1 @cite_9 , while others also look at the temporal expressions in document contents @cite_15 . Several works also take the time dimension into account for re-ranking documents @cite_5 and diversifying them along time @cite_8 @cite_30 . One of the seminal works in extracting temporal events was by Ling and Weld @cite_23 . They outline a probabilistic model to solve the problem of extracting relations from text with temporal constraints.
{ "cite_N": [ "@cite_30", "@cite_8", "@cite_9", "@cite_1", "@cite_23", "@cite_5", "@cite_15" ], "mid": [ "", "157450042", "1486587593", "2044002869", "2123167824", "1511283908", "" ], "abstract": [ "", "Search result diversification is a common technique for tackling the problem of ambiguous and multi-faceted queries by maximizing query aspects or subtopics in a result list. In some special cases, subtopics associated to such queries can be temporally ambiguous, for instance, the query US Open is more likely to be targeting the tennis open in September, and the golf tournament in June. More precisely, users' search intent can be identified by the popularity of a subtopic with respect to the time where the query is issued. In this paper, we study search result diversification for time-sensitive queries, where the temporal dynamics of query subtopics are explicitly determined and modeled into result diversification. Unlike aforementioned work that, in general, considered only static subtopics, we leverage dynamic subtopics by analyzing two data sources i.e., query logs and a document collection. By using these data sources, it provides the insights from different perspectives of how query subtopics change over time. Moreover, we propose novel time-aware diversification methods that leverage the identified dynamic subtopics. A key idea is to re-rank search results based on the freshness and popularity of subtopics. To this end, our experimental results show that the proposed methods can significantly improve the diversity and relevance effectiveness for time-sensitive queries in comparison with state-of-the-art methods.", "Recent work on analyzing query logs shows that a significant fraction of queries are temporal, i.e., relevancy is dependent on time, and temporal queries play an important role in many domains, e.g., digital libraries and document archives. Temporal queries can be divided into two types: 1) those with temporal criteria explicitly provided by users, and 2) those with no temporal criteria provided. In this paper, we deal with the latter type of queries, i.e., queries that comprise only keywords, and their relevant documents are associated to particular time periods not given by the queries. We propose a number of methods to determine the time of queries using temporal language models. After that, we show how to increase the retrieval effectiveness by using the determined time of queries to re-rank the search results. Through extensive experiments we show that our proposed approaches improve retrieval effectiveness.", "Documents with timestamps, such as email and news, can be placed along a timeline. The timeline for a set of documents returned in response to a query gives an indication of how documents relevant to that query are distributed in time. Examining the timeline of a query result set allows us to characterize both how temporally dependent the topic is, as well as how relevant the results are likely to be. We outline characteristic patterns in query result set timelines, and show experimentally that we can automatically classify documents into these classes. We also show that properties of the query result set timeline can help predict the mean average precision of a query. 
These results show that meta-features associated with a query can be combined with text retrieval techniques to improve our understanding and treatment of text search on documents with timestamps.", "Research on information extraction (IE) seeks to distill relational tuples from natural language text, such as the contents of the WWW. Most IE work has focussed on identifying static facts, encoding them as binary relations. This is unfortunate, because the vast majority of facts are fluents, only holding true during an interval of time. It is less helpful to extract PresidentOf(Bill-Clinton, USA) without the temporal scope 1 20 93 - 1 20 01. This paper presents TIE, a novel, information-extraction system, which distills facts from text while inducing as much temporal information as possible. In addition to recognizing temporal relations between times and events, TIE performs global inference, enforcing transitivity to bound the start and ending times for each event. We introduce the notion of temporal entropy as a way to evaluate the performance of temporal IE systems and present experiments showing that TIE outperforms three alternative approaches.", "This work addresses information needs that have a temporal dimension conveyed by a temporal expression in the user’s query. Temporal expressions such as “in the 1990s” are frequent, easily extractable, but not leveraged by existing retrieval models. One challenge when dealing with them is their inherent uncertainty. It is often unclear which exact time interval a temporal expression refers to. We integrate temporal expressions into a language modeling approach, thus making them first-class citizens of the retrieval model and considering their inherent uncertainty. Experiments on the New York Times Annotated Corpus using Amazon Mechanical Turk to collect queries and obtain relevance assessments demonstrate that our approach yields substantial improvements in retrieval effectiveness.", "" ] }
1603.00260
2257965379
In this article, I present the questions that I seek to answer in my PhD research. I posit to analyze natural language text with the help of semantic annotations and mine important events for navigating large text corpora. Semantic annotations such as named entities, geographic locations, and temporal expressions can help us mine events from the given corpora. These events thus provide us with useful means to discover the locked knowledge in them. I pose three problems that can help unlock this knowledge vault in semantically annotated text corpora: i. identifying important events; ii. semantic search; iii. and event analytics.
One of the most important seminal works in identifying existing and emerging events were the various tasks in Topic Detection and Tracking (TDT) @cite_34 . The TDT program aimed to "search, organize and structure" broadcast news media from multiple sources. The five tasks laid within the ambit of TDT were topic tracking, link detection, topic detection, first story detection, and story segmentation. The task of topic tracking required building a system to detect stories on a given topic from an evaluation corpus after being trained on a set of sample stories. The link detection task involved answering a boolean query as to whether two given stories are related by a common topic. The topic detection task comprised declaring new topics from incoming stories which had not been presented to the system. First story detection was another boolean decision task of determining whether a given story is a seed story (first story) that should create a new cluster. The story segmentation task required segmenting an incoming stream of text into stories.
{ "cite_N": [ "@cite_34" ], "mid": [ "1594112393" ], "abstract": [ "Topic Detection and Tracking: Event-based Information Organization brings together in one place state-of-the-art research in Topic Detection and Tracking (TDT). This collection of technical papers from leading researchers in the field not only provides several chapters devoted to the research program and its evaluation paradigm, but also presents the most current research results and describes some of the remaining open challenges. Topic Detection and Tracking: Event-based Information Organization is an excellent reference for researchers and practitioners in a variety of fields related to TDT, including information retrieval, automatic speech recognition, machine learning, and information extraction" ] }
1603.00260
2257965379
In this article, I present the questions that I seek to answer in my PhD research. I posit to analyze natural language text with the help of semantic annotations and mine important events for navigating large text corpora. Semantic annotations such as named entities, geographic locations, and temporal expressions can help us mine events from the given corpora. These events thus provide us with useful means to discover the locked knowledge in them. I pose three problems that can help unlock this knowledge vault in semantically annotated text corpora: i. identifying important events; ii. semantic search; iii. and event analytics.
Focusing specifically on extracting and summarizing events in the future, Jatowt and Yeung @cite_18 present a model-based clustering algorithm. The clustering considers both textual and temporal similarities. For computing temporal similarity, the authors model time as a probability distribution, utilizing different families of distributions depending on whether the reference is a singular time point, a starting date, or an ending date. The similarity is then computed using KL-divergence.
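To make the temporal-similarity computation concrete, the following minimal Python sketch (not code from @cite_18) assumes event times have already been discretized into a fixed number of buckets; the bucket granularity, the smoothing constant, and the conversion of the symmetrized divergence into a similarity score are illustrative assumptions:

```python
import math

def kl_divergence(p, q, eps=1e-9):
    """KL(P || Q) for two discrete distributions given as probability lists."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def temporal_similarity(p, q):
    """Symmetrize the divergence and map it to a (0, 1] similarity score."""
    d = 0.5 * (kl_divergence(p, q) + kl_divergence(q, p))
    return 1.0 / (1.0 + d)

# Two future events whose dates are modeled as distributions over 12 monthly buckets.
event_a = [0.0] * 12; event_a[5], event_a[6] = 0.7, 0.3   # mostly June
event_b = [0.0] * 12; event_b[5], event_b[6] = 0.4, 0.6   # June or July
print(temporal_similarity(event_a, event_b))
```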
{ "cite_N": [ "@cite_18" ], "mid": [ "2147528525" ], "abstract": [ "News articles often contain information about the future. Given the huge volume of information available nowadays, an automatic way for extracting and summarizing future-related information is desirable. Such information will allow people to obtain a collective image of the future, to recognize possible future scenarios and be prepared for the future events. We propose a model-based clustering algorithm for detecting future events based on information extracted from a text corpus. The algorithm takes into account both textual and temporal similarity of sentences. We demonstrate that our algorithm can be used to discover future events and estimate their probabilities over time." ] }
1603.00260
2257965379
In this article, I present the questions that I seek to answer in my PhD research. I posit to analyze natural language text with the help of semantic annotations and mine important events for navigating large text corpora. Semantic annotations such as named entities, geographic locations, and temporal expressions can help us mine events from the given corpora. These events thus provide us with useful means to discover the locked knowledge in them. I pose three problems that can help unlock this knowledge vault in semantically annotated text corpora: i. identifying important events; ii. semantic search; iii. and event analytics.
@cite_24 present an algorithm, Pundit, which, based on past events in text, is able to predict a future event given a query to the system. The events are represented by multidimensional attributes such as time, geographic location, and participating entities. The algorithm derives these events from an external text collection and builds a cluster hierarchy, the result of hierarchical agglomerative clustering. In order to predict the future, Pundit is trained to select the most similar cluster from the hierarchy and produce an event representation.
{ "cite_N": [ "@cite_24" ], "mid": [ "2126332037" ], "abstract": [ "Given a current news event, we tackle the problem of generating plausible predictions of future events it might cause. We present a new methodology for modeling and predicting such future news events using machine learning and data mining techniques. Our Pundit algorithm generalizes examples of causality pairs to infer a causality predictor. To obtain precisely labeled causality examples, we mine 150 years of news articles and apply semantic natural language modeling techniques to headlines containing certain predefined causality patterns. For generalization, the model uses a vast number of world knowledge ontologies. Empirical evaluation on real news articles shows that our Pundit algorithm performs as well as non-expert humans." ] }
1603.00260
2257965379
In this article, I present the questions that I seek to answer in my PhD research. I posit to analyze natural language text with the help of semantic annotations and mine important events for navigating large text corpora. Semantic annotations such as named entities, geographic locations, and temporal expressions can help us mine events from the given corpora. These events thus provide us with useful means to discover the locked knowledge in them. I pose three problems that can help unlock this knowledge vault in semantically annotated text corpora: i. identifying important events; ii. semantic search; iii. and event analytics.
The work by Yeung and Jatowt @cite_7 tackles the problem of analyzing historical events in multiple large document collections. They utilize references to the past in news articles to identify distributions along time. Thereafter they perform analytics to answer questions about significant years and topics, triggers that caused remembrance of the past, and the historical similarity of countries.
{ "cite_N": [ "@cite_7" ], "mid": [ "2127743235" ], "abstract": [ "History helps us understand the present and even to predict the future to certain extent. Given the huge amount of data about the past, we believe computer science will play an increasingly important role in historical studies, with computational history becoming an emerging interdisciplinary field of research. We attempt to study how the past is remembered through large scale text mining. We achieve this by first collecting a large dataset of news articles about different countries and analyzing the data using computational and statistical tools. We show that analysis of references to the past in news articles allows us to gain a lot of insight into the collective memories and societ al views of different countries. Our work demonstrates how various computational tools can assist us in studying history by revealing interesting topics and hidden correlations. Our ultimate objective is to enhance history writing and evaluation with the help of algorithmic support." ] }
1603.00260
2257965379
In this article, I present the questions that I seek to answer in my PhD research. I posit to analyze natural language text with the help of semantic annotations and mine important events for navigating large text corpora. Semantic annotations such as named entities, geographic locations, and temporal expressions can help us mine events from the given corpora. These events thus provide us with useful means to discover the locked knowledge in them. I pose three problems that can help unlock this knowledge vault in semantically annotated text corpora: i. identifying important events; ii. semantic search; iii. and event analytics.
Most recently, Abujabal and Berberich @cite_33 present a system which identifies important events in text collections by mining frequent itemsets over sentences containing named entities and temporal expressions. For evaluation they resort to events listed in Wikipedia year articles as ground truth.
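A toy illustration of the counting step, assuming sentences are already annotated; restricting the search to 2-itemsets, the support threshold, and the example annotations are simplifying assumptions rather than details of @cite_33:

```python
from collections import Counter
from itertools import combinations

# Each sentence is reduced to its set of annotations (named entities and
# normalized temporal expressions); the annotations below are made up.
sentences = [
    {"Barack Obama", "2009", "Washington"},
    {"Barack Obama", "2009"},
    {"Barack Obama", "2009", "Nobel Prize"},
]

pair_counts = Counter()
for annotations in sentences:
    for pair in combinations(sorted(annotations), 2):
        pair_counts[pair] += 1

min_support = 2  # keep itemsets occurring in at least two sentences
frequent = {p: c for p, c in pair_counts.items() if c >= min_support}
print(frequent)  # {('2009', 'Barack Obama'): 3}
```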
{ "cite_N": [ "@cite_33" ], "mid": [ "762870823" ], "abstract": [ "We address the problem of identifying important events in the past, present, and future from semantically-annotated large-scale document collections. Semantic annotations that we consider are named entities (e.g., persons, locations, organizations) and temporal expressions (e.g., during the 1990s). More specifically, for a given time period of interest, our objective is to identify, rank, and describe important events that happened. Our approach P2F Miner makes use of frequent itemset mining to identify events and group sentences related to them. It uses an information-theoretic measure to rank identified events. For each of them, it selects a representative sentence as a description. Experiments on ClueWeb09 using events listed in Wikipedia year articles as ground truth show that our approach is effective and outperforms a baseline based on statistical language models." ] }
1603.00260
2257965379
In this article, I present the questions that I seek to answer in my PhD research. I posit to analyze natural language text with the help of semantic annotations and mine important events for navigating large text corpora. Semantic annotations such as named entities, geographic locations, and temporal expressions can help us mine events from the given corpora. These events thus provide us with useful means to discover the locked knowledge in them. I pose three problems that can help unlock this knowledge vault in semantically annotated text corpora: i. identifying important events; ii. semantic search; iii. and event analytics.
Summarizing text collections in a timeline visualization is a natural choice. Swan and Allan @cite_19 present an approach for producing a timeline that depicts the most important topics and events, closely modeled on the Topic Detection and Tracking (TDT) task. The algorithm analyzes features based on named entities and noun phrases. The analysis involves constructing a @math contingency table recording the presence or absence of features, and subsequently measuring the @math statistic to assess the significance of the co-occurrence of a pair of features.
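A sketch of the significance computation, using the standard closed-form chi-square statistic for a 2x2 contingency table; the feature pair and counts in the example are invented, not taken from @cite_19:

```python
def chi_square_2x2(n11, n10, n01, n00):
    """Chi-square statistic for a 2x2 contingency table, where n11 counts
    co-occurrences of two features, n10 and n01 count occurrences of exactly
    one of them, and n00 counts units containing neither."""
    n = n11 + n10 + n01 + n00
    numerator = n * (n11 * n00 - n10 * n01) ** 2
    denominator = (n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00)
    return numerator / denominator if denominator else 0.0

# Do two features co-occur on a given date more often than chance suggests?
print(chi_square_2x2(n11=27, n10=3, n01=5, n00=965))  # large value => significant
```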
{ "cite_N": [ "@cite_19" ], "mid": [ "2005492507" ], "abstract": [ "We present a statistical model of feature occurrence over time, and develop tests based on classical hypothesis testing for significance of term appearance on a given date. Using additional classical hypothesis testing we are able to combine these terms to generate “topics” as defined by the Topic Detection and Tracking study. The groupings of terms obtained can be used to automatically generate an interactive timeline displaying the major events and topics covered by the corpus. To test the validity of our technique we extracted a large number of these topics from a test corpus and had human evaluators judge how well the selected features captured the gist of the topics, and how they overlapped with a set of known topics from the corpus. The resulting topics were highly rated by evaluators who compared them to known topics." ] }
1603.00260
2257965379
In this article, I present the questions that I seek to answer in my PhD research. I posit to analyze natural language text with the help of semantic annotations and mine important events for navigating large text corpora. Semantic annotations such as named entities, geographic locations, and temporal expressions can help us mine events from the given corpora. These events thus provide us with useful means to discover the locked knowledge in them. I pose three problems that can help unlock this knowledge vault in semantically annotated text corpora: i. identifying important events; ii. semantic search; iii. and event analytics.
Bast and Buchhold @cite_16 outline a joint index structure over ontologies and text, which allows for fast semantic search and provides context-sensitive auto-complete suggestions.
{ "cite_N": [ "@cite_16" ], "mid": [ "2007269563" ], "abstract": [ "In this paper we present a novel index data structure tailored towards semantic full-text search. Semantic full-text search, as we call it, deeply integrates keyword-based full-text search with structured search in ontologies. Queries are SPARQL-like, with additional relations for specifying word-entity co-occurrences. In order to build such queries the user needs to be guided. We believe that incremental query construction with context-sensitive suggestions in every step serves that purpose well. Our index has to answer queries and provide such suggestions in real time. We achieve this through a novel kind of posting lists and query processing, avoiding very long (intermediate) result lists and expensive (non-local) operations on these lists. In an evaluation of 8000 queries on the full English Wikipedia (40 GB XML dump) and the YAGO ontology (26.6 million facts), we achieve average query and suggestion times of around 150ms." ] }
1603.00260
2257965379
In this article, I present the questions that I seek to answer in my PhD research. I posit to analyze natural language text with the help of semantic annotations and mine important events for navigating large text corpora. Semantic annotations such as named entities, geographic locations, and temporal expressions can help us mine events from the given corpora. These events thus provide us with useful means to discover the locked knowledge in them. I pose three problems that can help unlock this knowledge vault in semantically annotated text corpora: i. identifying important events; ii. semantic search; iii. and event analytics.
Events as a means of searching document collections have also been explored by Strötgen and Gertz @cite_0 . Events were modeled by the geographic location and time of their occurrence. For temporal queries expressed in simple natural language, they outline an extended Backus-Naur form (EBNF) language that incorporates time intervals with standard boolean operations. Geographic queries are modeled as an EBNF language as well; however, their input is a minimum bounding rectangle (MBR). Using this multidimensional querying model, the user is able to visualize search results in the form of events, which are additionally represented on a map.
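A minimal sketch of such a multidimensional event filter, combining a time-interval condition and an MBR condition conjunctively; the record layout and field names are assumptions for illustration, not the actual interface of @cite_0:

```python
from collections import namedtuple

Event = namedtuple("Event", "text start end lat lon")

def matches(e, t_begin, t_end, mbr):
    """An event qualifies if its time interval overlaps [t_begin, t_end]
    and its location lies inside the minimum bounding rectangle."""
    lat_min, lon_min, lat_max, lon_max = mbr
    time_ok = e.start <= t_end and e.end >= t_begin
    geo_ok = lat_min <= e.lat <= lat_max and lon_min <= e.lon <= lon_max
    return time_ok and geo_ok

events = [Event("treaty signed", 1990, 1990, 48.8, 2.3),
          Event("summit held", 1995, 1996, 52.5, 13.4)]
hits = [e for e in events if matches(e, 1989, 1992, (45.0, 0.0, 55.0, 10.0))]
print([e.text for e in hits])  # ['treaty signed']
```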
{ "cite_N": [ "@cite_0" ], "mid": [ "2087232451" ], "abstract": [ "Textual data ranging from corpora of digitized historic documents to large collections of news feeds provide a rich source for temporal and geographic information. Such types of information have recently gained a lot of interest in support of different search and exploration tasks, e.g., by organizing news along a timeline or placing the origin of documents on a map. However, for this, temporal and geographic information embedded in documents is often considered in isolation. We claim that through combining such information into (chronologically ordered) event-like features interesting and meaningful search and exploration tasks are possible. In this paper, we present a framework for the extraction, exploration, and visualization of event information in document collections. For this, one has to identify and combine temporal and geographic expressions from documents, thus enriching a document collection by a set of normalized events. Traditional search queries then can be enriched by conditions on the events relevant to the search subject. Most important for our event-centric approach is that a search result consists of a sequence of events relevant to the search terms and not just a document hit-list. Such events can originate from different documents and can be further explored, in particular events relevant to a search query can be ordered chronologically. We demonstrate the utility of our framework by different (multilingual) search and exploration scenarios using a Wikipedia corpus." ] }
1603.00260
2257965379
In this article, I present the questions that I seek to answer in my PhD research. I posit to analyze natural language text with the help of semantic annotations and mine important events for navigating large text corpora. Semantic annotations such as named entities, geographic locations, and temporal expressions can help us mine events from the given corpora. These events thus provide us with useful means to discover the locked knowledge in them. I pose three problems that can help unlock this knowledge vault in semantically annotated text corpora: i. identifying important events; ii. semantic search; iii. and event analytics.
Giving special attention to geographic information retrieval, @cite_27 present a system that is able to resolve and pinpoint a news article based on the geographic information present in its content. They discuss various methods for toponym resolution, which is in essence disambiguating a geographic location based on its surface form in the news content. The system involves a streaming clustering algorithm that can keep track of emerging news at new locations and present it in a map-based interface.
{ "cite_N": [ "@cite_27" ], "mid": [ "2068902018" ], "abstract": [ "Use this map query interface to search the world, even when not sure what information you seek." ] }
1603.00260
2257965379
In this article, I present the questions that I seek to answer in my PhD research. I posit to analyze natural language text with the help of semantic annotations and mine important events for navigating large text corpora. Semantic annotations such as named entities, geographic locations, and temporal expressions can help us mine events from the given corpora. These events thus provide us with useful means to discover the locked knowledge in them. I pose three problems that can help unlock this knowledge vault in semantically annotated text corpora: i. identifying important events; ii. semantic search; iii. and event analytics.
By disambiguating and linking named entities to ontologies, @cite_26 @cite_12 provide a framework for semantic search and for performing analytics on the entities. They provide features for giving auto-complete suggestions in the form of similar entities for the input named entity. In @cite_26 they additionally provide analytics that leverage accurate entity counts and entity co-occurrence statistics, which are helpful in analyzing semantically similar named entities.
{ "cite_N": [ "@cite_26", "@cite_12" ], "mid": [ "2122494037", "2021682442" ], "abstract": [ "This paper describes an advanced news analytics and exploration system that allows users to visualize trends of entities like politicians, countries, and organizations in continuously updated news articles. Our system improves state-of-the-art text analytics by linking ambiguous names in news articles to entities in knowledge bases like Freebase, DBpedia or YAGO. This step enables indexing entities and interpreting the contents in terms of entities. This way, the analysis of trends and co-occurrences of entities gains accuracy, and by leveraging the taxonomic type hierarchy of knowledge bases, also in expressiveness and usability. In particular, we can analyze not only individual entities, but also categories of entities and their combinations, including co-occurrences with informative text phrases. Our Web-based system demonstrates the power of this approach by insightful anecdotic analysis of recent events in the news.", "This paper describes an advanced search engine that supports users in querying documents by means of keywords, entities, and categories. Users simply type words, which are automatically mapped onto appropriate suggestions for entities and categories. Based on named-entity disambiguation, the search engine returns documents containing the query's entities and prominent entities from the query's categories." ] }
1603.00150
2289445082
Gaussian mixture alignment is a family of approaches that are frequently used for robustly solving the point-set registration problem. However, since they use local optimisation, they are susceptible to local minima and can only guarantee local optimality. Consequently, their accuracy is strongly dependent on the quality of the initialisation. This paper presents the first globally-optimal solution to the 3D rigid Gaussian mixture alignment problem under the L2 distance between mixtures. The algorithm, named GOGMA, employs a branch-and-bound approach to search the space of 3D rigid motions SE(3), guaranteeing global optimality regardless of the initialisation. The geometry of SE(3) was used to find novel upper and lower bounds for the objective function and local optimisation was integrated into the scheme to accelerate convergence without voiding the optimality guarantee. The evaluation empirically supported the optimality proof and showed that the method performed much more robustly on two challenging datasets than an existing globally-optimal registration solution.
There are many heuristic or stochastic methods for global alignment that are not guaranteed to converge. One class utilises stochastic optimisation techniques, such as particle filtering @cite_37 , genetic algorithms @cite_45 @cite_62 and simulated annealing @cite_43 @cite_31 . Another class is feature-based alignment, which exploits the transformation invariance of a local descriptor to build sparse feature correspondences, such as fast point feature histograms @cite_42 . The transformation can be found from the correspondences using random sampling @cite_42 , greedy algorithms @cite_39 , Hough transforms @cite_4 or branch-and-bound @cite_17 @cite_40 . @cite_58 is a recent example of a method that uses random sampling without features. It is a four-points congruent sets method that exploits a clever data structure to achieve linear-time performance, extending the original 4PCS algorithm @cite_7 .
{ "cite_N": [ "@cite_37", "@cite_62", "@cite_4", "@cite_7", "@cite_42", "@cite_39", "@cite_43", "@cite_40", "@cite_45", "@cite_31", "@cite_58", "@cite_17" ], "mid": [ "2064358676", "2019563423", "1982392942", "2064499898", "2160821342", "2099606917", "2071992612", "124242857", "2131684094", "", "2034950486", "2025062188" ], "abstract": [ "In this paper, we propose a new algorithm for pairwise rigid point set registration with unknown point correspondences. The main properties of our method are noise robustness, outlier resistance and global optimal alignment. The problem of registering two point clouds is converted to a minimization of a nonlinear cost function. We propose a new cost function based on an inverse distance kernel that significantly reduces the impact of noise and outliers. In order to achieve a global optimal registration without the need of any initial alignment, we develop a new stochastic approach for global minimization. It is an adaptive sampling method which uses a generalized BSP tree and allows for minimizing nonlinear scalar fields over complex shaped search spaces like, e.g., the space of rotations. We introduce a new technique for a hierarchical decomposition of the rotation space in disjoint equally sized parts called spherical boxes. Furthermore, a procedure for uniform point sampling from spherical boxes is presented. Tests on a variety of point sets show that the proposed registration method performs very well on noisy, outlier corrupted and incomplete data. For comparison, we report how two state-of-the-art registration algorithms perform on the same data sets.", "Most range data registration techniques are variants on the iterative closest point (ICP) algorithm, proposed by Y. Chen and G. Medioni (1991, Proceedings of the IEEE Conference on Robotics and Automation) and P. J. Besl and N. D. McKay ( 1992, IEEE Trans. Pattern Anal. Mach. Intell. 14, 239-256). That algorithm, though, is only one approach to optimizing a least-squares point correspondence sum proposed by K. S. Arun. T. Huang, and S. D. Blostein (1987, IEEE Trans. Pattern Anal. Mach. Intell, 9, 698-700). In its basic form ICP has many problems, for example, its reliance on preregistration by hand close to the global minimum and its tendency to converge to suboptimal or incorrect solutions. This paper reports on an evolutionary registration algorithm which does not require initial prealignment and has a very broad basin of convergence. It searches many areas of a registration parameter space in parallel and has available to it a selection of evolutionary techniques to avoid local minima which plague both ICP and its variants.", "In applying the Hough transform to the problem of 3D shape recognition and registration, we develop two new and powerful improvements to this popular inference method. The first, intrinsic Hough, solves the problem of exponential memory requirements of the standard Hough transform by exploiting the sparsity of the Hough space. The second, minimum-entropy Hough, explains away incorrect votes, substantially reducing the number of modes in the posterior distribution of class and pose, and improving precision. Our experiments demonstrate that these contributions make the Hough transform not only tractable but also highly accurate for our example application. 
Both contributions can be applied to other tasks that already use the standard Hough transform.", "We introduce 4PCS, a fast and robust alignment scheme for 3D point sets that uses wide bases, which are known to be resilient to noise and outliers. The algorithm allows registering raw noisy data, possibly contaminated with outliers, without pre-filtering or denoising the data. Further, the method significantly reduces the number of trials required to establish a reliable registration between the underlying surfaces in the presence of noise, without any assumptions about starting alignment. Our method is based on a novel technique to extract all coplanar 4-points sets from a 3D point set that are approximately congruent, under rigid transformation, to a given set of coplanar 4-points. This extraction procedure runs in roughly O(n2 + k) time, where n is the number of candidate points and k is the number of reported 4-points sets. In practice, when noise level is low and there is sufficient overlap, using local descriptors the time complexity reduces to O(n + k). We also propose an extension to handle similarity and affine transforms. Our technique achieves an order of magnitude asymptotic acceleration compared to common randomized alignment techniques. We demonstrate the robustness of our algorithm on several sets of multiple range scans with varying degree of noise, outliers, and extent of overlap.", "In our recent work [1], [2], we proposed Point Feature Histograms (PFH) as robust multi-dimensional features which describe the local geometry around a point p for 3D point cloud datasets. In this paper, we modify their mathematical expressions and perform a rigorous analysis on their robustness and complexity for the problem of 3D registration for overlapping point cloud views. More concretely, we present several optimizations that reduce their computation times drastically by either caching previously computed values or by revising their theoretical formulations. The latter results in a new type of local features, called Fast Point Feature Histograms (FPFH), which retain most of the discriminative power of the PFH. Moreover, we propose an algorithm for the online computation of FPFH features for realtime applications. To validate our results we demonstrate their efficiency for 3D registration and propose a new sample consensus based method for bringing two datasets into the convergence basin of a local non-linear optimizer: SAC-IA (SAmple Consensus Initial Alignment).", "We present a 3D shape-based object recognition system for simultaneous recognition of multiple objects in scenes containing clutter and occlusion. Recognition is based on matching surfaces by matching points using the spin image representation. The spin image is a data level shape descriptor that is used to match surfaces represented as surface meshes. We present a compression scheme for spin images that results in efficient multiple object recognition which we verify with results showing the simultaneous recognition of multiple objects from a library of 20 models. Furthermore, we demonstrate the robust performance of recognition in the presence of clutter and occlusion through analysis of recognition trials on 100 scenes.", "Concerns the problem of range image registration for the purpose of building surface models of 3D objects. 
The registration task involves finding the translation and rotation parameters which properly align overlapping views of the object so as to reconstruct from these partial surfaces, an integrated surface representation of the object. The registration task is expressed as an optimization problem. We define a function which measures the quality of the alignment between the partial surfaces contained in two range images as produced by a set of motion parameters. This function computes a sum of Euclidean distances from control points on one surfaces to corresponding points on the other. The strength of this approach is in the method used to determine point correspondences. It reverses the rangefinder calibration process, resulting in equations which can be used to directly compute the location of a point in a range image corresponding to an arbitrary point in 3D space. A stochastic optimization technique, very fast simulated reannealing (VFSR), is used to minimize the cost function. Dual-view registration experiments yielded excellent results in very reasonable time. A multiview registration experiment took a long time. A complete surface model was then constructed from the integration of multiple partial views. The effectiveness with which registration of range images can be accomplished makes this method attractive for many practical applications where surface models of 3D objects must be constructed. >", "A popular approach to detect outliers in a data set is to find the largest consensus set, that is to say maximizing the number of inliers and estimating the underlying model. RANSAC is the most widely used method for this aim but is non-deterministic and does not guarantee to return the optimal solution. In this paper, we consider a rotation model and we present a new approach that performs consensus set maximization in a mathematically guaranteed globally optimal way. We solve the problem by a branch-and-bound framework associated with a rotation space search. Our mathematical formulation can be applied for various computer vision tasks such as panoramic image stitching, 3D registration with a rotating range sensor and line clustering and vanishing point estimation. Experimental results with synthetic and real data sets have successfully confirmed the validity of our approach.", "This paper addresses the range image registration problem for views having low overlap and which may include substantial noise. The current state of the art in range image registration is best represented by the well-known iterative closest point (ICP) algorithm and numerous variations on it. Although this method is effective in many domains, it nevertheless suffers from two key limitations: it requires prealignment of the range surfaces to a reasonable starting point; and it is not robust to outliers arising either from noise or low surface overlap. This paper proposes a new approach that avoids these problems. To that end, there are two key, novel contributions in this work: a new, hybrid genetic algorithm (GA) technique, including hill climbing and parallel-migration, combined with a new, robust evaluation metric based on surface interpenetration. Up to now, interpenetration has been evaluated only qualitatively; we define the first quantitative measure for it. Because they search in a space of transformations, GA are capable of registering surfaces even when there is low overlap between them and without need for prealignment. 
The novel GA search algorithm we present offers much faster convergence than prior GA methods, while the new robust evaluation metric ensures more precise alignments, even in the presence of significant noise, than mean squared error or other well-known robust cost functions. The paper presents thorough experimental results to show the improvements realized by these two contributions.", "", "Data acquisition in large-scale scenes regularly involves accumulating information across multiple scans. A common approach is to locally align scan pairs using Iterative Closest Point (ICP) algorithm (or its variants), but requires static scenes and small motion between scan pairs. This prevents accumulating data across multiple scan sessions and or different acquisition modalities (e.g., stereo, depth scans). Alternatively, one can use a global registration algorithm allowing scans to be in arbitrary initial poses. The state-of-the-art global registration algorithm, 4PCS, however has a quadratic time complexity in the number of data points. This vastly limits its applicability to acquisition of large environments. We present Super 4PCS for global pointcloud registration that is optimal, i.e., runs in linear time (in the number of data points) and is also output sensitive in the complexity of the alignment problem based on the (unknown) overlap across scan pairs. Technically, we map the algorithm as an 'instance problem' and solve it efficiently using a smart indexing data organization. The algorithm is simple, memory-efficient, and fast. We demonstrate that Super 4PCS results in significant speedup over alternative approaches and allows unstructured efficient acquisition of scenes at scales previously not possible. Complete source code and datasets are available for research use at http: geometry.cs.ucl.ac.uk projects 2014 super4PCS .", "We present an algorithm for the automatic alignment of two 3D shapes (data and model), without any assumptions about their initial positions. The algorithm computes for each surface point a descriptor based on local geometry that is robust to noise. A small number of feature points are automatically picked from the data shape according to the uniqueness of the descriptor value at the point. For each feature point on the data, we use the descriptor values of the model to find potential corresponding points. We then develop a fast branch-and-bound algorithm based on distance matrix comparisons to select the optimal correspondence set and bring the two shapes into a coarse alignment. The result of our alignment algorithm is used as the initialization to ICP (iterative closest point) and its variants for fine registration of the data to the model. Our algorithm can be used for matching shapes that overlap only over parts of their extent, for building models from partial range scans, as well as for simple symmetry detection, and for matching shapes undergoing articulated motion." ] }
1602.09140
2288235352
An information reconciliation method for continuous-variable quantum key distribution with Gaussian modulation that is based on non-binary low-density parity-check (LDPC) codes is presented. Sets of regular and irregular LDPC codes with different code rates over the Galois fields @math , @math , @math , and @math have been constructed. We have performed simulations to analyze the efficiency and the frame error rate using the sum-product algorithm. The proposed method achieves an efficiency between @math and @math if the signal-to-noise ratio is between @math dB and @math dB.
While LDPC codes over alphabets with more than two elements had already been introduced in the classic work by Gallager @cite_31 , Davey and MacKay first reported that non-binary LDPC codes can outperform their binary counterparts under the message-passing algorithm over the BSC and the binary-input additive white Gaussian noise channel (BI-AWGNC) @cite_32 . This behavior is attributed to the fact that the non-binary graph in general contains far fewer cycles than the corresponding binary graph @cite_13 . Motivated by this fact, non-binary LDPC codes have been used in @cite_38 to improve the efficiency of information reconciliation in DV QKD.
{ "cite_N": [ "@cite_38", "@cite_31", "@cite_13", "@cite_32" ], "mid": [ "2061850779", "2128765501", "1536930200", "2173776306" ], "abstract": [ "We study the information reconciliation (IR) scheme for quantum key distribution (QKD) protocols. The IR for the QKD can be seen as the asymmetric Slepian-Wolf problem, which low-density parity-check (LDPC) codes can solve with efficient algorithms, i.e., the belief propagation. However, the LDPC codes are needed to be chosen properly from a collection of codes optimized for multiple key rates, which leads to complex decoder devices and performance degradation for unoptimized key rates. Therefore, it is desired that establish an IR scheme with a single LDPC code which supports multiple rates. To this end, in this paper, we propose an IR scheme with a rate-compatible non-binary LDPC code. Numerical results show the proposed scheme achieves IR efficiency comparable to the best know conventional IR scheme with lower decoding error rates.", "A low-density parity-check code is a code specified by a parity-check matrix with the following properties: each column contains a small fixed number j 3 of l's and each row contains a small fixed number k > j of l's. The typical minimum distance of these codes increases linearly with block length for a fixed rate and fixed j . When used with maximum likelihood decoding on a sufficiently quiet binary-input symmetric channel, the typical probability of decoding error decreases exponentially with block length for a fixed rate and fixed j . A simple but nonoptimum decoding scheme operating directly from the channel a posteriori probabilities is described. Both the equipment complexity and the data-handling capacity in bits per second of this decoder increase approximately linearly with block length. For j > 3 and a sufficiently low rate, the probability of error using this decoder on a binary symmetric channel is shown to decrease at least exponentially with a root of the block length. Some experimental results show that the actual probability of decoding error is much smaller than this theoretical bound.", "Having trouble deciding which coding scheme to employ, how to design a new scheme, or how to improve an existing system? This summary of the state-of-the-art in iterative coding makes this decision more straightforward. With emphasis on the underlying theory, techniques to analyse and design practical iterative coding systems are presented. Using Gallager's original ensemble of LDPC codes, the basic concepts are extended for several general codes, including the practically important class of turbo codes. The simplicity of the binary erasure channel is exploited to develop analytical techniques and intuition, which are then applied to general channel models. A chapter on factor graphs helps to unify the important topics of information theory, coding and communication theory. Covering the most recent advances, this text is ideal for graduate students in electrical engineering and computer science, and practitioners. Additional resources, including instructor's solutions and figures, available online: www.cambridge.org 9780521852296.", "Gallager's (1962) low-density binary parity check codes have been shown to have near-Shannon limit performance when decoded using a probabilistic decoding algorithm. We report the empirical results of error-correction using the analogous codes over GF(q) for q>2, with binary symmetric channels and binary Gaussian channels. 
We find a significant improvement over the performance of the binary codes, including a rate 1 4 code with bit error probability <10 sup -5 at E sub b N sub 0 =0.2 dB." ] }
1602.08820
2292193262
Graph reordering is a powerful technique to increase the locality of the representations of graphs, which can be helpful in several applications. We study how the technique can be used to improve compression of graphs and inverted indexes. We extend the recent theoretical model of (KDD 2009) for graph compression, and show how it can be employed for compression-friendly reordering of social networks and web graphs and for assigning document identifiers in inverted indexes. We design and implement a novel theoretically sound reordering algorithm that is based on recursive graph bisection. Our experiments show a significant improvement of the compression rate of graph and indexes over existing heuristics. The new method is relatively simple and allows efficient parallel and distributed implementations, which is demonstrated on graphs with billions of vertices and hundreds of billions of edges.
Among the first approaches to compressing large-scale graphs is the work by Boldi and Vigna @cite_3 , who compress web graphs using a lexicographical order of the URLs. Their compression method relies on two properties: locality (most links lead to pages within the same host) and similarity (pages on the same host often share the same links). Later, Apostolico and Drovandi @cite_36 suggest one of the first ways to compress a graph assuming no a priori knowledge of it. Their technique is based on a breadth-first traversal of the graph vertices and achieves a better compression rate using an entropy-based encoding.
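The role of locality in compression can be illustrated with delta-gap encoding of adjacency lists: under an order that keeps linked pages close together, most gaps are small integers, which variable-length codes represent in few bits. The sketch below shows only this gap step; the actual encodings of @cite_3 (referentiation, intervalisation, and specialized integer codes) are considerably more elaborate:

```python
def gaps(neighbors):
    """Delta-gap encode a sorted adjacency list; small gaps compress well
    under variable-length integer codes."""
    out, prev = [], None
    for v in sorted(neighbors):
        out.append(v if prev is None else v - prev)
        prev = v
    return out

# Neighbors of a vertex under a locality-friendly order vs. an arbitrary one.
print(gaps([1001, 1003, 1004, 1007]))   # [1001, 2, 1, 3] -- mostly small gaps
print(gaps([57, 40321, 511, 90417]))    # [57, 454, 39810, 50096] -- large gaps
```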
{ "cite_N": [ "@cite_36", "@cite_3" ], "mid": [ "2018900730", "1994727615" ], "abstract": [ "The Web Graph is a large-scale graph that does not fit in main memory, so that lossless compression methods have been proposed for it. This paper introduces a compression scheme that combines efficient storage with fast retrieval for the information in a node. The scheme exploits the properties of the Web Graph without assuming an ordering of the URLs, so that it may be applied to more general graphs. Tests on some datasets of use achieve space savings of about 10 over existing methods.", "Studying web graphs is often difficult due to their large size. Recently,several proposals have been published about various techniques that allow tostore a web graph in memory in a limited space, exploiting the inner redundancies of the web. The WebGraph framework is a suite of codes, algorithms and tools that aims at making it easy to manipulate large web graphs. This papers presents the compression techniques used in WebGraph, which are centred around referentiation and intervalisation (which in turn are dual to each other). WebGraph can compress the WebBase graph (118 Mnodes, 1 Glinks)in as little as 3.08 bits per link, and its transposed version in as littleas 2.89 bits per link." ] }
1602.08820
2292193262
Graph reordering is a powerful technique to increase the locality of the representations of graphs, which can be helpful in several applications. We study how the technique can be used to improve compression of graphs and inverted indexes. We extend the recent theoretical model of (KDD 2009) for graph compression, and show how it can be employed for compression-friendly reordering of social networks and web graphs and for assigning document identifiers in inverted indexes. We design and implement a novel theoretically sound reordering algorithm that is based on recursive graph bisection. Our experiments show a significant improvement of the compression rate of graph and indexes over existing heuristics. The new method is relatively simple and allows efficient parallel and distributed implementations, which is demonstrated on graphs with billions of vertices and hundreds of billions of edges.
@cite_25 consider the theoretical aspect of the reordering problem motivated by compressing social networks. They develop a simple but practical heuristic for the problem, called shingle ordering. The heuristic is based on obtaining a fingerprint of the neighbors of a vertex and positioning vertices with identical fingerprints close to each other. If the fingerprint can capture locality and similarity of the vertices, then it can be effective for compression. This approach is also called minwise hashing and was originally applied by Broder @cite_12 for finding duplicate web pages.
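A minimal sketch of the fingerprinting step, using two random linear hash functions for the minwise hashing of neighbor sets; the hash family and the toy graph are illustrative assumptions:

```python
import random

def minhash_fingerprint(neighbors, hash_funcs):
    """The smallest hash value over the neighbor set, per hash function;
    vertices with heavily overlapping neighborhoods are likely to agree."""
    return tuple(min(h(v) for v in neighbors) for h in hash_funcs)

random.seed(0)
P = 2**31 - 1  # prime modulus for the linear hash family
coeffs = [(random.randrange(1, P), random.randrange(P)) for _ in range(2)]
hash_funcs = [lambda v, a=a, b=b: (a * v + b) % P for a, b in coeffs]

# Vertices 0 and 1 (and likewise 2 and 3) have near-identical neighborhoods.
graph = {0: [2, 3, 4], 1: [2, 3, 4, 5], 2: [0, 1], 3: [0, 1], 4: [0, 1], 5: [1]}
order = sorted(graph, key=lambda u: minhash_fingerprint(graph[u], hash_funcs))
print(order)  # vertices with similar neighborhoods end up adjacent
```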
{ "cite_N": [ "@cite_25", "@cite_12" ], "mid": [ "2029852131", "2132069633" ], "abstract": [ "Motivated by structural properties of the Web graph that support efficient data structures for in memory adjacency queries, we study the extent to which a large network can be compressed. Boldi and Vigna (WWW 2004), showed that Web graphs can be compressed down to three bits of storage per edge; we study the compressibility of social networks where again adjacency queries are a fundamental primitive. To this end, we propose simple combinatorial formulations that encapsulate efficient compressibility of graphs. We show that some of the problems are NP-hard yet admit effective heuristics, some of which can exploit properties of social networks such as link reciprocity. Our extensive experiments show that social networks and the Web graph exhibit vastly different compressibility characteristics.", "Given two documents A and B we define two mathematical notions: their resemblance r(A, B) and their containment c(A, B) that seem to capture well the informal notions of \"roughly the same\" and \"roughly contained.\" The basic idea is to reduce these issues to set intersection problems that can be easily evaluated by a process of random sampling that can be done independently for each document. Furthermore, the resemblance can be evaluated using a fixed size sample for each document. This paper discusses the mathematical properties of these measures and the efficient implementation of the sampling process using Rabin (1981) fingerprints." ] }
1602.08820
2292193262
Graph reordering is a powerful technique to increase the locality of the representations of graphs, which can be helpful in several applications. We study how the technique can be used to improve compression of graphs and inverted indexes. We extend the recent theoretical model of (KDD 2009) for graph compression, and show how it can be employed for compression-friendly reordering of social networks and web graphs and for assigning document identifiers in inverted indexes. We design and implement a novel theoretically sound reordering algorithm that is based on recursive graph bisection. Our experiments show a significant improvement of the compression rate of graph and indexes over existing heuristics. The new method is relatively simple and allows efficient parallel and distributed implementations, which is demonstrated on graphs with billions of vertices and hundreds of billions of edges.
@cite_23 suggest a reordering algorithm, called Layered Label Propagation, to compress social networks. The algorithm builds on label propagation, a scalable graph clustering technique @cite_27 . The idea is to assign a label to every vertex of a graph based on the labels of its neighbors. The process is executed in rounds until no more updates take place. Since the standard label propagation described in @cite_27 tends to produce a giant cluster, the authors of @cite_23 construct a hierarchy of clusters. The vertices of the same cluster are then placed together in the final order.
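A sketch of the basic (non-layered) propagation loop in the spirit of @cite_27; the hierarchy construction of @cite_23 adds further machinery on top of this:

```python
from collections import Counter

def label_propagation(graph, max_rounds=10):
    """Every vertex repeatedly adopts the most frequent label among its
    neighbors; the loop stops early once no label changes."""
    labels = {v: v for v in graph}  # unique initial labels
    for _ in range(max_rounds):
        changed = False
        for v in graph:
            if not graph[v]:
                continue
            top = Counter(labels[u] for u in graph[v]).most_common(1)[0][0]
            if top != labels[v]:
                labels[v] = top
                changed = True
        if not changed:
            break
    return labels

graph = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4], 4: [3]}
print(label_propagation(graph))  # two clusters emerge: {0, 1, 2} and {3, 4}
```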
{ "cite_N": [ "@cite_27", "@cite_23" ], "mid": [ "2132202037", "2082773934" ], "abstract": [ "Community detection and analysis is an important methodology for understanding the organization of various real-world networks and has applications in problems as diverse as consensus formation in social communities or the identification of functional modules in biochemical networks. Currently used algorithms that identify the community structures in large-scale real-world networks require a priori information such as the number and sizes of communities or are computationally expensive. In this paper we investigate a simple label propagation algorithm that uses the network structure alone as its guide and requires neither optimization of a predefined objective function nor prior information about the communities. In our algorithm every node is initialized with a unique label and at every step each node adopts the label that most of its neighbors currently have. In this iterative process densely connected groups of nodes form a consensus on a unique label to form communities. We validate the algorithm by applying it to networks whose community structures are known. We also demonstrate that the algorithm takes an almost linear time and hence it is computationally less expensive than what was possible so far.", "We continue the line of research on graph compression started with WebGraph, but we move our focus to the compression of social networks in a proper sense (e.g., LiveJournal): the approaches that have been used for a long time to compress web graphs rely on a specific ordering of the nodes (lexicographical URL ordering) whose extension to general social networks is not trivial. In this paper, we propose a solution that mixes clusterings and orders, and devise a new algorithm, called Layered Label Propagation, that builds on previous work on scalable clustering and can be used to reorder very large graphs (billions of nodes). Our implementation uses task decomposition to perform aggressively on multi-core architecture, making it possible to reorder graphs of more than 600 millions nodes in a few hours. Experiments performed on a wide array of web graphs and social networks show that combining the order produced by the proposed algorithm with the WebGraph compression framework provides a major increase in compression with respect to all currently known techniques, both on web graphs and on social networks. These improvements make it possible to analyse in main memory significantly larger graphs." ] }
1602.08820
2292193262
Graph reordering is a powerful technique to increase the locality of the representations of graphs, which can be helpful in several applications. We study how the technique can be used to improve compression of graphs and inverted indexes. We extend the recent theoretical model of Chierichetti et al. (KDD 2009) for graph compression, and show how it can be employed for compression-friendly reordering of social networks and web graphs and for assigning document identifiers in inverted indexes. We design and implement a novel theoretically sound reordering algorithm that is based on recursive graph bisection. Our experiments show a significant improvement of the compression rate of graphs and indexes over existing heuristics. The new method is relatively simple and allows efficient parallel and distributed implementations, which is demonstrated on graphs with billions of vertices and hundreds of billions of edges.
The three-step multiscale paradigm is often employed for graph ordering problems. First, a sequence of coarsened graphs, each approximating the original graph but having a smaller size, is created. Then the problem is solved on the coarsest level by an exhaustive search. Finally, the process is reverted by an uncoarsening procedure, so that the solution for every graph in the sequence is derived from the solution for the previous, smaller graph. Safro and Temkin @cite_7 employ the algebraic multigrid methodology, in which the sequence of coarsened graphs is constructed using a projection of graph Laplacians into a lower-dimensional space.
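As a schematic reference, the sketch below shows the coarsen/solve/uncoarsen structure for a minimum linear arrangement objective; coarsening here is a plain random matching and no local refinement is applied during uncoarsening, so this conveys only the shape of the paradigm, not the algebraic multigrid construction of @cite_7 , and all names are our own.

```python
import itertools
import random

def cost(order, edges):
    """Total edge length of a linear arrangement (smaller is better)."""
    pos = {v: i for i, v in enumerate(order)}
    return sum(abs(pos[u] - pos[v]) for u, v in edges)

def coarsen(n, edges):
    """Contract a greedy random matching; vertices are 0 .. n-1."""
    group, nxt = {}, 0
    eds = list(edges)
    random.shuffle(eds)
    for u, v in eds:                      # each vertex is matched at most once
        if u not in group and v not in group:
            group[u] = group[v] = nxt
            nxt += 1
    for v in range(n):                    # unmatched vertices become singletons
        if v not in group:
            group[v] = nxt
            nxt += 1
    cedges = {(min(group[u], group[v]), max(group[u], group[v]))
              for u, v in edges if group[u] != group[v]}
    return nxt, sorted(cedges), group

def order_multiscale(n, edges, base=6):
    """Coarsen until <= base vertices, solve exactly, then uncoarsen."""
    if n <= base:                         # coarsest level: exhaustive search
        return list(min(itertools.permutations(range(n)),
                        key=lambda o: cost(o, edges)))
    cn, cedges, group = coarsen(n, edges)
    if cn == n:                           # nothing was contracted; stop here
        return list(range(n))
    corder = order_multiscale(cn, cedges, base)
    rank = {c: i for i, c in enumerate(corder)}
    # uncoarsening: every fine vertex inherits the position of its super-vertex
    return sorted(range(n), key=lambda v: rank[group[v]])
```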
{ "cite_N": [ "@cite_7" ], "mid": [ "2020378392" ], "abstract": [ "We present a fast multiscale approach for the network minimum logarithmic arrangement problem. This type of arrangement plays an important role in the network compression and fast node link access operations. The algorithm is of linear complexity and exhibits good scalability, which makes it practical and attractive for use in large-scale instances. Its effectiveness is demonstrated on a large set of real-life networks. These networks with corresponding best-known minimization results are suggested as an open benchmark for the research community to evaluate new methods for this problem." ] }
1602.08820
2292193262
Graph reordering is a powerful technique to increase the locality of the representations of graphs, which can be helpful in several applications. We study how the technique can be used to improve compression of graphs and inverted indexes. We extend the recent theoretical model of Chierichetti et al. (KDD 2009) for graph compression, and show how it can be employed for compression-friendly reordering of social networks and web graphs and for assigning document identifiers in inverted indexes. We design and implement a novel theoretically sound reordering algorithm that is based on recursive graph bisection. Our experiments show a significant improvement of the compression rate of graphs and indexes over existing heuristics. The new method is relatively simple and allows efficient parallel and distributed implementations, which is demonstrated on graphs with billions of vertices and hundreds of billions of edges.
Spectral methods have also been successfully applied to graph ordering problems @cite_19 . The vertices are sequenced by sorting them according to the corresponding entries of the eigenvector associated with the second smallest eigenvalue of the graph Laplacian (the so-called Fiedler vector). It is known that this order yields the best non-trivial solution to a relaxation of the quadratic graph ordering problem, and hence it is a good heuristic for computing linear arrangements.
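A minimal sketch of the spectral heuristic: build the Laplacian L = D - A, take the eigenvector of its second smallest eigenvalue, and sort the vertices by its entries. This uses dense algebra (NumPy's eigh), so it is suitable only for small graphs; the example graph is invented.

```python
import numpy as np

def fiedler_order(n, edges):
    """Order vertices 0..n-1 by the Fiedler vector of the Laplacian L = D - A."""
    A = np.zeros((n, n))
    for u, v in edges:
        A[u, v] = A[v, u] = 1.0
    L = np.diag(A.sum(axis=1)) - A
    _, vecs = np.linalg.eigh(L)          # eigenvalues come back in ascending order
    return list(np.argsort(vecs[:, 1]))  # column 1 is the Fiedler vector

# a path with scrambled vertex names is recovered as a line (up to reversal)
print(fiedler_order(5, [(1, 0), (0, 2), (2, 4), (4, 3)]))  # [1, 0, 2, 4, 3] or reversed
```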
{ "cite_N": [ "@cite_19" ], "mid": [ "2004265962" ], "abstract": [ "Abstract For several NP-hard optimal linear labeling problems, including the bandwidth, the cutwidth, and the min-sum problem for graphs, a heuristic algorithm is proposed which finds approximative solutions to these problems in polynomial time. The algorithm uses eigenvectors corresponding to the second smallest Laplace eigenvalue of a graph. Although bad in some “degenerate” cases, the algorithm shows fairly good behaviour. Several upper and lower bounds on the bandwidth, cutwidth, and min- p -sums are derived. Most of these bounds are given in terms of Laplace eigenvalues of the graphs. They are used in the analysis of our algorithm and as measures for the error of the obtained approximation to an optimal labeling." ] }
1602.08820
2292193262
Graph reordering is a powerful technique to increase the locality of the representations of graphs, which can be helpful in several applications. We study how the technique can be used to improve compression of graphs and inverted indexes. We extend the recent theoretical model of Chierichetti et al. (KDD 2009) for graph compression, and show how it can be employed for compression-friendly reordering of social networks and web graphs and for assigning document identifiers in inverted indexes. We design and implement a novel theoretically sound reordering algorithm that is based on recursive graph bisection. Our experiments show a significant improvement of the compression rate of graphs and indexes over existing heuristics. The new method is relatively simple and allows efficient parallel and distributed implementations, which is demonstrated on graphs with billions of vertices and hundreds of billions of edges.
Recently, Kang and Faloutsos @cite_18 present another technique, called SlashBurn. Their method constructs a permutation of the graph vertices so that the adjacency matrix consists of a few dense nonzero blocks. Such dense blocks are easier to encode, which is beneficial for compression.
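The hub-and-spoke intuition behind SlashBurn can be sketched as follows: repeatedly move the highest-degree hubs to the front of the order, push the small components that disconnect from the graph to the back, and continue on the remaining giant component. This is a simplified reading of the method (the spoke-ordering refinements of @cite_18 are omitted), and all names are our own.

```python
def _components(adj):
    """Connected components of an undirected adjacency-set graph."""
    seen, comps = set(), []
    for s in adj:
        if s in seen:
            continue
        seen.add(s)
        comp, stack = [], [s]
        while stack:
            v = stack.pop()
            comp.append(v)
            for u in adj[v]:
                if u not in seen:
                    seen.add(u)
                    stack.append(u)
        comps.append(comp)
    return comps

def slashburn_order(adj, k=1):
    """'Slash' the k highest-degree hubs to the front of the order, 'burn'
    the small disconnected components off to the back, and repeat on the
    remaining giant component."""
    adj = {v: set(ns) for v, ns in adj.items()}
    front, back = [], []
    while adj:
        hubs = sorted(adj, key=lambda v: len(adj[v]), reverse=True)[:k]
        front += hubs
        for h in hubs:                           # remove hubs and their edges
            for u in adj.pop(h):
                if u in adj:
                    adj[u].discard(h)
        comps = sorted(_components(adj), key=len)
        for comp in comps[:-1]:                  # every piece but the giant one
            back = comp + back
            for v in comp:
                del adj[v]
    return front + back
```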
{ "cite_N": [ "@cite_18" ], "mid": [ "2060778826" ], "abstract": [ "Given a real world graph, how should we lay-out its edges? How can we compress it? These questions are closely related, and the typical approach so far is to find clique-like communities, like the ‘cavemen graph’, and compress them. We show that the block-diagonal mental image of the ‘cavemen graph’ is the wrong paradigm, in full agreement with earlier results that real world graphs have no good cuts. Instead, we propose to envision graphs as a collection of hubs connecting spokes, with super-hubs connecting the hubs, and so on, recursively. Based on the idea, we propose the SlashBurn method to recursively split a graph into hubs and spokes connected only by the hubs. We also propose techniques to select the hubs and give an ordering to the spokes, in addition to the basic SlashBurn. We give theoretical analysis of the proposed hub selection methods. Our view point has several advantages: (a) it avoids the ‘no good cuts’ problem, (b) it gives better compression, and (c) it leads to faster execution times for matrix-vector operations, which are the back-bone of most graph processing tools. Through experiments, we show that SlashBurn consistently outperforms other methods for all data sets, resulting in better compression and faster running time. Moreover, we show that SlashBurn with the appropriate spokes ordering can further improve compression while hardly sacrificing the running time." ] }
1602.08820
2292193262
Graph reordering is a powerful technique to increase the locality of the representations of graphs, which can be helpful in several applications. We study how the technique can be used to improve compression of graphs and inverted indexes. We extend the recent theoretical model of Chierichetti et al. (KDD 2009) for graph compression, and show how it can be employed for compression-friendly reordering of social networks and web graphs and for assigning document identifiers in inverted indexes. We design and implement a novel theoretically sound reordering algorithm that is based on recursive graph bisection. Our experiments show a significant improvement of the compression rate of graphs and indexes over existing heuristics. The new method is relatively simple and allows efficient parallel and distributed implementations, which is demonstrated on graphs with billions of vertices and hundreds of billions of edges.
Several papers study how to assign document identifiers in a document collection for better compression of an inverted index. A popular idea is to perform a clustering on the collection and assign close identifiers to similar documents. @cite_5 propose a reassignment heuristic motivated by the maximum travelling salesman problem (TSP). The heuristic computes a pairwise similarity between every pair of documents (proportional to the number of shared terms), and then finds the longest path traversing the documents in the resulting similarity graph. An alternative algorithm calculating cosine similarities between documents is suggested by Blandford and Blelloch @cite_15 . Both methods are computationally expensive and are limited to fairly small datasets. The similarity-based approach is later improved by Blanco and Barreiro @cite_13 and by @cite_2 , who make it scalable by reducing the size of the similarity graph, respectively through dimensionality reduction and locality-sensitive hashing.
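A toy version of the Greedy-NN TSP heuristic mentioned above: treat the number of shared terms as edge weight and greedily walk to the most similar unvisited document, assigning consecutive identifiers along the walk. This is an illustration of the idea, not the exact algorithm of @cite_5 , and the data layout is invented.

```python
def greedy_docid_order(docs):
    """Greedy nearest-neighbour walk over the document similarity graph.

    docs: dict doc_name -> set of terms; similarity = number of shared terms.
    Returns a dict doc_name -> new docid, assigned along the walk so that
    similar documents receive close identifiers.
    """
    remaining = set(docs)
    order = [remaining.pop()]                  # arbitrary starting document
    while remaining:
        last_terms = docs[order[-1]]
        # hop to the unvisited document sharing the most terms with the last one
        nxt = max(remaining, key=lambda d: len(docs[d] & last_terms))
        remaining.remove(nxt)
        order.append(nxt)
    return {d: i for i, d in enumerate(order)}

docs = {"d1": {"a", "b"}, "d2": {"x", "y"}, "d3": {"a", "b", "c"}, "d4": {"x", "z"}}
print(greedy_docid_order(docs))   # similar pairs (d1, d3) and (d2, d4) end up adjacent
```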
{ "cite_N": [ "@cite_5", "@cite_15", "@cite_13", "@cite_2" ], "mid": [ "2052867877", "1669813703", "2111215543", "2136070674" ], "abstract": [ "The inverted file is the most popular indexing mechanism for document search in an information retrieval system. Compressing an inverted file can greatly improve document search rate. Traditionally, the d-gap technique is used in the inverted file compression by replacing document identifiers with usually much smaller gap values. However, fluctuating gap values cannot be efficiently compressed by some well-known prefix-free codes. To smoothen and reduce the gap values, we propose a document-identifier reassignment algorithm. This reassignment is based on a similarity factor between documents. We generate a reassignment order for all documents according to the similarity to reassign closer identifiers to the documents having closer relationships. Simulation results show that the average gap values of sample inverted files can be reduced by 30%, and the compression rate of d-gapped inverted file with prefix-free codes can be improved by 15%.", "An important concern in the design of search engines is the construction of an inverted index. An inverted index, also called a concordance, contains a list of documents (or posting list) for every possible search term. These posting lists are usually compressed with difference coding. Difference coding yields the best compression when the lists to be coded have high locality. Coding methods have been designed to specifically take advantage of locality in inverted indices. Here, we describe an algorithm to permute the document numbers so as to create locality in an inverted index. This is done by clustering the documents. Our algorithm, when applied to the TREC ad hoc database (disks 4 and 5), improves the performance of the best difference coding algorithm we found by fourteen percent. The improvement increases as the size of the index increases, so we expect that greater improvements would be possible on larger datasets.", "Most modern retrieval systems use compressed Inverted Files (IF) for indexing. Recent works demonstrated that it is possible to reduce IF sizes by reassigning the document identifiers of the original collection, as it lowers the average distance between documents related to a single term. Variable-bit encoding schemes can exploit the average gap reduction and decrease the total amount of bits per document pointer. However, approximations developed so far requires great amounts of time or use an uncontrolled memory size. This paper presents an efficient solution to the reassignment problem consisting in reducing the input data dimensionality using a SVD transformation. We tested this approximation with the Greedy-NN TSP algorithm and one more efficient variant based on dividing the original problem in sub-problems. We present experimental tests and performance results in two TREC collections, obtaining good compression ratios with low running times. We also show experimental results about the tradeoff between dimensionality reduction and compression, and time performance.", "Web search engines depend on the full-text inverted index data structure. Because the query processing performance is so dependent on the size of the inverted index, a plethora of research has focused on fast and effective techniques for compressing this structure. Recently, several authors have proposed techniques for improving index compression by optimizing the assignment of document identifiers to the documents in the collection, leading to significant reduction in overall index size. In this paper, we propose improved techniques for document identifier assignment. Previous work includes simple and fast heuristics such as sorting by URL, as well as more involved approaches based on the Traveling Salesman Problem or on graph partitioning. These techniques achieve good compression but do not scale to larger document collections. We propose a new framework based on performing a Traveling Salesman computation on a reduced sparse graph obtained through Locality Sensitive Hashing. This technique achieves improved compression while scaling to tens of millions of documents. Based on this framework, we describe a number of new algorithms, and perform a detailed evaluation on three large data sets showing improvements in index size." ] }
1602.08820
2292193262
Graph reordering is a powerful technique to increase the locality of the representations of graphs, which can be helpful in several applications. We study how the technique can be used to improve compression of graphs and inverted indexes. We extend the recent theoretical model of Chierichetti et al. (KDD 2009) for graph compression, and show how it can be employed for compression-friendly reordering of social networks and web graphs and for assigning document identifiers in inverted indexes. We design and implement a novel theoretically sound reordering algorithm that is based on recursive graph bisection. Our experiments show a significant improvement of the compression rate of graphs and indexes over existing heuristics. The new method is relatively simple and allows efficient parallel and distributed implementations, which is demonstrated on graphs with billions of vertices and hundreds of billions of edges.
The approach by Silvestri @cite_26 simply sorts the collection of web pages by their URLs and then assigns document identifiers according to this order. The method performs very well in practice and is highly scalable. This technique, however, does not generalize to document collections that do not have URL-like identifiers.
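The URL-based assignment is simple enough to state in a few lines; the sketch below is our own minimal rendering of the idea.

```python
def assign_docids_by_url(urls):
    """Assign consecutive docids following the lexicographic URL order:
    pages sharing a host/path prefix tend to share terms, so they get
    close identifiers and the d-gaps in the posting lists shrink."""
    return {url: docid for docid, url in enumerate(sorted(urls))}

ids = assign_docids_by_url(
    ["b.org/x", "a.com/news/1", "a.com/news/2", "a.com/about"])
# {'a.com/about': 0, 'a.com/news/1': 1, 'a.com/news/2': 2, 'b.org/x': 3}
```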
{ "cite_N": [ "@cite_26" ], "mid": [ "1524501441" ], "abstract": [ "The compression of Inverted File indexes in Web Search Engines has received a lot of attention in these last years. Compressing the index not only reduces space occupancy but also improves the overall retrieval performance since it allows a better exploitation of the memory hierarchy. In this paper we are going to empirically show that in the case of collections of Web Documents we can enhance the performance of compression algorithms by simply assigning identifiers to documents according to the lexicographical ordering of the URLs. We will validate this assumption by comparing several assignment techniques and several compression algorithms on a quite large document collection composed by about six million documents. The results are very encouraging since we can improve the compression ratio up to 40% using an algorithm that takes about ninety seconds to finish using only 100 MB of main memory." ] }
1602.08820
2292193262
Graph reordering is a powerful technique to increase the locality of the representations of graphs, which can be helpful in several applications. We study how the technique can be used to improve compression of graphs and inverted indexes. We extend the recent theoretical model of Chierichetti et al. (KDD 2009) for graph compression, and show how it can be employed for compression-friendly reordering of social networks and web graphs and for assigning document identifiers in inverted indexes. We design and implement a novel theoretically sound reordering algorithm that is based on recursive graph bisection. Our experiments show a significant improvement of the compression rate of graphs and indexes over existing heuristics. The new method is relatively simple and allows efficient parallel and distributed implementations, which is demonstrated on graphs with billions of vertices and hundreds of billions of edges.
Most graph compression schemes build on gap encoding, that is, sorting the adjacency lists (posting lists in the inverted-index case) so that the gaps between consecutive elements are positive, and then encoding these gaps using a variable-length integer code. The WebGraph framework adds the ability to copy portions of the adjacency lists from other vertices, and has special cases for runs of consecutive integers. Introduced in 2004 by Boldi and Vigna @cite_3 , it is still widely used to compress web graphs and social networks.
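For reference, here is what gap (delta) encoding with a variable-length byte code looks like; the varbyte format below (7 data bits per byte, high bit marking the last byte of a value) is one common choice, not specifically the code used by WebGraph.

```python
def vbyte_encode(gaps):
    """Encode non-negative gaps with a variable-length byte code: 7 data
    bits per byte, high bit set on the final byte of each value."""
    out = bytearray()
    for g in gaps:
        while g >= 128:
            out.append(g & 0x7F)
            g >>= 7
        out.append(g | 0x80)
    return bytes(out)

def delta_compress(postings):
    """Sort a posting list and store its first element plus the gaps."""
    postings = sorted(postings)
    gaps = [postings[0]] + [b - a for a, b in zip(postings, postings[1:])]
    return vbyte_encode(gaps)

# a reordering that clusters the docids shrinks the gaps and the output:
print(len(delta_compress([3, 1000000, 7, 999999])))   # spread-out ids -> 6 bytes
print(len(delta_compress([3, 4, 6, 7])))              # clustered ids  -> 4 bytes
```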
{ "cite_N": [ "@cite_3" ], "mid": [ "1994727615" ], "abstract": [ "Studying web graphs is often difficult due to their large size. Recently,several proposals have been published about various techniques that allow tostore a web graph in memory in a limited space, exploiting the inner redundancies of the web. The WebGraph framework is a suite of codes, algorithms and tools that aims at making it easy to manipulate large web graphs. This papers presents the compression techniques used in WebGraph, which are centred around referentiation and intervalisation (which in turn are dual to each other). WebGraph can compress the WebBase graph (118 Mnodes, 1 Glinks)in as little as 3.08 bits per link, and its transposed version in as littleas 2.89 bits per link." ] }
1602.08845
2291807340
Big Data analytics has been approached exclusively from a data-parallel perspective, where data are partitioned across multiple workers (threads or separate servers) and model training is executed concurrently over different partitions, under various synchronization schemes that guarantee speedup and/or convergence. The dual problem, Big Model, which, surprisingly, has received no attention in database analytics, is how to manage models with millions if not billions of parameters that do not fit in memory. This distinction in model representation changes fundamentally how in-database analytics tasks are carried out. In this paper, we introduce the first secondary-storage array-relation dot-product join operator between a set of sparse arrays and a dense relation. The paramount challenge in designing such an operator is how to optimally schedule access to the dense relation based on the sparse non-contiguous entries in the sparse arrays. We prove that this problem is NP-hard and propose a practical solution characterized by two important technical contributions: dynamic batch processing and array reordering. We execute extensive experiments over synthetic and real data that confirm the minimal overhead the operator incurs when sufficient memory is available and the graceful degradation it suffers as memory resources become scarce. Moreover, dot-product join achieves an order of magnitude reduction in execution time for Big Model analytics over alternative in-database solutions.
The integration of a relational join with gradient, i.e., dot-product, computation has been studied in @cite_20 @cite_22 @cite_9 . However, the assumption made in all these papers is that the vectors @math are vertically partitioned along their dimensions. A join is required to put them together before computing the dot-product. In @cite_20 , the dot-product computation is pushed inside the join, and the approach is only applicable to BGD. The dot-product join operator adopts the same idea; however, it has to compute the dot-product itself, which in @cite_20 is still evaluated inside a UDA. The join is dropped altogether in @cite_22 , where similar convergence is obtained without considering the dimensions in an entire vertical partition. A solution particular to linear regression is shown to be efficient to compute when joining factorized tables in @cite_9 . In all these solutions, the model is small enough to fit entirely in memory. Moreover, they work exclusively for BGD.
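To make the "push the dot-product inside the join" idea concrete, the toy below accumulates per-example partial dot products while matching two vertical partitions on the example key, so the full feature vector is never materialized. This is our own schematic of the idea in @cite_20 , with invented data structures, not their actual UDA-based implementation.

```python
def bgd_gradient_over_join(part1, part2, model, labels):
    """Accumulate per-example dot products while matching two vertical
    partitions on the example id, instead of materializing x = x1 ++ x2.

    part1, part2: dict example_id -> sparse features {feature_index: value}
    model:        dict feature_index -> weight
    labels:       dict example_id -> target y
    Returns the batch gradient for squared loss as {feature_index: g}.
    """
    grad = {}
    for eid in labels:                          # key-key join on the example id
        dot = 0.0
        for part in (part1, part2):             # partial dot product per partition
            for j, x in part.get(eid, {}).items():
                dot += model.get(j, 0.0) * x
        err = dot - labels[eid]
        for part in (part1, part2):             # scatter the error back
            for j, x in part.get(eid, {}).items():
                grad[j] = grad.get(j, 0.0) + err * x
    return grad
```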
{ "cite_N": [ "@cite_9", "@cite_22", "@cite_20" ], "mid": [ "2284514301", "2444650685", "2032775418" ], "abstract": [ "We investigate the problem of building least squares regression models over training datasets defined by arbitrary join queries on database tables. Our key observation is that joins entail a high degree of redundancy in both computation and data representation, which is not required for the end-to-end solution to learning over joins. We propose a new paradigm for computing batch gradient descent that exploits the factorized computation and representation of the training datasets, a rewriting of the regression objective function that decouples the computation of cofactors of model parameters from their convergence, and the commutativity of cofactor computation with relational union and projection. We introduce three flavors of this approach: F FDB computes the cofactors in one pass over the materialized factorized join; Favoids this materialization and intermixes cofactor and join computation; F SQL expresses this mixture as one SQL query. Our approach has the complexity of join factorization, which can be exponentially lower than of standard joins. Experiments with commercial, public, and synthetic datasets show that it outperforms MADlib, Python StatsModels, and R, by up to three orders of magnitude.", "Closer integration of machine learning (ML) with data processing is a booming area in both the data management industry and academia. Almost all ML toolkits assume that the input is a single table, but many datasets are not stored as single tables due to normalization. Thus, analysts often perform key-foreign key joins to obtain features from all base tables and apply a feature selection method, either explicitly or implicitly, with the aim of improving accuracy. In this work, we show that the features brought in by such joins can often be ignored without affecting ML accuracy significantly, i.e., we can \"avoid joins safely.\" We identify the core technical issue that could cause accuracy to decrease in some cases and analyze this issue theoretically. Using simulations, we validate our analysis and measure the effects of various properties of normalized data on accuracy. We apply our analysis to design easy-to-understand decision rules to predict when it is safe to avoid joins in order to help analysts exploit this runtime-accuracy trade-off. Experiments with multiple real normalized datasets show that our rules are able to accurately predict when joins can be avoided safely, and in some cases, this led to significant reductions in the runtime of some popular feature selection methods.", "Enterprise data analytics is a booming area in the data management industry. Many companies are racing to develop toolkits that closely integrate statistical and machine learning techniques with data management systems. Almost all such toolkits assume that the input to a learning algorithm is a single table. However, most relational datasets are not stored as single tables due to normalization. Thus, analysts often perform key-foreign key joins before learning on the join output. This strategy of learning after joins introduces redundancy avoided by normalization, which could lead to poorer end-to-end performance and maintenance overheads due to data duplication. In this work, we take a step towards enabling and optimizing learning over joins for a common class of machine learning techniques called generalized linear models that are solved using gradient descent algorithms in an RDBMS setting. We present alternative approaches to learn over a join that are easy to implement over existing RDBMSs. We introduce a new approach named factorized learning that pushes ML computations through joins and avoids redundancy in both I O and computations. We study the tradeoff space for all our approaches both analytically and empirically. Our results show that factorized learning is often substantially faster than the alternatives, but is not always the fastest, necessitating a cost-based approach. We also discuss extensions of all our approaches to multi-table joins as well as to Hive." ] }
1602.08845
2291807340
Big Data analytics has been approached exclusively from a data-parallel perspective, where data are partitioned across multiple workers (threads or separate servers) and model training is executed concurrently over different partitions, under various synchronization schemes that guarantee speedup and/or convergence. The dual problem, Big Model, which, surprisingly, has received no attention in database analytics, is how to manage models with millions if not billions of parameters that do not fit in memory. This distinction in model representation changes fundamentally how in-database analytics tasks are carried out. In this paper, we introduce the first secondary-storage array-relation dot-product join operator between a set of sparse arrays and a dense relation. The paramount challenge in designing such an operator is how to optimally schedule access to the dense relation based on the sparse non-contiguous entries in the sparse arrays. We prove that this problem is NP-hard and propose a practical solution characterized by two important technical contributions: dynamic batch processing and array reordering. We execute extensive experiments over synthetic and real data that confirm the minimal overhead the operator incurs when sufficient memory is available and the graceful degradation it suffers as memory resources become scarce. Moreover, dot-product join achieves an order of magnitude reduction in execution time for Big Model analytics over alternative in-database solutions.
Dot-product join is a novel type of join operator between an ARRAY attribute and a relation. We are not aware of any other database operator with the same functionality. From a relational perspective, dot-product join is most similar to an index join @cite_46 . However, for every vector @math , we have to probe the index on the model @math many times. Thus, the number of probes can be several orders of magnitude larger than the size of @math . The proposed techniques are specifically targeted at this scenario. The batched key access join in MySQL (https://dev.mysql.com/doc/refman/5.6/en/bnl-bka-optimization.html) is identical to our batching optimization applied at the vector level. However, it handles a single probe per tuple, and its reordering is aimed at generating sequential storage access for @math . Array joins @cite_28 are a new class of join operators for array databases. While it is possible to view dot-product join as an array join operator, the main difference is that we consider a relational system and push the aggregation inside the join. This avoids the materialization of the intermediate join result.
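The essence of the batching optimization can be sketched as follows: collect the model pages needed by a whole batch of sparse vectors, fetch each page once in sequential order, and only then answer the individual probes. The page abstraction and all names are invented; this is an illustration, not the actual operator implementation.

```python
def batched_dot_products(batch, fetch_page, page_size=1024):
    """Compute <u, M> for a whole batch of sparse vectors u against a model M
    kept in secondary storage, probing every needed model page exactly once.

    batch:      list of sparse vectors, each a dict {index: value}
    fetch_page: callable page_no -> dict {index: weight}  (one storage read)
    """
    # reorder/merge the probes: all distinct pages the batch touches, in order
    pages = sorted({j // page_size for u in batch for j in u})
    cache = {}
    for p in pages:                    # sequential page order -> sequential I/O
        cache.update(fetch_page(p))    # each page is read once for the batch
    # answer every vector of the batch from the in-memory cache
    return [sum(w * cache.get(j, 0.0) for j, w in u.items()) for u in batch]
```

Sorting the touched pages turns the random per-entry probes into one sequential sweep over the stored model, and the memory footprint is bounded by the pages a single batch touches, so the batch size can be adapted to the available memory.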
{ "cite_N": [ "@cite_28", "@cite_46" ], "mid": [ "2011129635", "1569403765" ], "abstract": [ "Science applications are accumulating an ever-increasing amount of multidimensional data. Although some of it can be processed in a relational database, much of it is better suited to array-based engines. As such, it is important to optimize the query processing of these systems. This paper focuses on efficient query processing of join operations within an array database. These engines invariably chunk'' their data into multidimensional tiles that they use to efficiently process spatial queries. As such, traditional relational algorithms need to be substantially modified to take advantage of array tiles. Moreover, most n-dimensional science data is unevenly distributed in array space because its underlying observations rarely follow a uniform pattern. It is crucial that the optimization of array joins be skew-aware. In addition, owing to the scale of science applications, their query processing usually spans multiple nodes. This further complicates the planning of array joins. In this paper, we introduce a join optimization framework that is skew-aware for distributed joins. This optimization consists of two phases. In the first, a logical planner selects the query's algorithm (e.g., merge join), the granularity of the its tiles, and the reorganization operations needed to align the data. The second phase implements this logical plan by assigning tiles to cluster nodes using an analytical cost model. Our experimental results, on both synthetic and real-world data, demonstrate that this optimization framework speeds up array joins by up to 2.5X in comparison to the baseline.", "From the Publisher: This introduction to database systems offers a readable comprehensive approach with engaging, real-world examples—users will learn how to successfully plan a database application before building it. The first half of the book provides in-depth coverage of databases from the point of view of the database designer, user, and application programmer, while the second half of the book provides in-depth coverage of databases from the point of view of the DBMS implementor. The first half of the book focuses on database design, database use, and implementation of database applications and database management systems—it covers the latest database standards SQL:1999, SQL PSM, SQL CLI, JDBC, ODL, and XML, with broader coverage of SQL than most other books. The second half of the book focuses on storage structures, query processing, and transaction management—it covers the main techniques in these areas with broader coverage of query optimization than most other books, along with advanced topics including multidimensional and bitmap indexes, distributed transactions, and information integration techniques. A professional reference for database designers, users, and application programmers." ] }
1602.08845
2291807340
Big Data analytics has been approached exclusively from a data-parallel perspective, where data are partitioned across multiple workers (threads or separate servers) and model training is executed concurrently over different partitions, under various synchronization schemes that guarantee speedup and/or convergence. The dual problem, Big Model, which, surprisingly, has received no attention in database analytics, is how to manage models with millions if not billions of parameters that do not fit in memory. This distinction in model representation changes fundamentally how in-database analytics tasks are carried out. In this paper, we introduce the first secondary-storage array-relation dot-product join operator between a set of sparse arrays and a dense relation. The paramount challenge in designing such an operator is how to optimally schedule access to the dense relation based on the sparse non-contiguous entries in the sparse arrays. We prove that this problem is NP-hard and propose a practical solution characterized by two important technical contributions: dynamic batch processing and array reordering. We execute extensive experiments over synthetic and real data that confirm the minimal overhead the operator incurs when sufficient memory is available and the graceful degradation it suffers as memory resources become scarce. Moreover, dot-product join achieves an order of magnitude reduction in execution time for Big Model analytics over alternative in-database solutions.
As we have already discussed in the paper, the dot-product join operator is a constrained formulation of the standard sparse matrix-vector (SpMV) multiplication problem. Specifically, the constraint imposes an update of the vector after each multiplication with a row of the sparse matrix. This makes the direct application of SpMV kernels to Big Model analytics impossible, beyond BGD. Moreover, we consider the case when the vector size goes beyond the available memory. SpMV is an exhaustively studied problem with applications to high-performance computing, graph algorithms, and analytics. An extended discussion of the recent work on SpMV is presented in @cite_33 , on which we draw in our discussion. @cite_34 and @cite_40 propose optimizations for SpMV on multicore architectures, while @cite_10 and @cite_30 optimize distributed SpMV for large scale-free graphs with 2D partitioning to reduce communication between machines. Array databases such as SciDB @cite_58 and Rasdaman @cite_35 support SpMV as calls to optimized linear algebra libraries such as Intel MKL and Trilinos. There has also been preliminary research on accelerating matrix multiplication with GPUs @cite_21 and SSDs @cite_33 , showing that speedups are limited by I/O and setup costs. None of these works store the vector in secondary storage.
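The defining constraint is easiest to see in code: each sparse row both reads and immediately writes the model vector, so rows cannot be batched or reordered the way SpMV kernels assume. The gradient-descent flavor and all names below are our own illustration, not the operator's implementation.

```python
def sgd_pass(rows, labels, model, lr=0.01):
    """One pass of the 'constrained SpMV': the model must be updated right
    after its product with each sparse row, so rows cannot be processed
    independently the way standard SpMV kernels assume.

    rows:   list of sparse examples, each a dict {index: value}
    labels: list of targets, same length as rows
    model:  dict {index: weight}, possibly backed by secondary storage
    """
    for x, y in zip(rows, labels):
        dot = sum(v * model.get(j, 0.0) for j, v in x.items())  # one SpMV row
        err = dot - y
        for j, v in x.items():                       # immediate write-back:
            model[j] = model.get(j, 0.0) - lr * err * v   # the next row sees it
    return model
```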
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_33", "@cite_21", "@cite_58", "@cite_40", "@cite_34", "@cite_10" ], "mid": [ "", "2083036673", "", "2095836023", "2014830756", "1776498962", "1990832096", "2061919600" ], "abstract": [ "", "RasDaMan is a universal — i.e., domain-independent — array DBMS for multidimensional arrays of arbitrary size and structure. A declarative, SQL-based array query language offers flexible retrieval and manipulation. Efficient server-based query evaluation is enabled by an intelligent optimizer and a streamlined storage architecture based on flexible array tiling and compression. RasDaMan is being used in several international projects for the management of geo and healthcare data of various dimensionality.", "", "Scaling up the sparse matrix-vector multiplication kernel on modern Graphics Processing Units (GPU) has been at the heart of numerous studies in both academia and industry. In this article we present a novel non-parametric, self-tunable, approach to data representation for computing this kernel, particularly targeting sparse matrices representing power-law graphs. Using real web graph data, we show how our representation scheme, coupled with a novel tiling algorithm, can yield significant benefits over the current state of the art GPU efforts on a number of core data mining algorithms such as PageRank, HITS and Random Walk with Restart.", "SciDB [4, 3] is a new open-source data management system intended primarily for use in application domains that involve very large (petabyte) scale array data; for example, scientific applications such as astronomy, remote sensing and climate modeling, bio-science information management, risk management systems in financial applications, and the analysis of web log data. In this talk we will describe our set of motivating examples and use them to explain the features of SciDB. We then briefly give an overview of the project 'in flight', explaining our novel storage manager, array data model, query language, and extensibility frameworks.", "Intel Xeon Phi is a recently released high-performance coprocessor which features 61 cores each supporting 4 hardware threads with 512-bit wide SIMD registers achieving a peak theoretical performance of 1 Tflop/s in double precision. Its design differs from classical modern processors; it comes with a large number of cores, the 4-way hyperthreading capability allows many applications to saturate the massive memory bandwidth, and its large SIMD capabilities allow to reach high computation throughput. The core of many scientific applications involves the multiplication of a large, sparse matrix with a single or multiple dense vectors which are not compute-bound but memory-bound. In this paper, we investigate the performance of the Xeon Phi coprocessor for these sparse linear algebra kernels. We highlight the important hardware details and show that Xeon Phi's sparse kernel performance is very promising and even better than that of cutting-edge CPUs and GPUs.", "We are witnessing a dramatic change in computer architecture due to the multicore paradigm shift, as every electronic device from cell phones to supercomputers confronts parallelism of unprecedented scale. To fully unleash the potential of these systems, the HPC community must develop multicore specific optimization methodologies for important scientific computations. In this work, we examine sparse matrix-vector multiply (SpMV) - one of the most heavily used kernels in scientific computing - across a broad spectrum of multicore designs. Our experimental platform includes the homogeneous AMD dual-core and Intel quad-core designs, the heterogeneous STI Cell, as well as the first scientific study of the highly multithreaded Sun Niagara2. We present several optimization strategies especially effective for the multicore environment, and demonstrate significant performance improvements compared to existing state-of-the-art serial and parallel SpMV implementations. Additionally, we present key insights into the architectural tradeoffs of leading multicore design strategies, in the context of demanding memory-bound numerical algorithms.", "Eigensolvers are important tools for analyzing and mining useful information from scale-free graphs. Such graphs are used in many applications and can be extremely large. Unfortunately, existing parallel eigensolvers do not scale well for these graphs due to the high communication overhead in the parallel matrix-vector multiplication (MatVec). We develop a MatVec algorithm based on 2D edge partitioning that significantly reduces the communication costs and embed it into a popular eigensolver library. We demonstrate that the enhanced eigensolver can attain two orders of magnitude performance improvement compared to the original on a state-of-art massively parallel machine. We illustrate the performance of the embedded MatVec by computing eigenvalues of a scale-free graph with 300 million vertices and 5 billion edges, the largest scale-free graph analyzed by any in-memory parallel eigensolver, to the best of our knowledge." ] }
1602.08710
1676654508
This work addresses problems related to interference mitigation in a single wireless body area network (WBAN). In this paper, we propose a distributed Combined carrier sense multiple access with collision avoidance (CSMA/CA) with Flexible Time division multiple access (TDMA) scheme for Interference Mitigation in relay-assisted intra-WBAN, namely, CFTIM. In the CFTIM scheme, non-interfering sources (transmitters) use CSMA/CA to communicate with relays, whilst high-interfering sources and the best relays use flexible TDMA to communicate with the coordinator (C) through stable channels. Simulation results of the proposed scheme are compared to those of other schemes, and the CFTIM scheme outperforms them in all cases. These results prove that the proposed scheme mitigates interference, extends the WBAN energy lifetime and improves the throughput. To further reduce the interference level, we analytically show that the outage probability can be effectively reduced to a minimum.
The authors of @cite_9 propose an analytical framework to optimize the size of the relay zone around each source node. The authors of @cite_24 investigate the problem of coexistence of multiple non-coordinated WBANs; this study provides better co-channel interference mitigation. More recently, the work conducted in @cite_22 proposes a scheme for joint two-hop relay-assisted cooperative communication integrated with transmit power control. This scheme can reduce co-channel interference and extend the network lifetime.
{ "cite_N": [ "@cite_24", "@cite_9", "@cite_22" ], "mid": [ "2040452784", "2096287546", "2159297871" ], "abstract": [ "In this paper, coexistence of multiple mobile wireless body area networks (WBANs), where there is no coordination between WBANs, is investigated for the case where the WBAN-of-interest employs cooperative communications. A decode-and-forward protocol with two dual-hop links, two relays and selection combining (SC) at the hub (or gateway device) is chosen for the WBAN-of-interest. A suitable time-division-multiple-access (TDMA) scheme is used, enabling intra-network and inter-network operation, to allocate slots for each Tx Rx link packet transmission. Realistic channel models are employed with various amounts of shadowing, small-scale fading and white noise introduced between WBANs. For the WBAN-of-interest, many hours of measured channel gain data is employed to emulate the channel for this WBAN. It is found that the chosen cooperative communications provides significantly better co-channel interference mitigation than single-link star topology WBAN communications in a mobile, dynamic, scenario, hence the signal-to-interference-plus-noise ratio (SINR) for 10 outage probability at the hub is greatly improved by up-to 12 dB. It is also demonstrated that the location of the hub, given three typical locations, has significant impact on the performance of the cooperative WBAN communications.", "We study the use of wireless relay channel in a one-hop sensor network with random packet arrival. Exploiting regular sensor nodes to serve in a wireless relay channel can increase the overall network capacity. However, due to asynchronous source transmission, the relays interfere with each other's transmission and reception. The fundamental trade-off between these two issues leads us to an optimization problem in which we find the optimum relay zone radius to maximize the overall sum rate of the network. We also propose a MAC protocol to choose the optimum number of sources allowed to transmit under this setting. The overall system capacity is proven to increase significantly under the proposed scheme, compared with cases where relay nodes are not exploited or where the relay zone radius is suboptimal", "A scheme for two-hop relay-assisted cooperative communications integrated with transmit power control, based on simple channel prediction, is presented. A large set of empirical on- and inter-body channel data is employed to model various scenarios of wireless body area network (WBAN) communica- tions, from one isolated WBAN up to 10 closely located WBANs coexisting. Our study shows that relay assisted power control can reduce approximately 60 circuit power consumption from that of constant transmission at 0 dBm, without much loss in reliability. Further, interference mitigation is significantly enhanced over constant transmission at −5 dBm, with similar power consumption. Such performance is maintained from 2 to 10 closely-located WBANs coexisting. And the joint algorithm works best for one isolated WBAN. A trade-off between power saving and interference mitigation is motivated, taking remaining sensor-battery level, amount of interference and on-body channel quality into account. Index Terms—Cooperative communications, interference mit- igation, transmit power control, wireless body area networks." ] }
1602.08658
2292788438
In this paper, we propose a distributed multi-hop interference avoidance algorithm, namely, IAA, to avoid co-channel interference inside a wireless body area network (WBAN). Our proposal adopts carrier sense multiple access with collision avoidance (CSMA/CA) between sources and relays and a flexible time division multiple access (FTDMA) between relays and the coordinator. The proposed scheme enables low-interfering nodes to transmit their messages using the base channel. Depending on the situation, high-interfering nodes double their contention windows (CW) and possibly use a switched orthogonal channel. Simulation results show that the proposed scheme has a far better minimum SINR (12 dB improvement) and a longer energy lifetime than other schemes (power control and opportunistic relaying). Additionally, we validate our proposal in a theoretical analysis and also propose a probabilistic approach to prove that the outage probability can be effectively reduced to a minimum.
Recent studies show that multi-hop schemes have lower power consumption than one-hop schemes; indeed, using relays reduces WBAN interference and, consequently, power consumption. The authors of @cite_3 propose a single-relay cooperative scheme where the best relay is selected in a distributed fashion. Also, the authors of @cite_6 propose a prediction-based dynamic relay transmission scheme through which the problems of "when to relay" and "who to relay" are decided in an optimal way. The interference problem among multiple co-located WBANs is investigated in @cite_3 . The authors show that cooperative two-relay communication with opportunistic relaying significantly mitigates WBAN interference.
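As a concrete reference for the relay-selection step, the sketch below applies the classic max-min rule often used for distributed best-relay selection in opportunistic relaying: pick the relay whose weaker hop is strongest. It is a generic illustration rather than the exact criterion of @cite_3 or @cite_6 , and the relay names and channel gains are invented.

```python
def select_best_relay(gains_src_to_relay, gains_relay_to_hub):
    """Pick the relay whose weaker hop is strongest (max-min rule), a common
    distributed criterion for opportunistic relaying."""
    return max(gains_src_to_relay,
               key=lambda r: min(gains_src_to_relay[r], gains_relay_to_hub[r]))

# hypothetical per-relay channel gains (linear scale) for one source
src = {"relayA": 0.8, "relayB": 0.5, "relayC": 0.9}
hub = {"relayA": 0.2, "relayB": 0.6, "relayC": 0.4}
print(select_best_relay(src, hub))   # relayB: its bottleneck hop (0.5) is best
```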
{ "cite_N": [ "@cite_6", "@cite_3" ], "mid": [ "2003424643", "2156149655" ], "abstract": [ "To support long-term pervasive healthcare services, communications in Wireless Body Area Networks (WBANs) need to be both reliable and energy-efficient. As a cooperative transmission method, relay transmission scheme works effectively in resisting shadowing effect and improving reliability in WBANs. However, the extra energy consumption introduced by relay transmission is very high, which can shorten the lifetime of the whole network. In this paper, temporal and spatial correlation models for on-body channels are first presented to better characterize the slow fading effect of on-body channels. Then a prediction-based dynamic relay transmission (PDRT) scheme that makes full use of the correlation characteristics of on-body channels is proposed. In the PDRT scheme, “when to relay” and “who to relay” are decided in an optimal way based on the last known channel states. Moreover, neither extra signaling procedure nor dedicated channel sensing period is needed. Simulation results show that the PDRT scheme achieves significant performance improvement in energy efficiency, as well as ensuring the transmission reliability.", "In this paper, a cooperative two-hop communication scheme, together with opportunistic relaying (OR), is applied within a mobile wireless body area network (WBAN). Its effectiveness in interference mitigation is investigated in a scenario where there are multiple closely-located networks. Due to a typical WBAN's nature, no coordination is used among different WBANs. A suitable time-division-multiple-access (TDMA) scheme is adopted as both an intra-network and also an internetwork access scheme. Extensive on-body and off-body channel gain measurements are employed to gauge performance, which are overlaid to simulate a realistic WBAN working environment. It is found that opportunistic relaying is able to improve the signal-to-interference-plus-noise ratio (SINR) performance at an outage probability of 10 by an average of 5 dB, and it is also shown that it can reduce level crossing rate (LCR) significantly at low SINRs. Furthermore, this scheme is more efficient when on-body channels fade more rapidly." ] }
1602.08658
2292788438
In this paper, we propose a distributed multi-hop interference avoidance algorithm, namely, IAA, to avoid co-channel interference inside a wireless body area network (WBAN). Our proposal adopts carrier sense multiple access with collision avoidance (CSMA/CA) between sources and relays and a flexible time division multiple access (FTDMA) between relays and the coordinator. The proposed scheme enables low-interfering nodes to transmit their messages using the base channel. Depending on the situation, high-interfering nodes double their contention windows (CW) and possibly use a switched orthogonal channel. Simulation results show that the proposed scheme has a far better minimum SINR (12 dB improvement) and a longer energy lifetime than other schemes (power control and opportunistic relaying). Additionally, we validate our proposal in a theoretical analysis and also propose a probabilistic approach to prove that the outage probability can be effectively reduced to a minimum.
The authors of @cite_7 investigate the problem of coexistence of multiple non-coordinated WBANs; this study provides better co-channel interference mitigation. More recently, the work conducted in @cite_9 proposes a scheme for joint two-hop relay-assisted cooperative communication integrated with transmit power control. This scheme can reduce co-channel interference and extend the network lifetime.
{ "cite_N": [ "@cite_9", "@cite_7" ], "mid": [ "2159297871", "2040452784" ], "abstract": [ "A scheme for two-hop relay-assisted cooperative communications integrated with transmit power control, based on simple channel prediction, is presented. A large set of empirical on- and inter-body channel data is employed to model various scenarios of wireless body area network (WBAN) communica- tions, from one isolated WBAN up to 10 closely located WBANs coexisting. Our study shows that relay assisted power control can reduce approximately 60 circuit power consumption from that of constant transmission at 0 dBm, without much loss in reliability. Further, interference mitigation is significantly enhanced over constant transmission at −5 dBm, with similar power consumption. Such performance is maintained from 2 to 10 closely-located WBANs coexisting. And the joint algorithm works best for one isolated WBAN. A trade-off between power saving and interference mitigation is motivated, taking remaining sensor-battery level, amount of interference and on-body channel quality into account. Index Terms—Cooperative communications, interference mit- igation, transmit power control, wireless body area networks.", "In this paper, coexistence of multiple mobile wireless body area networks (WBANs), where there is no coordination between WBANs, is investigated for the case where the WBAN-of-interest employs cooperative communications. A decode-and-forward protocol with two dual-hop links, two relays and selection combining (SC) at the hub (or gateway device) is chosen for the WBAN-of-interest. A suitable time-division-multiple-access (TDMA) scheme is used, enabling intra-network and inter-network operation, to allocate slots for each Tx Rx link packet transmission. Realistic channel models are employed with various amounts of shadowing, small-scale fading and white noise introduced between WBANs. For the WBAN-of-interest, many hours of measured channel gain data is employed to emulate the channel for this WBAN. It is found that the chosen cooperative communications provides significantly better co-channel interference mitigation than single-link star topology WBAN communications in a mobile, dynamic, scenario, hence the signal-to-interference-plus-noise ratio (SINR) for 10 outage probability at the hub is greatly improved by up-to 12 dB. It is also demonstrated that the location of the hub, given three typical locations, has significant impact on the performance of the cooperative WBAN communications." ] }
1602.08658
2292788438
In this paper, we propose a distributed multi-hop interference avoidance algorithm, namely, IAA, to avoid co-channel interference inside a wireless body area network (WBAN). Our proposal adopts carrier sense multiple access with collision avoidance (CSMA/CA) between sources and relays and a flexible time division multiple access (FTDMA) between relays and the coordinator. The proposed scheme enables low-interfering nodes to transmit their messages using the base channel. Depending on the situation, high-interfering nodes double their contention windows (CW) and possibly use a switched orthogonal channel. Simulation results show that the proposed scheme has a far better minimum SINR (12 dB improvement) and a longer energy lifetime than other schemes (power control and opportunistic relaying). Additionally, we validate our proposal in a theoretical analysis and also propose a probabilistic approach to prove that the outage probability can be effectively reduced to a minimum.
On the other hand, other works show that a TDMA scheme is an attractive solution to avoid interference within an intra-WBAN. The approach of @cite_10 enables two or three coexisting WBANs to agree on a common TDMA schedule to reduce the interference. The work in @cite_1 adopts a TDMA polling-based scheme for traffic coordination inside a WBAN and a carrier sensing (CS) mechanism to deal with inter-WBAN interference.
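To illustrate the flavor of such a common schedule, here is a toy construction that merges the nodes of coexisting WBANs into a single collision-free TDMA superframe; it is a drastic simplification of the scheme in @cite_10 (which also negotiates when schedules are exchanged), and the WBAN and node names are invented. Note that, as the cited work observes, the per-node latency grows with the total number of scheduled nodes.

```python
def common_tdma_schedule(wbans):
    """Build one shared superframe: every node of every coexisting WBAN
    gets its own slot, so no two nodes ever transmit simultaneously.

    wbans: dict wban_id -> list of node names.
    Returns dict slot_number -> (wban_id, node).
    """
    schedule, slot = {}, 0
    for wban_id, nodes in sorted(wbans.items()):
        for node in nodes:
            schedule[slot] = (wban_id, node)
            slot += 1        # the price of avoiding collisions: longer frames
    return schedule

print(common_tdma_schedule({"WBAN1": ["ecg", "spo2"], "WBAN2": ["emg"]}))
# {0: ('WBAN1', 'ecg'), 1: ('WBAN1', 'spo2'), 2: ('WBAN2', 'emg')}
```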
{ "cite_N": [ "@cite_1", "@cite_10" ], "mid": [ "2087357817", "2032297769" ], "abstract": [ "This paper investigates the issue of interference mitigation in wireless body area networks (BANs). Although several approaches have been proposed in BAN standard IEEE 802.15.6, they increase transmission latency or energy cost, and do not mitigate interference effectively. In order to avoid both intra- and inter-BAN interference, we present a MAC protocol with two-layer interference mitigation (2L-MAC) for BANs. Considering the QoS requirements of BANs, the proposed protocol not only avoids packet collisions but also reduces transmission delay and energy consumption in sensors. Moreover, channel switching is triggered whenever a BAN needs to acquire more bandwidth. Simulation results show that our protocol outperforms other protocols in terms of delivery rate, latency and energy saving.", "A Wireless Body Area Network (WBAN) consists of different tiny physiological wearable sensors to monitor the vital signs of a human body and these sensor nodes transmit in real-time the sensed physiological information to the on-body coordinator or server over a wireless medium. Any radio frequency based wireless device suffers from interference due to the existence of other wireless devices operating in the same frequency band. In this paper we address the problem of interference when multiple WBANs come in the proximity of one another. We propose a TDMA-based solution that creates a common schedule between these WBANs, so that there is seamless communication with their sensors without interference among them, while increasing the cost of latency per node. Also, when these TDMA based WBANs come in the close proximity of one another, they sense the existence of other interfering WBANs. Our solution proposed a scheme that defines the time when to exchange their TDMA schedules. We simulated the proposed solution using NS-3 network simulator and it is observed that there is improvement in the percentage of packet delivery by using the proposed scheme. It is also observed that the increase of speed of WBANs leads to decrease in the percentage of packet delivery." ] }
1602.08658
2292788438
In this paper, we propose a distributed multi-hop interference avoidance algorithm, namely, IAA, to avoid co-channel interference inside a wireless body area network (WBAN). Our proposal adopts carrier sense multiple access with collision avoidance (CSMA/CA) between sources and relays and a flexible time division multiple access (FTDMA) between relays and the coordinator. The proposed scheme enables low-interfering nodes to transmit their messages using the base channel. Depending on the situation, high-interfering nodes double their contention windows (CW) and possibly use a switched orthogonal channel. Simulation results show that the proposed scheme has a far better minimum SINR (12 dB improvement) and a longer energy lifetime than other schemes (power control and opportunistic relaying). Additionally, we validate our proposal in a theoretical analysis and also propose a probabilistic approach to prove that the outage probability can be effectively reduced to a minimum.
Other research focuses on the performance at the coordinator, which calculates the SINR periodically; this calculation enables C to command its nodes to select an appropriate interference mitigation scheme @cite_11 . Other studies @cite_10 analyze the performance of a reference WBAN in terms of bit error rate, throughput and lifetime, which are improved by adopting an optimized time-hopping code assignment strategy. The work in @cite_4 considers a WBAN where the coordinator periodically queries the sensors to transmit data. The network adopts CSMA/CA, and the nodes use link adaptation to select the modulation scheme according to the experienced channel quality.
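The link-adaptation step can be sketched as a simple threshold rule. The thresholds and modulation names below are invented; the low-SIR branch follows the observation of @cite_4 that raising the bit rate shortens channel occupancy and therefore the collision probability.

```python
def pick_modulation(snr_db, sir_db):
    """Threshold-based link adaptation sketch (all thresholds invented)."""
    if snr_db > 20 and sir_db < 5:
        # clean channel but heavy interference: transmit faster to shorten
        # channel occupancy and thus reduce the collision probability
        return "16-QAM"
    if snr_db > 15:
        return "QPSK"
    return "BPSK"   # poor channel: fall back to the most robust scheme

print(pick_modulation(snr_db=22, sir_db=3))   # -> 16-QAM
```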
{ "cite_N": [ "@cite_10", "@cite_4", "@cite_11" ], "mid": [ "2032297769", "2035002549", "2073051949" ], "abstract": [ "A Wireless Body Area Network (WBAN) consists of different tiny physiological wearable sensors to monitor the vital signs of a human body and these sensor nodes transmit in real-time the sensed physiological information to the on-body coordinator or server over a wireless medium. Any radio frequency based wireless device suffers from interference due to the existence of other wireless devices operating in the same frequency band. In this paper we address the problem of interference when multiple WBANs come in the proximity of one another. We propose a TDMA-based solution that creates a common schedule between these WBANs, so that there is seamless communication with their sensors without interference among them, while increasing the cost of latency per node. Also, when these TDMA based WBANs come in the close proximity of one another, they sense the existence of other interfering WBANs. Our solution proposed a scheme that defines the time when to exchange their TDMA schedules. We simulated the proposed solution using NS-3 network simulator and it is observed that there is improvement in the percentage of packet delivery by using the proposed scheme. It is also observed that the increase of speed of WBANs leads to decrease in the percentage of packet delivery.", "In this paper we consider an IEEE 802.15.4-based Wireless Body Area Network, where wearable sensor devices are distributed on a body and have to send the measured data to a coordinator. The Carrier Sense Multiple Access with Collision Avoidance algorithm defined by the standard is used as Medium Access Control protocol, whereas different modulation schemes are assumed to be available at the physical layer. We propose a novel Link Adaptation (LA) strategy, where nodes select the modulation scheme according to the experienced channel quality and level of interference. The novelty lays in the fact that in case of large Signal-to-Noise Ratio and low Signal-to-Interference Ratio nodes increase the bit rate, instead of reducing it, as largely done in the works present in the literature. The reduction of the bit rate, in fact, allows to decrease the time the channel is occupied and, therefore, the collision probability. Performance is evaluated in terms of packet error rate and results achieved with and without LA are compared. Results show that the proposed strategy improves performance.", "Considering the medical nature of the information carried in body area networks (BANs), interference from coexisting wireless networks or even other nearby BANs could create serious problems on their operational reliability. As practical implementation of power control mechanisms could be very challenging, link adaptation schemes can be an efficient alternative to preserve link quality while allowing more number of nodes to operate simultaneously. This paper proposes several interference mitigation schemes such as adaptive modulation as well as adaptive data rate and duty cycle for BANs. Interference mitigation factor is introduced as a measure to quantify the effectiveness of the proposed schemes. These schemes are relatively simple and well-suited for low power nodes in BANs that might be operating in environments with high level of interference." ] }
1602.08658
2292788438
In this paper, we propose a distributed multi-hop interference avoidance algorithm, namely, IAA to avoid co-channel interference inside a wireless body area network (WBAN). Our proposal adopts carrier sense multiple access with collision avoidance (CSMA CA) between sources and relays and a flexible time division multiple access (FTDMA) between relays and coordinator. The proposed scheme enables low interfering nodes to transmit their messages using base channel. Depending on suitable situations, high interfering nodes double their contention windows (CW) and probably use switched orthogonal channel. Simulation results show that proposed scheme has far better minimum SINR (12dB improvement) and longer energy lifetime than other schemes (power control and opportunistic relaying). Additionally, we validate our proposal in a theoretical analysis and also propose a probabilistic approach to prove the outage probability can be effectively reduced to the minimal.
The research in @cite_8 addresses inter-WBAN scheduling and interference by adopting a QoS-based preemptive priority MAC scheduling approach, the authors of @cite_2 propose a distributed interference detection and mitigation scheme based on adaptive channel hopping, and the work in @cite_12 proposes a dynamic resource allocation scheme that avoids interference among multiple coexisting WBANs by assigning orthogonal sub-channels to the highly interfering nodes.
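A minimal sketch of the orthogonal sub-channel idea of @cite_12 , under the assumption that each WBAN already knows which nodes fall in an interference region (IR): IR nodes receive distinct sub-channels while the remaining nodes can share a common one for spatial reuse. The function name, node labels, and channel count are hypothetical.

```python
# Hypothetical sketch of orthogonal sub-channel assignment for nodes in
# an interference region (IR); only the allocation logic is illustrated.

def assign_subchannels(nodes, interference_region, n_subchannels):
    """Give each IR node its own sub-channel; all others share channel 0."""
    allocation, next_channel = {}, 1
    for node in nodes:
        if node in interference_region:
            if next_channel >= n_subchannels:
                raise ValueError("not enough orthogonal sub-channels")
            allocation[node] = next_channel
            next_channel += 1
        else:
            allocation[node] = 0  # spatial reuse outside the IR
    return allocation

print(assign_subchannels(["a", "b", "c", "d"], {"b", "d"}, 4))
# -> {'a': 0, 'b': 1, 'c': 0, 'd': 2}
```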
{ "cite_N": [ "@cite_2", "@cite_12", "@cite_8" ], "mid": [ "", "2083602816", "2005620533" ], "abstract": [ "", "In this paper, a dynamic resource allocation scheme is proposed to avoid interference amongst coexisting Wireless Body Area Networks (WBAN). In the proposed scheme, each WBAN generates a table consisting of interfering nodes from coexisting WBANs in its vicinity. Then each WBAN broadcasts this table to its neighbors, which allows for efficient interpretation of an Interference Region (IR) between each pair of WBANs. The nodes in the IR are later allocated orthogonal sub-channels; whilst nodes that do not exist in the IR can potentially transmit in the same time interval. We further demonstrate a precise trade- off between the minimum interference level and spatial reuse. Simulation results show that our proposed scheme has far better spectral efficiency compared to the conventional orthogonal schemes, whilst maintaining an acceptable interference level. We also provide mathematical analysis on the proposed scheme to validate its efficiency for increasing spectral efficiency and avoiding interference. To further reduce the interference level, we propose a probabilistic approach, and analytically show that the outage probability can be effectively reduced at the cost of very small change in the spatial reuse factor. Index Terms—Wireless Body Area Networks, Spectral Effi- ciency, IEEE 802.15.6, Interference Mitigation", "Wireless Body Area Networks (WBANs) have the potential for extensive use in health care monitoring. To provide optimum network utilization, it is important to efficiently schedule multiple co-existing WBANs which could possibly suffer from high degree of interference. A graceful coexistence can be feasible by appropriately scheduling transmissions from different WBANs. In this paper, we use a recent standard of IEEE 802.15.6 (TG6), which has been finalized in May 2012, to address the problems of intra and inter-WBAN interference. This standard offers several advantages for medical applications as compared to IEEE 802.15.4 which has been previously adopted for numerous healthcare applications. In this paper, we propose a QoS based MAC scheduling approach to avoid inter-WBAN interference and introduce a fuzzy inference engine for intra-WBAN scheduling so as to avoid interference within WBANs." ] }
1602.08581
2289102908
Existing video indexing and retrieval methods on popular web-based multimedia sharing websites are based on user-provided sparse tagging. This paper proposes a very specific way of searching for video clips, based on the content of the video. We present our work on Content-based Video Indexing and Retrieval using the Correspondence-Latent Dirichlet Allocation (corr-LDA) probabilistic framework. This is a model that provides for auto-annotation of videos in a database with textual descriptors, and brings the added benefit of utilizing the semantic relations between the content of the video and text. We use the concept-level matching provided by corr-LDA to build correspondences between text and multimedia, with the objective of retrieving content with increased accuracy. In our experiments, we employ only the audio components of the individual recordings and compare our results with an SVM-based approach.
Video query by semantic keywords is one of the most difficult problems in multimedia data retrieval. The difficulty lies in the mapping between the low-level video representation and high-level semantics. In @cite_19 , the multimedia content-access problem is formulated as a multimedia pattern recognition problem, and a probabilistic framework is proposed to map the low-level video representation to high-level semantics using probabilistic multimedia objects, called multijects, and networks of such objects, called multinets.
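As a toy illustration of this idea (far simpler than the Bayesian belief network used as the multinet in @cite_19 ), individual concept detectors output probabilities that can be fused into a higher-level concept score; the naive independence assumption and the concept names here are ours:

```python
# Toy fusion of "multiject" detector outputs into a higher-level concept
# score, assuming (naively) that the low-level cues are independent.

def outdoor_score(p_sky: float, p_greenery: float, p_water: float) -> float:
    """P(at least one outdoor cue is present) under independence."""
    p_none = (1 - p_sky) * (1 - p_greenery) * (1 - p_water)
    return 1 - p_none

print(round(outdoor_score(0.7, 0.4, 0.1), 3))  # -> 0.838
```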
{ "cite_N": [ "@cite_19" ], "mid": [ "2138043589" ], "abstract": [ "Semantic filtering and retrieval of multimedia content is crucial for efficient use of the multimedia data repositories. Video query by semantic keywords is one of the most difficult problems in multimedia data retrieval. The difficulty lies in the mapping between low-level video representation and high-level semantics. We therefore formulate the multimedia content access problem as a multimedia pattern recognition problem. We propose a probabilistic framework for semantic video indexing, which call support filtering and retrieval and facilitate efficient content-based access. To map low-level features to high-level semantics we propose probabilistic multimedia objects (multijects). Examples of multijects in movies include explosion, mountain, beach, outdoor, music etc. Semantic concepts in videos interact and to model this interaction explicitly, we propose a network of multijects (multinet). Using probabilistic models for six site multijects, rocks, sky, snow, water-body forestry greenery and outdoor and using a Bayesian belief network as the multinet we demonstrate the application of this framework to semantic indexing. We demonstrate how detection performance can be significantly improved using the multinet to take interconceptual relationships into account. We also show how the multinet can fuse heterogeneous features to support detection based on inference and reasoning." ] }
1602.08581
2289102908
Existing video indexing and retrieval methods on popular web-based multimedia sharing websites are based on user-provided sparse tagging. This paper proposes a very specific way of searching for video clips, based on the content of the video. We present our work on Content-based Video Indexing and Retrieval using the Correspondence-Latent Dirichlet Allocation (corr-LDA) probabilistic framework. This is a model that provides for auto-annotation of videos in a database with textual descriptors, and brings the added benefit of utilizing the semantic relations between the content of the video and text. We use the concept-level matching provided by corr-LDA to build correspondences between text and multimedia, with the objective of retrieving content with increased accuracy. In our experiments, we employ only the audio components of the individual recordings and compare our results with an SVM-based approach.
In recent years, research has focused on the use of internal features of images and videos computed in an automated or semi-automated way @cite_1 . Automated analysis calculates statistics that can be approximately correlated to content features, which is useful because it provides information without costly human interaction.
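One crude example of such an automatically computed statistic (an illustration only, not the statistical motion model of @cite_1 ) is the mean absolute inter-frame difference, which correlates loosely with the amount of motion in a clip:

```python
import numpy as np

def motion_activity(frames) -> float:
    """Mean absolute inter-frame difference, a crude motion statistic."""
    diffs = [np.abs(frames[i + 1].astype(float) - frames[i].astype(float)).mean()
             for i in range(len(frames) - 1)]
    return float(np.mean(diffs)) if diffs else 0.0

rng = np.random.default_rng(0)
clip = [rng.integers(0, 256, (4, 4)) for _ in range(3)]  # synthetic grayscale frames
print(motion_activity(clip))
```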
{ "cite_N": [ "@cite_1" ], "mid": [ "2136185042" ], "abstract": [ "We propose an original approach for the characterization of video dynamic content with a view to supplying new functionalities for motion-based video indexing and retrieval with query by example. We have designed a statistical framework for motion content description without any prior motion segmentation, and for motion-based video classification and retrieval. Contrary to other proposed methods, we do not extract from a given video sequence a set of motion features but we identify a global probabilistic model, expressed as a temporal Gibbs random field. This leads to define a efficient statistical motion-based similarity measure, relying on the computation of conditional likelihoods, to discriminate various motion contents. We have carried out experiments on a set of 100 video sequences, representative of various motion situations (temporal textures as fire and crowd motions, sport videos, car sequences, low motion activity examples). We have obtained promising results both for the video classification step and for the retrieval process." ] }
1602.08581
2289102908
Existing video indexing and retrieval methods on popular web-based multimedia sharing websites are based on user-provided sparse tagging. This paper proposes a very specific way of searching for video clips, based on the content of the video. We present our work on Content-based Video Indexing and Retrieval using the Correspondence-Latent Dirichlet Allocation (corr-LDA) probabilistic framework. This is a model that provides for auto-annotation of videos in a database with textual descriptors, and brings the added benefit of utilizing the semantic relations between the content of the video and text. We use the concept-level matching provided by corr-LDA to build correspondences between text and multimedia, with the objective of retrieving content with increased accuracy. In our experiments, we employ only the audio components of the individual recordings and compare our results with an SVM-based approach.
The common strategy for automatic indexing has been to use syntactic features alone. However, owing to the complexity of operating at this level, there has been a paradigm shift toward research concerned with identifying semantic features @cite_3 . User-friendly Content-Based Retrieval (CBR) systems operating at the semantic level would identify motion features as key, besides other features such as color and objects, because motion (whether from camera movement or shot editing) adds to the meaning of the content. The focus of existing motion-based systems has mainly been on identifying the principal object and performing retrieval based on cues derived from its motion. With the objective of deriving semantic-level indices, learning tools become important: a learning (training) phase followed by a classification phase are the two steps commonly envisioned in CBR systems. Rather than the user mapping the features to semantic categories, the task is shifted to the system, which learns from pre-classified samples and determines the patterns effectively. A concise review of these techniques is provided in @cite_9 @cite_0 .
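The two-phase pipeline can be sketched with any off-the-shelf classifier; the features, labels, and values below are invented purely for illustration:

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical 3-D feature vectors (e.g., motion activity, mean color,
# average shot length) for pre-classified training clips.
X_train = np.array([[0.9, 0.2, 1.5], [0.8, 0.3, 1.2],   # "sports"
                    [0.1, 0.7, 6.0], [0.2, 0.6, 5.5]])  # "interview"
y_train = ["sports", "sports", "interview", "interview"]

clf = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)  # learning phase
print(clf.predict([[0.85, 0.25, 1.4]]))                       # classification phase
```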
{ "cite_N": [ "@cite_0", "@cite_9", "@cite_3" ], "mid": [ "2099736636", "2157937933", "" ], "abstract": [ "Video indexing and retrieval have a wide spectrum of promising applications, motivating the interest of researchers worldwide. This paper offers a tutorial and an overview of the landscape of general strategies in visual content-based video indexing and retrieval, focusing on methods for video structure analysis, including shot boundary detection, key frame extraction and scene segmentation, extraction of features including static key frame features, object features and motion features, video data mining, video annotation, video retrieval including query interfaces, similarity measure and relevance feedback, and video browsing. Finally, we analyze future research directions.", "This study surveys current trends methods in video retrieval. The major themes covered by the study include shot segmentation, key frame extraction, feature extraction, clustering, indexing and video retrieval-by similarity, probabilistic, transformational, refinement and relevance feedback. This work has done in an aim to assist the upcoming researchers in the field of video retrieval, to know about the techniques and methods available for video retrieval.", "" ] }
1602.08581
2289102908
Existing video indexing and retrieval methods on popular web-based multimedia sharing websites are based on user-provided sparse tagging. This paper proposes a very specific way of searching for video clips, based on the content of the video. We present our work on Content-based Video Indexing and Retrieval using the Correspondence-Latent Dirichlet Allocation (corr-LDA) probabilistic framework. This is a model that provides for auto-annotation of videos in a database with textual descriptors, and brings the added benefit of utilizing the semantic relations between the content of the video and text. We use the concept-level matching provided by corr-LDA to build correspondences between text and multimedia, with the objective of retrieving content with increased accuracy. In our experiments, we employ only the audio components of the individual recordings and compare our results with an SVM-based approach.
In the past, several researchers have considered the problem of building semantic relations or correspondences for modeling annotated data; we mention here several important contributions to this area. In @cite_20 the authors investigate the problem of auto-annotation and region naming for images using a Mixture of Multi-Modal Latent Dirichlet Allocation (MoM-LDA) and multi-modal hierarchical aspect models. The use of canonical correlation analysis (CCA) for joint modeling is proposed in @cite_13 and @cite_10 : the idea is to map both images and text into a common space and then define various similarity metrics, which can be used for both indexing and retrieval. Kernel CCA is used by @cite_10 , where the problem is formulated as maximizing an affinity function for image-sentence pairs. A Rank SVM-based approach is proposed in @cite_12 , where linear classifiers are trained for an image with relevant and irrelevant captions. Many other novel methods have been proposed for establishing relations between images and text @cite_14 @cite_15 @cite_18 @cite_17 . The Labeled LDA model is introduced by @cite_8 to solve the problem of credit attribution by learning word-tag correspondences.
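The common-space idea behind the CCA approaches can be sketched in a few lines (a minimal sketch with synthetic, hypothetical features; the cited works use far richer representations):

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(1)
X_img = rng.standard_normal((50, 10))                            # image features
X_txt = 0.5 * X_img[:, :5] + 0.1 * rng.standard_normal((50, 5))  # correlated text features

cca = CCA(n_components=2).fit(X_img, X_txt)
U, V = cca.transform(X_img, X_txt)  # both modalities in one shared space

# Cross-modal retrieval: rank captions by cosine similarity to image 0.
q = U[0]
scores = (V @ q) / (np.linalg.norm(V, axis=1) * np.linalg.norm(q))
print(int(np.argmax(scores)))  # index of the best-matching caption (ideally 0)
```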
{ "cite_N": [ "@cite_13", "@cite_18", "@cite_14", "@cite_8", "@cite_15", "@cite_20", "@cite_10", "@cite_12", "@cite_17" ], "mid": [ "2106277773", "1584193343", "2125238156", "1969486090", "1897761818", "2137471889", "68733909", "2014137726", "2167407098" ], "abstract": [ "The problem of joint modeling the text and image components of multimedia documents is studied. The text component is represented as a sample from a hidden topic model, learned with latent Dirichlet allocation, and images are represented as bags of visual (SIFT) features. Two hypotheses are investigated: that 1) there is a benefit to explicitly modeling correlations between the two components, and 2) this modeling is more effective in feature spaces with higher levels of abstraction. Correlations between the two components are learned with canonical correlation analysis. Abstraction is achieved by representing text and images at a more general, semantic level. The two hypotheses are studied in the context of the task of cross-modal document retrieval. This includes retrieving the text that most closely matches a query image, or retrieving the images that most closely match a query text. It is shown that accounting for cross-modal correlations and semantic abstraction both improve retrieval accuracy. The cross-modal model is also shown to outperform state-of-the-art image retrieval systems on a unimodal retrieval task.", "Learning visual classifiers for object recognition from weakly labeled data requires determining correspondence between image regions and semantic object classes. Most approaches use co-occurrence of \"nouns\" and image features over large datasets to determine the correspondence, but many correspondence ambiguities remain. We further constrain the correspondence problem by exploiting additional language constructs to improve the learning process from weakly labeled data. We consider both \"prepositions\" and \"comparative adjectives\" which are used to express relationships between objects. If the models of such relationships can be determined, they help resolve correspondence ambiguities. However, learning models of these relationships requires solving the correspondence problem. We simultaneously learn the visual features defining \"nouns\" and the differential visual features defining such \"binary-relationships\" using an EM-based approach.", "A probabilistic formulation for semantic image annotation and retrieval is proposed. Annotation and retrieval are posed as classification problems where each class is defined as the group of database images labeled with a common semantic label. It is shown that, by establishing this one-to-one correspondence between semantic labels and semantic classes, a minimum probability of error annotation and retrieval are feasible with algorithms that are 1) conceptually simple, 2) computationally efficient, and 3) do not require prior semantic segmentation of training images. In particular, images are represented as bags of localized feature vectors, a mixture density estimated for each image, and the mixtures associated with all images annotated with a common semantic label pooled into a density estimate for the corresponding semantic class. This pooling is justified by a multiple instance learning argument and performed efficiently with a hierarchical extension of expectation-maximization. 
The benefits of the supervised formulation over the more complex, and currently popular, joint modeling of semantic label and visual feature distributions are illustrated through theoretical arguments and extensive experiments. The supervised formulation is shown to achieve higher accuracy than various previously published methods at a fraction of their computational cost. Finally, the proposed method is shown to be fairly robust to parameter tuning", "A significant portion of the world's text is tagged by readers on social bookmarking websites. Credit attribution is an inherent problem in these corpora because most pages have multiple tags, but the tags do not always apply with equal specificity across the whole document. Solving the credit attribution problem requires associating each word in a document with the most appropriate tags and vice versa. This paper introduces Labeled LDA, a topic model that constrains Latent Dirichlet Allocation by defining a one-to-one correspondence between LDA's latent topics and user tags. This allows Labeled LDA to directly learn word-tag correspondences. We demonstrate Labeled LDA's improved expressiveness over traditional LDA with visualizations of a corpus of tagged web pages from del.icio.us. Labeled LDA outperforms SVMs by more than 3 to 1 when extracting tag-specific document snippets. As a multi-label text classifier, our model is competitive with a discriminative baseline on a variety of datasets.", "Humans can prepare concise descriptions of pictures, focusing on what they find important. We demonstrate that automatic methods can do so too. We describe a system that can compute a score linking an image to a sentence. This score can be used to attach a descriptive sentence to a given image, or to obtain images that illustrate a given sentence. The score is obtained by comparing an estimate of meaning obtained from the image to one obtained from the sentence. Each estimate of meaning comes from a discriminative procedure that is learned us-ingdata. We evaluate on a novel dataset consisting of human-annotated images. While our underlying estimate of meaning is impoverished, it is sufficient to produce very good quantitative results, evaluated with a novel score that can account for synecdoche.", "We present a new approach for modeling multi-modal data sets, focusing on the specific case of segmented images with associated text. Learning the joint distribution of image regions and words has many applications. We consider in detail predicting words associated with whole images (auto-annotation) and corresponding to particular image regions (region naming). Auto-annotation might help organize and access large collections of images. Region naming is a model of object recognition as a process of translating image regions to words, much as one might translate from one language to another. Learning the relationships between image regions and semantic correlates (words) is an interesting example of multi-modal data mining, particularly because it is typically hard to apply data mining techniques to collections of images. We develop a number of models for the joint distribution of image regions and words, including several which explicitly learn the correspondence between regions and words. We study multi-modal and correspondence extensions to Hofmann's hierarchical clustering aspect model, a translation model adapted from statistical machine translation (), and a multi-modal extension to mixture of latent Dirichlet allocation (MoM-LDA). 
All models are assessed using a large collection of annotated images of real scenes. We study in depth the difficult problem of measuring performance. For the annotation task, we look at prediction performance on held out data. We present three alternative measures, oriented toward different types of task. Measuring the performance of correspondence methods is harder, because one must determine whether a word has been placed on the right region of an image. We can use annotation performance as a proxy measure, but accurate measurement requires hand labeled data, and thus must occur on a smaller scale. We show results using both an annotation proxy, and manually labeled data.", "The ability to associate images with natural language sentences that describe what is depicted in them is a hallmark of image understanding, and a prerequisite for applications such as sentence-based image search. In analogy to image search, we propose to frame sentence-based image annotation as the task of ranking a given pool of captions. We introduce a new benchmark collection for sentence-based image description and search, consisting of 8,000 images that are each paired with five different captions which provide clear descriptions of the salient entities and events. We introduce a number of systems that perform quite well on this task, even though they are only based on features that can be obtained with minimal supervision. Our results clearly indicate the importance of training on multiple captions per image, and of capturing syntactic (word order-based) and semantic features of these captions. We also perform an in-depth comparison of human and automatic evaluation metrics for this task, and propose strategies for collecting human judgments cheaply and on a very large scale, allowing us to augment our collection with additional relevance judgments of which captions describe which image. Our analysis shows that metrics that consider the ranked list of results for each query image or sentence are significantly more robust than metrics that are based on a single response per query. Moreover, our study suggests that the evaluation of ranking-based image description systems may be fully automated.", "Associating photographs with complete sentences that describe what is depicted in them is a challenging problem. This paper examines how an approach that is inspired by image tagging techniques which can scale to very large data sets performs on this much harder task, and examines some of the linguistic difficulties that this bag-of-words model faces.", "The last decade has witnessed great interest in research on content-based image retrieval. This has paved the way for a large number of new techniques and systems, and a growing interest in associated fields to support such systems. Likewise, digital imagery has expanded its horizon in many directions, resulting in an explosion in the volume of image data required to be organized. In this paper, we discuss some of the key contributions in the current decade related to image retrieval and automated image annotation, spanning 120 references. We also discuss some of the key challenges involved in the adaptation of existing image retrieval techniques to build useful systems that can handle real-world data. We conclude with a study on the trends in volume and impact of publications in the field with respect to venues journals and sub-topics." ] }
1602.08581
2289102908
Existing video indexing and retrieval methods on popular web-based multimedia sharing websites are based on user-provided sparse tagging. This paper proposes a very specific way of searching for video clips, based on the content of the video. We present our work on Content-based Video Indexing and Retrieval using the Correspondence-Latent Dirichlet Allocation (corr-LDA) probabilistic framework. This is a model that provides for auto-annotation of videos in a database with textual descriptors, and brings the added benefit of utilizing the semantic relations between the content of the video and text. We use the concept-level matching provided by corr-LDA to build correspondences between text and multimedia, with the objective of retrieving content with increased accuracy. In our experiments, we employ only the audio components of the individual recordings and compare our results with an SVM-based approach.
We propose to use an extension of LDA @cite_2 called Correspondence-LDA, introduced by Blei and Jordan @cite_5 , to comprehensively bridge the gap between the low-level video representation and high-level semantics. Our approach differs significantly from the approaches discussed above because we model the problem in a probabilistic framework in which both captions and videos are generated from a shared set of topics. Moreover, we use a bag-of-words representation for both the video content and the text. In particular, we differ from Blei's usage of Corr-LDA for image annotation and retrieval in the following two aspects:
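As background, the Corr-LDA generative process of @cite_5 can be sketched as follows, written here with video content words $v_n$ in place of image regions (a paraphrase, not the authors' exact notation); conditioning each caption word on a topic already used by the content is what enforces the text-multimedia correspondence:

\begin{align*}
\theta &\sim \mathrm{Dirichlet}(\alpha), \\
z_n \mid \theta &\sim \mathrm{Multinomial}(\theta), \quad v_n \mid z_n \sim p(v \mid z_n), & n &= 1, \dots, N, \\
y_m &\sim \mathrm{Uniform}(1, \dots, N), \quad w_m \mid y_m, \mathbf{z} \sim p(w \mid z_{y_m}, \beta), & m &= 1, \dots, M.
\end{align*}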
{ "cite_N": [ "@cite_5", "@cite_2" ], "mid": [ "2020842694", "1880262756" ], "abstract": [ "We consider the problem of modeling annotated data---data with multiple types where the instance of one type (such as a caption) serves as a description of the other type (such as an image). We describe three hierarchical probabilistic mixture models which aim to describe such data, culminating in correspondence latent Dirichlet allocation, a latent variable model that is effective at modeling the joint distribution of both types and the conditional distribution of the annotation given the primary type. We conduct experiments on the Corel database of images and captions, assessing performance in terms of held-out likelihood, automatic annotation, and text-based image retrieval.", "We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model." ] }
1602.08323
2283854151
We introduce an algorithm to do backpropagation on a spiking network. Our network is "spiking" in the sense that our neurons accumulate their activation into a potential over time, and only send out a signal (a "spike") when this potential crosses a threshold and the neuron is reset. Neurons only update their states when receiving signals from other neurons. Total computation of the network thus scales with the number of spikes caused by an input rather than network size. We show that the spiking Multi-Layer Perceptron behaves identically, during both prediction and training, to a conventional deep network of rectified-linear units, in the limiting case where we run the spiking network for a long time. We apply this architecture to a conventional classification problem (MNIST) and achieve performance very close to that of a conventional Multi-Layer Perceptron with the same architecture. Our network is a natural architecture for learning based on streaming event-based data, and is a stepping stone towards using spiking neural networks to learn efficiently on streaming data.
Spiking is not the only form of discretization. @cite_10 achieved impressive results by devising a scheme for sending back an approximate error gradient in a deep neural network using only low-precision (discrete) values, and additionally found that the discretization served as a good regularizer. Our approach (and spiking approaches in general) differs in that the inputs are processed sequentially over time, so it is not necessary to have finished processing all the information in a given input to make a prediction.
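A minimal sketch of the binary-weight idea of @cite_10 for a single linear layer with a squared-error loss (an illustration of the core loop only, not the authors' full training recipe): binarized weights are used in the forward and backward passes, while full-precision weights accumulate the updates.

```python
import numpy as np

rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((4, 3))      # real-valued "master" weights
x = rng.standard_normal(3)
target = np.zeros(4)

for step in range(100):
    Wb = np.sign(W)                        # binarize (np.sign(0) == 0; ignored here)
    y = Wb @ x                             # forward pass with binary weights
    grad_y = y - target                    # gradient of 0.5 * ||y - target||^2
    grad_W = np.outer(grad_y, x)           # backward pass, also through Wb
    W = np.clip(W - 0.01 * grad_W, -1, 1)  # update and clip the real weights

print(np.round(np.sign(W) @ x, 2))
```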
{ "cite_N": [ "@cite_10" ], "mid": [ "2963114950" ], "abstract": [ "Deep Neural Networks (DNN) have achieved state-of-the-art results in a wide range of tasks, with the best results obtained with large training sets and large models. In the past, GPUs enabled these breakthroughs because of their greater computational speed. In the future, faster computation at both training and test time is likely to be crucial for further progress and for consumer applications on low-power devices. As a result, there is much interest in research and development of dedicated hardware for Deep Learning (DL). Binary weights, i.e., weights which are constrained to only two possible values (e.g. -1 or 1), would bring great benefits to specialized DL hardware by replacing many multiply-accumulate operations by simple accumulations, as multipliers are the most space and power-hungry components of the digital implementation of neural networks. We introduce BinaryConnect, a method which consists in training a DNN with binary weights during the forward and backward propagations, while retaining precision of the stored weights in which gradients are accumulated. Like other dropout schemes, we show that BinaryConnect acts as regularizer and we obtain near state-of-the-art results with BinaryConnect on the permutation-invariant MNIST, CIFAR-10 and SVHN." ] }
1602.08290
2522584862
Carrier-sense multiple access collision avoidance networks have often been analyzed using a stylized model that is fully characterized by a vector of back-off rates and a conflict graph. Furthermore, for any achievable throughput vector @math , the existence of a unique vector @math of back-off rates that achieves this throughput vector was proven. Although this unique vector can in principle be computed iteratively, the required time complexity grows exponentially in the network size, making this only feasible for small networks. In this paper, we present an explicit formula for the unique vector of back-off rates @math needed to achieve any achievable throughput vector @math provided that the network has a chordal conflict graph. This class of networks contains a number of special cases of interest such as (inhomogeneous) line networks and networks with an acyclic conflict graph. Moreover, these back-off rates are such that the back-off rate of a node only depends on its own target throughput and the target throughput of its neighbors and can be determined in a distributed manner. We further indicate that back-off rates of this form cannot be obtained in general for networks with non-chordal conflict graphs. For general conflict graphs, we nevertheless show how to adapt the back-off rates when a node is added to the network when its interfering nodes form a clique in the conflict graph. Finally, we introduce a distributed chordal approximation algorithm for general conflict graphs, which is shown (using numerical examples) to be more accurate than the Bethe approximation.
The well-known product-form solution for the throughput of idealized CSMA/CA networks with a general conflict graph was first introduced in @cite_13 , where its insensitivity with respect to the packet length distribution was also shown. Insensitivity with respect to the length of the back-off period was proven much later in @cite_30 @cite_3 .
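Concretely, with back-off rates $\nu_i$ and conflict graph $G$, writing $\mathcal{I}(G)$ for the collection of independent sets of $G$ (the feasible activity states $x$, with $x_i = 1$ if node $i$ is transmitting), the product form states that the stationary distribution is

\[
\pi(x) \;=\; \frac{\prod_i \nu_i^{x_i}}{\sum_{y \in \mathcal{I}(G)} \prod_i \nu_i^{y_i}}, \qquad x \in \mathcal{I}(G),
\]

so that the throughput of node $i$ is $\theta_i = \sum_{x \in \mathcal{I}(G) \,:\, x_i = 1} \pi(x)$.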
{ "cite_N": [ "@cite_30", "@cite_13", "@cite_3" ], "mid": [ "2125890347", "2029650413", "2096448929" ], "abstract": [ "Random-access algorithms such as the Carrier-Sense Multiple-Access (CSMA) protocol provide a popular mechanism for distributed medium access control in large-scale wireless networks. In recent years, fairly tractable models have been shown to yield remarkably accurate throughput estimates for CSMA networks. These models typically assume that both the transmission durations and the back-off periods are exponentially distributed. We show that the stationary distribution of the system is in fact insensitive with respect to the transmission durations and the back-off times. These models primarily pertain to a saturated scenario where nodes always have packets to transmit. In reality however, the buffers may occasionally be empty as packets are randomly generated and transmitted over time. The resulting interplay between the activity states and the buffer contents gives rise to quite complicated queueing dynamics, and even establishing the stability criteria is usually a serious challenge. We explicitly identify the stability conditions in a few relevant scenarios, and illustrate the difficulties arising in other cases.", "In this paper, we use a Markov model to develop a product form solution to efficiently analyze the throughput of arbitrary topology multihop packet radio networks that employ a carrier sensing multiple access (CSMA) protocol with perfect capture. We consider both exponential and nonexponential packet length distributions. Our method preserves the dependence between nodes, characteristic of CSMA, and determines the joint probability that nodes are transmitting. The product form analysis provides the basis for an automated algorithm that determines the maximum throughput in networks of size up to 100 radio nodes. Numerical examples for several networks are presented. This model has led to many theoretical and practical extensions. These include determination of conditions for product form analysis to hold, extension to other access protocols, and consideration of acknowledgments.", "This work started out with our discovery of a pattern of throughput distributions among links in IEEE 802.11 networks from experimental results. This pattern gives rise to an easy computation method, which we term back-of-the-envelop (BoE) computation. For many network configurations, very accurate results can be obtained by BoE within minutes, if not seconds, by simple hand computation. This allows us to make shortcuts in performance evaluation, bypassing complicated stochastic analysis. To explain BoE, we construct a theory based on the model of an “ideal CSMA network” (ICN). The BoE computation method emerges from ICN when we take the limit c → 0, where c is the ratio of the mean backoff countdown time to the mean transmission time in the CSMA protocol. Importantly, we derive a new mathematical result: the link throughputs of ICN are insensitive to the distributions of the backoff countdown time and transmission time (packet duration) given the ratio of their means c. This insensitivity result explains why BoE works so well for practical 802.11 networks, in which the backoff countdown process is one that has memory, and in which the packet size can be arbitrarily distributed. Our results indicate that BoE is a good approximation technique for modest-size networks such as those typically seen in 802.11 deployments. 
Beyond explaining BoE, the theoretical framework of ICN is also a foundation for fundamental understanding of very-large-scale CSMA networks. In particular, ICN is similar to the Ising model in statistical physics used to explain phenomena arising out of the interactions of a large number of entities. Many new research directions arise out of the ICN model." ] }
1602.08290
2522584862
Carrier-sense multiple access collision avoidance networks have often been analyzed using a stylized model that is fully characterized by a vector of back-off rates and a conflict graph. Furthermore, for any achievable throughput vector @math , the existence of a unique vector @math of back-off rates that achieves this throughput vector was proven. Although this unique vector can in principle be computed iteratively, the required time complexity grows exponentially in the network size, making this only feasible for small networks. In this paper, we present an explicit formula for the unique vector of back-off rates @math needed to achieve any achievable throughput vector @math provided that the network has a chordal conflict graph. This class of networks contains a number of special cases of interest such as (inhomogeneous) line networks and networks with an acyclic conflict graph. Moreover, these back-off rates are such that the back-off rate of a node only depends on its own target throughput and the target throughput of its neighbors and can be determined in a distributed manner. We further indicate that back-off rates of this form cannot be obtained in general for networks with non-chordal conflict graphs. For general conflict graphs, we nevertheless show how to adapt the back-off rates when a node is added to the network when its interfering nodes form a clique in the conflict graph. Finally, we introduce a distributed chordal approximation algorithm for general conflict graphs, which is shown (using numerical examples) to be more accurate than the Bethe approximation.
In @cite_10 the fairness of large CSMA/CA networks was studied, and for regular networks (lines and grids) conditions were presented for when unfairness propagates within the network. The causes of unfairness in CSMA/CA networks were further analyzed in @cite_1 , where the equality of the receiving and sensing ranges was identified as an important factor. Fairness in CSMA/CA line networks was also studied in @cite_20 , where an explicit formula was presented to achieve fairness in a line network consisting of @math nodes in which each node interferes with the next and previous @math nodes. The existence of a unique vector of back-off rates achieving any achievable throughput vector was established in @cite_26 , which also discusses several iterative algorithms to compute this vector.
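A toy version of such an iterative computation, using the product form above with brute-force enumeration of independent sets (so feasible only for tiny graphs); the multiplicative update below is one simple heuristic in the spirit of the algorithms discussed in @cite_26 , not a specific algorithm from that work, and its convergence is not guaranteed in general:

```python
from itertools import combinations
from math import prod

def independent_sets(n, edges):
    """All independent sets of the conflict graph on nodes 0..n-1."""
    return [s for k in range(n + 1) for s in combinations(range(n), k)
            if all(not (u in s and v in s) for u, v in edges)]

def throughputs(nu, ind_sets, n):
    """Product-form throughput of each node for back-off rates nu."""
    weights = [prod(nu[i] for i in s) for s in ind_sets]
    Z = sum(weights)
    return [sum(w for s, w in zip(ind_sets, weights) if i in s) / Z
            for i in range(n)]

n, edges = 4, [(0, 1), (1, 2), (2, 3)]  # 4-node line conflict graph
target = [0.3] * n                      # an achievable throughput vector
ind_sets = independent_sets(n, edges)

nu = [1.0] * n
for _ in range(500):                    # multiplicative fixed-point update
    th = throughputs(nu, ind_sets, n)
    nu = [nu[i] * target[i] / th[i] for i in range(n)]

print([round(v, 3) for v in nu])
```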
{ "cite_N": [ "@cite_1", "@cite_26", "@cite_10", "@cite_20" ], "mid": [ "2166319816", "2054671542", "2003813460", "2158564748" ], "abstract": [ "Decentralized medium access control schemes for wireless networks based on CSMA CA, such as the IEEE 802.11 protocol, are known to be unfair. In multihop networks, they can even favor some links to such an extent that the others suffer from virtually complete starvation. This observation has been reported in quite a few works, but the factors causing it are still not well understood. We find that the capture effect and the relative values of the receive and carrier sensing ranges play a crucial role in the performance of these protocols. Using a simple Markovian model, we show that an idealized CSMA CA protocol suffers from starvation when the receiving and sensing ranges are equal, but quite surprisingly that this unfairness is reduced or even disappears when these two ranges are sufficiently different. We also show that starvation has a positive counterpart, namely organization. When its access intensity is large the protocol organizes the transmissions in space in such a way that it maximizes the number of concurrent successful transmissions. We obtain exact formula for the so-called spatial reuse of the protocol on large line networks.", "Random-access algorithms such as CSMA provide a popular mechanism for distributed medium access control in large-scale wireless networks. In recent years, tractable stochastic models have been shown to yield accurate throughput estimates for CSMA networks. We consider a saturated random-access network on a general conflict graph, and prove that for every feasible combination of throughputs, there exists a unique vector of back-off rates that achieves this throughput vector. This result entails proving global invertibility of the non-linear function that describes the throughputs of all nodes in the network. We present several numerical procedures for calculating this inverse, based on fixed-point iteration and Newton's method. Finally, we provide closed-form results for several special conflict graphs using the theory of Markov random fields.", "We characterize the fairness of decentralized medium access control protocols based on CSMA CA, in large multi-hop wireless networks. In particular, we show that the widely observed unfairness of these protocols in small network topologies does not always persist in large topologies. In regular networks, this unfairness is essentially due to the unfair advantage of nodes at the border of the network, which have a restricted neighborhood and thus a higher probability to access the communication channel. In large 1D lattice networks these border effects do not propagate inside the network, and nodes sufficiently far away from the border have equal access to the channel; as a result the protocol is long-term fair. In 2D lattice networks, we observe a phase transition. If the access intensity of the protocol is small, the border effects remain local and the protocol behaves similarly as in one-dimensional networks. However, if the access intensity of the protocol is large enough, the border effects persist independently of the size of the network and the protocol is strongly unfair. In irregular networks, the topology is inherently unfair. This unfairness increases with the access intensity of the protocol, but in a much smoother way than in regular two-dimensional networks. 
Finally, in situations where the protocol is long-term fair, we provide a characterization of its short-term fairness.", "Random-access networks may exhibit severe unfairness in throughput, in the sense that some nodes receive consistently higher throughput than others. Recent studies show that this unfairness is due to local differences in the neighborhood structure: nodes with fewer neighbors receive better access. We study the unfairness in saturated linear networks, and adapt the random-access CSMA protocol to remove the unfairness completely, by choosing the activation rates of nodes as a specific function of the number of neighbors. We then investigate the consequences of this choice of activation rates on the network-average saturated throughput, and we show that these rates perform well in non-saturated settings." ] }
1602.08290
2522584862
Carrier-sense multiple access collision avoidance networks have often been analyzed using a stylized model that is fully characterized by a vector of back-off rates and a conflict graph. Furthermore, for any achievable throughput vector @math , the existence of a unique vector @math of back-off rates that achieves this throughput vector was proven. Although this unique vector can in principle be computed iteratively, the required time complexity grows exponentially in the network size, making this only feasible for small networks. In this paper, we present an explicit formula for the unique vector of back-off rates @math needed to achieve any achievable throughput vector @math provided that the network has a chordal conflict graph. This class of networks contains a number of special cases of interest such as (inhomogeneous) line networks and networks with an acyclic conflict graph. Moreover, these back-off rates are such that the back-off rate of a node only depends on its own target throughput and the target throughput of its neighbors and can be determined in a distributed manner. We further indicate that back-off rates of this form cannot be obtained in general for networks with non-chordal conflict graphs. For general conflict graphs, we nevertheless show how to adapt the back-off rates when a node is added to the network when its interfering nodes form a clique in the conflict graph. Finally, we introduce a distributed chordal approximation algorithm for general conflict graphs, which is shown (using numerical examples) to be more accurate than the Bethe approximation.
In @cite_17 @cite_19 the set of achievable throughput vectors of an ideal CSMA/CA network was identified, and a dynamic algorithm for setting the back-off rates was proposed and proven to be throughput-optimal. Generalizations of this algorithm to a setting with packet collisions were considered in @cite_8 . A simple approximate algorithm for setting the back-off rates to achieve a given target throughput vector, requiring only a single iteration, was presented in @cite_6 .
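The flavor of these dynamic algorithms can be conveyed by a stylized update of the logarithm $r_i = \log \nu_i$ of each back-off rate, driven in period $t$ by the gap between the empirical arrival rate $\hat{\lambda}_i(t)$ and the empirical service rate $\hat{s}_i(t)$ of node $i$ (a paraphrase of the adaptive CSMA idea, not the exact rule of the cited works):

\[
r_i(t+1) \;=\; r_i(t) + \alpha(t) \bigl[ \hat{\lambda}_i(t) - \hat{s}_i(t) \bigr],
\]

where $\alpha(t)$ is a (typically diminishing) step size, so a node backlogged relative to its service becomes more aggressive.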
{ "cite_N": [ "@cite_19", "@cite_8", "@cite_6", "@cite_17" ], "mid": [ "2120078542", "2158384898", "1539042193", "2097058227" ], "abstract": [ "This paper provides proofs of the rate stability, Harris recurrence, and e-optimality of carrier sense multiple access (CSMA) algorithms where the random access (or backoff) parameter of each node is adjusted dynamically. These algorithms require only local information and they are easy to implement. The setup is a network of wireless nodes with a fixed conflict graph that identifies pairs of nodes whose simultaneous transmissions conflict. The paper studies two algorithms. The first algorithm schedules transmissions to keep up with given arrival rates of packets. The second algorithm controls the arrivals in addition to the scheduling and attempts to maximize the sum of the utilities, in terms of the rates, of the packet flows at different nodes. For the first algorithm, the paper proves rate stability for strictly feasible arrival rates and also Harris recurrence of the queues. For the second algorithm, the paper proves the e-optimality in terms of the utilities of the allocated rates. Both algorithms are iterative and we study two versions of each of them. In the first version, both operate with strictly local information but have relatively weaker performance guarantees; under the second version, both provide stronger performance guarantees by utilizing the additional information of the number of nodes in the network.", "It was shown recently that carrier sense multiple access (CSMA)-like distributed algorithms can achieve the maximal throughput in wireless networks (and task processing networks) under certain assumptions. One important but idealized assumption is that the sensing time is negligible, so that there is no collision. In this paper, we study more practical CSMA-based scheduling algorithms with collisions. First, we provide a Markov chain model and give an explicit throughput formula that takes into account the cost of collisions and overhead. The formula has a simple form since the Markov chain is \"almost\" time-reversible. Second, we propose transmission-length control algorithms to approach throughput-optimality in this case. Sufficient conditions are given to ensure the convergence and stability of the proposed algorithms. Finally, we characterize the relationship between the CSMA parameters (such as the maximum packet lengths) and the achievable capacity region.", "Carrier sense multiple access (CSMA), which resolves contentions over wireless networks in a fully distributed fashion, has recently gained a lot of attentions since it has been proved that appropriate control of CSMA parameters guarantees optimality in terms of stability (i.e., scheduling) and system-wide utility (i.e., scheduling and congestion control). Most CSMA-based algorithms rely on the popular Markov chain Monte Carlo technique, which enables one to find optimal CSMA parameters through iterative loops of simulation-and-update. However, such a simulation-based approach often becomes a major cause of exponentially slow convergence, being poorly adaptive to flow topology changes. In this paper, we develop distributed iterative algorithms which produce approximate solutions with convergence in polynomial time for both stability and utility maximization problems. In particular, for the stability problem, the proposed distributed algorithm requires, somewhat surprisingly, only one iteration among links. 
Our approach is motivated by the Bethe approximation (introduced by Yedidia, Freeman, and Weiss) allowing us to express approximate solutions via a certain nonlinear system with polynomial size. Our polynomial convergence guarantee comes from directly solving the nonlinear system in a distributed manner, rather than multiple simulation-and-update loops in existing algorithms. We provide numerical results to show that the algorithm produces highly accurate solutions and converges much faster than the prior ones.", "In multihop wireless networks, designing distributed scheduling algorithms to achieve the maximal throughput is a challenging problem because of the complex interference constraints among different links. Traditional maximal-weight scheduling (MWS), although throughput-optimal, is difficult to implement in distributed networks. On the other hand, a distributed greedy protocol similar to IEEE 802.11 does not guarantee the maximal throughput. In this paper, we introduce an adaptive carrier sense multiple access (CSMA) scheduling algorithm that can achieve the maximal throughput distributively. Some of the major advantages of the algorithm are that it applies to a very general interference model and that it is simple, distributed, and asynchronous. Furthermore, the algorithm is combined with congestion control to achieve the optimal utility and fairness of competing flows. Simulations verify the effectiveness of the algorithm. Also, the adaptive CSMA scheduling is a modular MAC-layer algorithm that can be combined with various protocols in the transport layer and network layer. Finally, the paper explores some implementation issues in the setting of 802.11 networks." ] }
1602.08290
2522584862
Carrier-sense multiple access collision avoidance networks have often been analyzed using a stylized model that is fully characterized by a vector of back-off rates and a conflict graph. Furthermore, for any achievable throughput vector @math , the existence of a unique vector @math of back-off rates that achieves this throughput vector was proven. Although this unique vector can in principle be computed iteratively, the required time complexity grows exponentially in the network size, making this only feasible for small networks. In this paper, we present an explicit formula for the unique vector of back-off rates @math needed to achieve any achievable throughput vector @math provided that the network has a chordal conflict graph. This class of networks contains a number of special cases of interest such as (inhomogeneous) line networks and networks with an acyclic conflict graph. Moreover, these back-off rates are such that the back-off rate of a node only depends on its own target throughput and the target throughput of its neighbors and can be determined in a distributed manner. We further indicate that back-off rates of this form cannot be obtained in general for networks with non-chordal conflict graphs. For general conflict graphs, we nevertheless show how to adapt the back-off rates when a node is added to the network when its interfering nodes form a clique in the conflict graph. Finally, we introduce a distributed chordal approximation algorithm for general conflict graphs, which is shown (using numerical examples) to be more accurate than the Bethe approximation.
Several generalizations of the ideal CSMA/CA model have been studied. These include linear networks with hidden and exposed nodes @cite_2 , single- and multi-hop networks with unsaturated users @cite_25 , and networks relying on multiple channels @cite_11 @cite_15 . It should be noted that the stability conditions for the unsaturated network presented in @cite_25 are not valid in general @cite_32 .
{ "cite_N": [ "@cite_32", "@cite_2", "@cite_15", "@cite_25", "@cite_11" ], "mid": [ "2118762973", "1993630168", "1524979472", "2002627759", "2044168254" ], "abstract": [ "Abstract Random-access algorithms such as the Carrier-Sense Multiple-Access (CSMA) protocol provide a popular mechanism for distributed medium access control in large-scale wireless networks. In recent years fairly tractable models have been shown to yield remarkably accurate throughput estimates in scenarios with saturated buffers. In contrast, in non-saturated scenarios, where nodes refrain from competition for the medium when their buffers are empty, a complex two-way interaction arises between the activity states and the buffer contents of the various nodes. As a result, the throughput characteristics in such scenarios have largely remained elusive so far. In the present paper we provide a generic structural characterization of the throughput performance and corresponding stability region in terms of the individual saturation throughputs of the various nodes. While the saturation throughputs are difficult to explicitly determine in general, we identify certain cases where these values can be expressed in closed form. In addition, we demonstrate that various lower-dimensional facets of the stability region can be explicitly calculated as well, depending on the neighborhood structure of the interference graph. Illustrative examples and numerical results are presented to illuminate the main analytical findings.", "Wireless networks equipped with the CSMA protocol are subject to collisions due to interference. For a given interference range, we investigate the tradeoff between collisions (hidden nodes) and unused capacity (exposed nodes). We show that the sensing range that maximizes throughput critically depends on the activation rate of nodes. For infinite line networks, we prove the existence of a threshold: When the activation rate is below this threshold, the optimal sensing range is small (to maximize spatial reuse). When the activation rate is above the threshold, the optimal sensing range is just large enough to preclude all collisions. Simulations suggest that this threshold policy extends to more complex linear and nonlinear topologies.", "This paper proposes and analyzes the performance of a simple frequency-agile CSMA MAC protocol. In this MAC, a node carrier-senses multiple frequency channels simultaneously, and it takes the first opportunity to transmit on any one of the channels when allowed by the CSMA backoff mechanism. We show that the frequency-agile MAC can effectively 1) boost throughput and 2) remove temporal starvation. Furthermore, the MAC can be implemented on the existing multiple-frequency setup in Wi-Fi using multi-radio technology, and it can co-exist with the legacy MAC using single radio. This paper provides exact stationary throughput analysis for regular 1D and thin-strip 2D CSMA networks using a \"transfer-matrix\" approach. In addition, accurate approximations are given for 2D grid networks. Our closed-form formulas accurately quantify the throughput gain of frequency-agile CSMA. To characterize temporal starvation, we use the metric of \"mean residual access time\" (MRAT). Our simulations and closed-form approximations indicate that the frequency-agile MAC can totally eliminate temporal starvation in 2D grid networks, reducing its MRAT by orders of magnitude. Finally, this paper presents a \"coloring theorem\" to justify the use of the frequency-agile MAC in general network topologies. 
Our analysis and theorem suggest that with enough frequency channels, the frequency-agile MAC can effectively decouple the detrimental interactions between neighboring links responsible for low throughput and starvation.", "Due to a poor understanding of the interactions among transmitters, wireless multihop networks have commonly been stigmatized as unpredictable in nature. Even elementary questions regarding the throughput limitations of these networks cannot be answered in general. In this paper we investigate the behavior of wireless multihop networks using carrier sense multiple access with collision avoidance (CSMA CA). Our goal is to understand how the transmissions of a particular node affect the medium access, and ultimately the throughput, of other nodes in the network. We introduce a theory which accurately models the behavior of these networks and show that, contrary to popular belief, their performance is easily predictable and can be described by a system of equations. Using the proposed theory, we provide the analytical expressions necessary to fully characterize the capacity region of any wireless CSMA CA multihop network. We show that this region is nonconvex in general and entirely agnostic to the probability distributions of all network parameters, depending only on their expected values.", "We analyze the performance of CSMA in multi-channel wireless networks, accounting for the random nature of traffic. Specifically, we assess the ability of CSMA to fully utilize the radio resources and in turn to stabilize the network in a dynamic setting with flow arrivals and departures. We prove that CSMA is optimal in the ad-hoc mode, when each flow goes through a unique dedicated wireless link from a transmitter to a receiver. It is generally suboptimal in infrastructure mode, when all data flows originate from or are destined to the same set of access points, due to the inherent bias of CSMA against downlink traffic. We propose a slight modification of CSMA that we refer to as flow-aware CSMA, which corrects this bias and makes the algorithm optimal in all cases. The analysis is based on some time-scale separation assumption which is proved valid in the limit of large flow sizes." ] }
1602.08290
2522584862
Carrier-sense multiple access collision avoidance networks have often been analyzed using a stylized model that is fully characterized by a vector of back-off rates and a conflict graph. Furthermore, for any achievable throughput vector @math , the existence of a unique vector @math of back-off rates that achieves this throughput vector was proven. Although this unique vector can in principle be computed iteratively, the required time complexity grows exponentially in the network size, making this only feasible for small networks. In this paper, we present an explicit formula for the unique vector of back-off rates @math needed to achieve any achievable throughput vector @math provided that the network has a chordal conflict graph. This class of networks contains a number of special cases of interest such as (inhomogeneous) line networks and networks with an acyclic conflict graph. Moreover, these back-off rates are such that the back-off rate of a node only depends on its own target throughput and the target throughput of its neighbors and can be determined in a distributed manner. We further indicate that back-off rates of this form cannot be obtained in general for networks with non-chordal conflict graphs. For general conflict graphs, we nevertheless show how to adapt the back-off rates when a node is added to the network when its interfering nodes form a clique in the conflict graph. Finally, we introduce a distributed chordal approximation algorithm for general conflict graphs, which is shown (using numerical examples) to be more accurate than the Bethe approximation.
Another line of related work, inspired by maximum weight scheduling @cite_23, considers adapting the transmission lengths based on the current queue length of a node @cite_27 @cite_28: backoff periods and packets have mean length @math, and after a backoff period or packet transmission a node transmits a packet with a probability that depends on a weight function of its queue length (provided that its neighbors are sensed to be silent). The main observation was that a slowly changing weight function is necessary for stability, and that among such functions a slower-growing one leads to a more stable network at the cost of increased queue sizes (and delay). Other queue-length-based CSMA/CA algorithms that were shown to be throughput-optimal in some setting include @cite_7 @cite_18.
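To make the queue-based mechanism concrete, the following Python sketch simulates such a policy on a toy three-node conflict graph. The topology, the arrival rates, and the weight function f(q) = log(1 + q) are illustrative assumptions rather than the exact protocols of the cited papers; in particular, real CSMA uses continuous-time backoff, whereas here a random per-slot ordering stands in for it.

```python
# Toy discrete-time, queue-based CSMA on a conflict graph (a sketch, not
# the cited protocols). A node whose conflict-graph neighbors are silent
# transmits with probability p = f(Q) / (1 + f(Q)), where f is a slowly
# growing weight function of its queue length Q.
import math
import random

random.seed(0)

neighbors = {0: [1], 1: [0, 2], 2: [1]}    # 3-node line conflict graph
arrival_rate = {0: 0.3, 1: 0.2, 2: 0.3}    # Bernoulli arrivals per slot
queue = {v: 0 for v in neighbors}
active = {v: False for v in neighbors}

def weight(q):
    """Slowly increasing weight function, e.g. log(1 + q)."""
    return math.log(1.0 + q)

for slot in range(100_000):
    for v in neighbors:                    # packet arrivals
        queue[v] += random.random() < arrival_rate[v]
    for v in neighbors:                    # transmissions last one slot here
        if active[v]:
            queue[v] -= 1
            active[v] = False
    order = list(neighbors)
    random.shuffle(order)                  # crude stand-in for random backoff
    for v in order:
        if queue[v] > 0 and not any(active[u] for u in neighbors[v]):
            f = weight(queue[v])
            if random.random() < f / (1.0 + f):
                active[v] = True

print(queue)                               # backlogs after the run
```

A faster-growing weight function makes nodes contend more aggressively and can shorten queues, but, as the surveyed results indicate, it may destroy stability; hence the slowly varying choice above.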
{ "cite_N": [ "@cite_18", "@cite_7", "@cite_28", "@cite_27", "@cite_23" ], "mid": [ "2152859483", "2542265429", "2142178125", "2125141803", "2105177639" ], "abstract": [ "Recently, it has been shown that carrier-sense multiple access (CSMA)-type random access algorithms can achieve the maximum possible throughput in ad hoc wireless networks. However, these algorithms assume an idealized continuous-time CSMA protocol where collisions can never occur. In addition, simulation results indicate that the delay performance of these algorithms can be quite bad. On the other hand, although some simple heuristics (such as greedy maximal scheduling) can yield much better delay performance for a large set of arrival rates, in general they may only achieve a fraction of the capacity region. In this paper, we propose a discrete-time version of the CSMA algorithm. Central to our results is a discrete-time distributed randomized algorithm that is based on a generalization of the so-called Glauber dynamics from statistical physics, where multiple links are allowed to update their states in a single timeslot. The algorithm generates collision-free transmission schedules while explicitly taking collisions into account during the control phase of the protocol, thus relaxing the perfect CSMA assumption. More importantly, the algorithm allows us to incorporate heuristics that lead to very good delay performance while retaining the throughput-optimality property.", "We propose and analyze a distributed backlog-based CSMA policy to achieve fairness and throughput-optimality in wireless multihop networks. The analysis is based on a CSMA fixed point approximation that is accurate for large networks with many small flows and a small sensing period.", "We use fluid limits to explore the (in)stability properties of wireless networks with queue-based random-access algorithms. Queue-based random-access schemes are simple and inherently distributed in nature, yet provide the capability to match the optimal throughput performance of centralized scheduling mechanisms in a wide range of scenarios. Unfortunately, the type of activation rules for which throughput optimality has been established, may result in excessive queue lengths and delays. The use of more aggressive persistent access schemes can improve the delay performance, but does not offer any universal maximum-stability guarantees. In order to gain qualitative insight and investigate the (in)stability properties of more aggressive persistent activation rules, we examine fluid limits where the dynamics are scaled in space and time. In some situations, the fluid limits have smooth deterministic features and maximum stability is maintained, while in other scenarios they exihibit random oscillatory characteristics, giving rise to major technical challenges. In the latter regime, more aggressive access schemes continue to provide maximum stability in some networks, but may cause instability in others. In order to prove that, we focus on a particular network example and conduct a detailed analysis of the fluid limit process for the associated Markov chain. Specifically, we develop a novel approach based on stopping time sequences to deal with the switching probabilities governing the sample paths of the fluid limit process. Simulation experiments are conducted to illustrate and val- idate the analytical results. 
Keywords: Carrier-Sense Multiple Access (CSMA), fluid limits, queue-based strategies, stability issues.", "The popularity of Aloha-like algorithms for resolution of contention between multiple entities accessing common resources is due to their extreme simplicity and distributed nature. Example applications of such algorithms include Ethernet and recently emerging wireless multi-access networks. Despite a long and exciting history of more than four decades, the question of designing an algorithm that is essentially as simple and distributed as Aloha while being efficient has remained unresolved. In this paper, we resolve this question successfully for a network of queues where contention is modeled through independent-set constraints over the network graph. The work by Tassiulas and Ephremides (1992) suggests that an algorithm that schedules queues so that the summation of weight' of scheduled queues is maximized, subject to constraints, is efficient. However, implementing such an algorithm using Aloha-like mechanism has remained a mystery. We design such an algorithm building upon a Metropolis-Hastings sampling mechanism along with selection of weight' as an appropriate function of the queue-size. The key ingredient in establishing the efficiency of the algorithm is a novel adiabatic-like theorem for the underlying queueing network, which may be of general interest in the context of dynamical systems.", "The stability of a queueing network with interdependent servers is considered. The dependency among the servers is described by the definition of their subsets that can be activated simultaneously. Multihop radio networks provide a motivation for the consideration of this system. The problem of scheduling the server activation under the constraints imposed by the dependency among servers is studied. The performance criterion of a scheduling policy is its throughput that is characterized by its stability region, that is, the set of vectors of arrival and service rates for which the system is stable. A policy is obtained which is optimal in the sense that its stability region is a superset of the stability region of every other scheduling policy, and this stability region is characterized. The behavior of the network is studied for arrival rates that lie outside the stability region. Implications of the results in certain types of concurrent database and parallel processing systems are discussed. >" ] }
1602.08290
2522584862
Carrier-sense multiple access collision avoidance networks have often been analyzed using a stylized model that is fully characterized by a vector of back-off rates and a conflict graph. Furthermore, for any achievable throughput vector @math , the existence of a unique vector @math of back-off rates that achieves this throughput vector was proven. Although this unique vector can in principle be computed iteratively, the required time complexity grows exponentially in the network size, making this only feasible for small networks. In this paper, we present an explicit formula for the unique vector of back-off rates @math needed to achieve any achievable throughput vector @math provided that the network has a chordal conflict graph. This class of networks contains a number of special cases of interest such as (inhomogeneous) line networks and networks with an acyclic conflict graph. Moreover, these back-off rates are such that the back-off rate of a node only depends on its own target throughput and the target throughput of its neighbors and can be determined in a distributed manner. We further indicate that back-off rates of this form cannot be obtained in general for networks with non-chordal conflict graphs. For general conflict graphs, we nevertheless show how to adapt the back-off rates when a node is added to the network when its interfering nodes form a clique in the conflict graph. Finally, we introduce a distributed chordal approximation algorithm for general conflict graphs, which is shown (using numerical examples) to be more accurate than the Bethe approximation.
While throughput-optimality is very desirable, some of these queue-based algorithms have poor delay characteristics @cite_24, which has resulted in the design of (order-)delay-optimal CSMA algorithms @cite_33 @cite_31. Finally, there is also a large body of work on CSMA/CA in the context of 802.11 networks, initiated by the seminal work in @cite_5, which we do not discuss here.
{ "cite_N": [ "@cite_24", "@cite_5", "@cite_31", "@cite_33" ], "mid": [ "2587466736", "2162598825", "2120024772", "2047087115" ], "abstract": [ "", "The IEEE has standardized the 802.11 protocol for wireless local area networks. The primary medium access control (MAC) technique of 802.11 is called the distributed coordination function (DCF). The DCF is a carrier sense multiple access with collision avoidance (CSMA CA) scheme with binary slotted exponential backoff. This paper provides a simple, but nevertheless extremely accurate, analytical model to compute the 802.11 DCF throughput, in the assumption of finite number of terminals and ideal channel conditions. The proposed analysis applies to both the packet transmission schemes employed by DCF, namely, the basic access and the RTS CTS access mechanisms. In addition, it also applies to a combination of the two schemes, in which packets longer than a given threshold are transmitted according to the RTS CTS mechanism. By means of the proposed model, we provide an extensive throughput performance evaluation of both access mechanisms of the 802.11 protocol.", "In this paper, we consider CSMA policies for scheduling packet transmissions in multihop wireless networks with one-hop traffic. The main contribution of the paper is to propose a novel CSMA policy, called Unlocking CSMA (U-CSMA), that enables to obtain both high throughput and low packet delays in large wireless networks. More precisely, we show that for torus interference graph topologies with one-hop traffic, U-CSMA is throughput optimal and achieves order-optimal delay. For one-hop traffic, the delay performance is defined to be order-optimal if the delay stays bounded as the network-size increases. Simulations that we conducted suggest that (a) U-CSMA is throughput-optimal and achieves order-optimal delay for general geometric interference graphs and (b) that U-CSMA can be combined with congestion control algorithms to maximize the network-wide utility and obtain order-optimal delay. To the best of our knowledge, this is the first time that a simple distributed scheduling policy has been proposed that is both throughput utility optimal and achieves order-optimal delay.", "In the past year or so, an exciting progress has led to throughput optimal design of CSMA-based algorithms for wireless networks. However, such an algorithm suffers from very poor delay performance. A recent work suggests that it is impossible to design a CSMA-like simple algorithm that is throughput optimal and induces low delay for any wireless network. However, wireless networks arising in practice are formed by nodes placed, possibly arbitrarily, in some geographic area. In this paper, we propose a CSMA algorithm with per-node average-delay bounded by a constant, independent of the network size, when the network has geometry (precisely, polynomial growth structure) that is present in any practical wireless network. Two novel features of our algorithm, crucial for its performance, are (a) choice of access probabilities as an appropriate function of queue-sizes, and (b) use of local network topological structures. Essentially, our algorithm is a queue-based CSMA with a minor difference that at each time instance a very small fraction of frozen nodes do not execute CSMA. Somewhat surprisingly, appropriate selection of such frozen nodes, in a distributed manner, lead to the delay optimal performance." ] }
1602.08127
2278991086
Binary codes can be used to speed up nearest neighbor search tasks in large scale data sets as they are efficient for both storage and retrieval. In this paper, we propose a robust auto-encoder model that preserves the geometric relationships of high-dimensional data sets in Hamming space. This is done by considering a noise-removing function in a region surrounding the manifold where the training data points lie. This function is defined with the property that it projects the data points near the manifold into the manifold wisely, and we approximate this function by its first order approximation. Experimental results show that the proposed method achieves better than state-of-the-art results on three large scale high dimensional data sets.
Recent efforts for dealing with noisy data are the denoising auto-encoders (DAEs) @cite_36 and the contractive auto-encoders (CAEs) @cite_14. The main idea behind these algorithms is that the features learned in the hidden layer should retain the important intrinsic structure of the original data set while discarding unimportant information, such as noise, as much as possible. In DAEs, noise is manually added to the training data points; since the correspondence between the noisy and clean versions is known, a constraint is introduced to minimize the gap between the two versions under some loss function. In CAEs, the Jacobian norm of the function mapping the input data points to the hidden layer is minimized to obtain a contractive effect. In the extreme case, CAEs may contract all of the data points in the original space to a single point; this constraint is used to discard noise, but the distance between the input and output points is also considered. By balancing these two constraints, data on the manifold remain unchanged while data outside the manifold contract towards it.
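As a concrete illustration of the CAE penalty, the following Python sketch evaluates reconstruction error plus the squared Frobenius norm of the encoder Jacobian for a one-layer sigmoid encoder, for which this norm has a simple closed form. The network sizes, weights, and data are synthetic assumptions for illustration; this is a sketch of the objective, not the cited authors' implementation.

```python
# Contractive auto-encoder (CAE) objective for a one-layer sigmoid encoder
# h = sigmoid(W x + b). For this encoder the Frobenius norm of the Jacobian
# dh/dx has the closed form:
#   ||J||_F^2 = sum_j (h_j (1 - h_j))^2 * sum_i W[j, i]^2
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 20, 8
W = rng.normal(scale=0.1, size=(n_hid, n_in))   # encoder weights
b = np.zeros(n_hid)
V = rng.normal(scale=0.1, size=(n_in, n_hid))   # decoder weights
c = np.zeros(n_in)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cae_loss(x, lam=0.1):
    h = sigmoid(W @ x + b)                 # encoder activations
    x_hat = V @ h + c                      # linear decoder
    recon = np.sum((x - x_hat) ** 2)       # reconstruction error
    jac_frob2 = np.sum((h * (1 - h)) ** 2 * np.sum(W ** 2, axis=1))
    return recon + lam * jac_frob2         # contraction vs. reconstruction

x = rng.normal(size=n_in)
print(cae_loss(x))
```

The trade-off parameter lam plays exactly the balancing role described above: larger values contract the representation more strongly, at the cost of reconstruction fidelity.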
{ "cite_N": [ "@cite_36", "@cite_14" ], "mid": [ "2145094598", "2218318129" ], "abstract": [ "We explore an original strategy for building deep networks, based on stacking layers of denoising autoencoders which are trained locally to denoise corrupted versions of their inputs. The resulting algorithm is a straightforward variation on the stacking of ordinary autoencoders. It is however shown on a benchmark of classification problems to yield significantly lower classification error, thus bridging the performance gap with deep belief networks (DBN), and in several cases surpassing it. Higher level representations learnt in this purely unsupervised fashion also help boost the performance of subsequent SVM classifiers. Qualitative experiments show that, contrary to ordinary autoencoders, denoising autoencoders are able to learn Gabor-like edge detectors from natural image patches and larger stroke detectors from digit images. This work clearly establishes the value of using a denoising criterion as a tractable unsupervised objective to guide the learning of useful higher level representations.", "We present in this paper a novel approach for training deterministic auto-encoders. We show that by adding a well chosen penalty term to the classical reconstruction cost function, we can achieve results that equal or surpass those attained by other regularized auto-encoders as well as denoising auto-encoders on a range of datasets. This penalty term corresponds to the Frobenius norm of the Jacobian matrix of the encoder activations with respect to the input. We show that this penalty term results in a localized space contraction which in turn yields robust features on the activation layer. Furthermore, we show how this penalty term is related to both regularized auto-encoders and denoising auto-encoders and how it can be seen as a link between deterministic and non-deterministic auto-encoders. We find empirically that this penalty helps to carve a representation that better captures the local directions of variation dictated by the data, corresponding to a lower-dimensional non-linear manifold, while being more invariant to the vast majority of directions orthogonal to the manifold. Finally, we show that by using the learned features to initialize a MLP, we achieve state of the art classification error on a range of datasets, surpassing other methods of pretraining." ] }
1602.08461
2287880971
Delay and Disruption Tolerant Networks (DTNs) may lack continuous network connectivity. Routing in DTNs is thus a challenge since it must handle network partitioning, long delays, and dynamic topology. Meanwhile, routing protocols of the traditional Mobile Ad hoc NETworks (MANETs) cannot work well due to the failure of its assumption that most network connections are available. In this article, a geographic routing protocol is proposed for MANETs in delay tolerant situations, by using no more than one-hop information. A utility function is designed for implementing the under-controlled replication strategy. To reduce the overheads caused by message flooding, we employ a criterion so as to evaluate the degree of message redundancy. Consequently a message redundancy coping mechanism is added to our routing protocol. Extensive simulations have been conducted and the results show that when node moving speed is relatively low, our routing protocol outperforms the other schemes such as Epidemic, Spray and Wait, FirstContact in delivery ratio and average hop count, while introducing an acceptable overhead ratio into the network.
To date, a number of congestion control mechanisms have been proposed for deterministic DTNs, which are mainly deployed in networks with limited mobility or static networks with scheduled disruption intervals. @cite_16 propose an active congestion control based routing algorithm that pushes selected messages before congestion happens. @cite_14 propose a novel node-based replication management algorithm that addresses buffer congestion by dynamically limiting the replication a node performs during each encounter. @cite_13 use information about queue backlogs, random walks, and data packet scheduling to make packet routing and forwarding decisions without the notion of end-to-end routes. Furthermore, @cite_2 proposes a two-level Back-Pressure with Source-Routing algorithm (BP+SR), which reduces the number and size of the queues required at each node, thereby reducing the end-to-end delay.
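The backlog-differential rule at the heart of back-pressure forwarding can be sketched in a few lines of Python. The data layout below is hypothetical; real protocols such as BP+SR add link capacities, scheduling, and source-routing on top of this basic decision.

```python
# Minimal sketch of a back-pressure forwarding decision: when nodes i and
# j meet, forward a packet of the commodity (destination) with the largest
# positive queue-backlog differential Q_i[c] - Q_j[c].
def backpressure_choice(queues_i, queues_j):
    """queues_*: dict mapping destination -> backlog at that node."""
    best_dest, best_diff = None, 0
    for dest, backlog in queues_i.items():
        diff = backlog - queues_j.get(dest, 0)
        if diff > best_diff:
            best_dest, best_diff = dest, diff
    return best_dest            # None means: do not forward on this contact

# Node i is much more congested for destination 'd2' than node j is.
q_i = {"d1": 2, "d2": 7}
q_j = {"d1": 3, "d2": 1}
print(backpressure_choice(q_i, q_j))    # -> 'd2'
```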
{ "cite_N": [ "@cite_14", "@cite_16", "@cite_13", "@cite_2" ], "mid": [ "2099567938", "2150346466", "2136902384", "2170136598" ], "abstract": [ "The widespread availability of mobile wireless devices offers growing opportunities for the formation of temporary networks with only intermittent connectivity. These intermittently-connected networks (ICNs) typically lack stable end-to-end paths. In order to improve the delivery rates of the networks, new store-carry-and-forward protocols have been proposed which often use message replication as a forwarding mechanism. Message replication is effective at improving delivery, but given the limited resources of ICN nodes, such as buffer space, bandwidth and energy, as well as the highly dynamic nature of these networks, replication can easily overwhelm node resources. In this work we propose a novel node-based replication management algorithm which addresses buffer congestion by dynamically limiting the replication a node performs during each encounter. The insight for our algorithm comes from a stochastic model of message delivery in ICNs with constrained buffer space. We show through simulation that our algorithm is effective, nearly tripling delivery rates in some scenarios, and imposes little overhead.", "Opportunistic Networks (ONs) utilize the communication opportunity with a hop-by-hop behavior, and implement communication between encountered nodes based on the Store-and-Forward routing pattern. This approach, which is totally different from the traditional communication model, has received extensive interests from academic community. We consider the ONs are a type of Delay Tolerant Networks (DTNs) since their routing behavior are quite same regardless of the bundle layer protocol. Until currently, a set of congestion control mechanisms have been proposed in Deterministic DTNs, which is mainly implemented in the network with limited mobility or the static network with scheduled disruption interval. However, regarding the networks with large topology variation, known as Opportunistic DTNs, to design a congestion control mechanism is difficult. In this paper, we propose an active congestion control based routing algorithm that pushes the selected message before the congestion happens. In order to predict the future congestion situation, a corresponding estimation function is designed and our proposed algorithm works based on two asynchronous routing functions, which are scheduled according to the decision of estimation function. Simulation results show our proposed algorithm efficiently utilizes the distributed storage to achieve a quite low overhead ratio and also performs well in the realistic scenario.", "In this paper we consider an alternative, highly agile In this paper we consider an alternative, highly agile approach called backpressure routing for Delay Tolerant Networks (DTN), in which routing and forwarding decisions are made on a per-packet basis. Using information about queue backlogs, random walk and data packet scheduling nodes can make packet routing and forwarding decisions without the notion of end-to-end routes. To the best of our knowledge, this is the first ever implementation of dynamic backpressure routing in DTNs. Simulation results show that the proposed approach has advantages in terms of DTN networks.", "We study a mobile wireless network where groups or clusters of nodes are intermittently connected via mobile carriers'' (the carriers provide connectivity over time among different clusters of nodes). 
Over such networks (an instantiation of a delay tolerant network), it is well-known that traditional routing algorithms perform very poorly. In this paper, we propose a two-level Back- Pressure with Source-Routing algorithm (BP+SR) for such networks. The proposed BP+SR algorithm separates routing and scheduling within clusters (fast time-scale) from the communications that occur across clusters (slow time-scale), without loss in network throughput (i.e., BP+SR is throughput-optimal). More importantly, for a source and destination node that lie in different clusters, the traditional back-pressure algorithm results in large queue lengths at each node along its path. This is because the queue dynamics are driven by the slowest time-scale (i.e., that of the carrier nodes) along the path between the source and destination, which results in very large end-to-end delays. On the other-hand, we show that the two-level BP+SR algorithm maintains large queues only at a very few nodes, and thus results in order-wise smaller end-to-end delays. We provide analytical as well as simulation results to confirm our claims." ] }
1602.08194
2949778006
Current deep learning architectures are growing larger in order to learn from complex datasets. These architectures require giant matrix multiplication operations to train millions of parameters. Conversely, there is another growing trend to bring deep learning to low-power, embedded devices. The matrix operations, associated with both training and testing of deep networks, are very expensive from a computational and energy standpoint. We present a novel hashing based technique to drastically reduce the amount of computation needed to train and test deep networks. Our approach combines recent ideas from adaptive dropouts and randomized hashing for maximum inner product search to select the nodes with the highest activation efficiently. Our new algorithm for deep learning reduces the overall computational cost of forward and back-propagation by operating on significantly fewer (sparse) nodes. As a consequence, our algorithm uses only 5% of the total multiplications, while keeping on average within 1% of the accuracy of the original model. A unique property of the proposed hashing based back-propagation is that the updates are always sparse. Due to the sparse gradient updates, our algorithm is ideally suited for asynchronous and parallel training leading to near linear speedup with increasing number of cores. We demonstrate the scalability and sustainability (energy efficiency) of our proposed algorithm via rigorous experimental evaluations on several real datasets.
@cite_29 uses structured matrix transformations with low-rank matrices to reduce the number of parameters in the fully-connected layers of a neural network. This low-rank constraint leads to a smaller memory footprint. However, such an approximation is not well suited for asynchronous and parallel training, limiting its scalability. We instead use random but sparse activations, leveraging database advances in approximate query processing, because they can be easily parallelized. [See for more details]
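The generic low-rank compression idea can be illustrated with a truncated-SVD factorization of a fully-connected layer, as in the Python sketch below. This plain factorization is only a stand-in: @cite_29 actually uses structured transforms characterized by low displacement rank, which this sketch does not implement.

```python
# Generic low-rank compression of a fully-connected layer via truncated
# SVD: W (n_out x n_in) is replaced by A @ B with rank r, cutting the
# multiplications from n_out*n_in down to r*(n_out + n_in).
import numpy as np

rng = np.random.default_rng(0)
n_out, n_in, r = 256, 512, 16
# A weight matrix that is approximately rank r (so truncation is faithful).
W = rng.normal(size=(n_out, r)) @ rng.normal(size=(r, n_in)) \
    + 0.01 * rng.normal(size=(n_out, n_in))

U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :r] * s[:r]        # (n_out, r), singular values folded in
B = Vt[:r, :]               # (r, n_in)

x = rng.normal(size=n_in)
y_full = W @ x              # n_out * n_in multiply-adds
y_lowrank = A @ (B @ x)     # r * (n_in + n_out) multiply-adds

print(np.linalg.norm(y_full - y_lowrank) / np.linalg.norm(y_full))
```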
{ "cite_N": [ "@cite_29" ], "mid": [ "2949964376" ], "abstract": [ "We consider the task of building compact deep learning pipelines suitable for deployment on storage and power constrained mobile devices. We propose a unified framework to learn a broad family of structured parameter matrices that are characterized by the notion of low displacement rank. Our structured transforms admit fast function and gradient evaluation, and span a rich range of parameter sharing configurations whose statistical modeling capacity can be explicitly tuned along a continuum from structured to unstructured. Experimental results show that these transforms can significantly accelerate inference and forward backward passes during training, and offer superior accuracy-compactness-speed tradeoffs in comparison to a number of existing techniques. In keyword spotting applications in mobile speech recognition, our methods are much more effective than standard linear low-rank bottleneck layers and nearly retain the performance of state of the art models, while providing more than 3.5-fold compression." ] }
1602.08141
2289534603
In recent years, consumer Unmanned Aerial Vehicles have become very popular, everyone can buy and fly a drone without previous experience, which raises concern in regards to regulations and public safety. In this paper, we present a novel approach towards enabling safe operation of such vehicles in urban areas. Our method uses geodetically accurate dataset images with Geographical Information System (GIS) data of road networks and buildings provided by Google Maps, to compute a weighted A* shortest path from start to end locations of a mission. Weights represent the potential risk of injuries for individuals in all categories of land-use, i.e. flying over buildings is considered safer than above roads. We enable safe UAV operation in regards to 1- land-use by computing a static global path dependent on environmental structures, and 2- avoiding flying over moving objects such as cars and pedestrians by dynamically optimizing the path locally during the flight. As all input sources are first geo-registered, pixels and GPS coordinates are equivalent, it therefore allows us to generate an automated and user-friendly mission with GPS waypoints readable by consumer drones' autopilots. We simulated 54 missions and show significant improvement in maximizing UAV's standoff distance to moving objects with a quantified safety parameter over 40 times better than the naive straight line navigation.
The most popular trend in UAV video analysis has been moving object detection and tracking from aerial images; many approaches have been proposed, with or without GIS data and geo-registration steps. Kimura @cite_16 use an epipolar constraint and a flow vector bound to detect moving objects, Teutsch @cite_1 employ explicit segmentation of images, Xiao @cite_7 restrict the search to the road network, and Lin @cite_14 use a motion model in geo-coordinates. Moving object detection and tracking are mainly used to follow targets, either for surveillance, as Quigley @cite_10 and Rafi @cite_5 describe with their flight path adaptation solutions, or for consumer applications at very low altitude, as in @cite_12 and @cite_3.
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_1", "@cite_3", "@cite_5", "@cite_16", "@cite_10", "@cite_12" ], "mid": [ "2114527206", "", "2160410228", "2020139029", "2572910538", "", "", "2139241879" ], "abstract": [ "We present a system to detect and track moving objects from an airborne platform. Given a global map, such as a satellite image, our approach can locate and track the targets in geo-coordinates, namely longitude and latitude obtained from geo-registration. A motion model in geo-coordinates is more physically meaningful than the one in image coordinates. We propose to use a two-step geo-registration approach to stitch images acquired by satellite and UAV cameras. Mutual information is used to find correspondences between these two very different modalities. After motion segmentation and geo-registration, tracking is performed in a hierarchical manner: at the temporally local level, moving image blobs extracted by motion segmentation are associated into tracklets; at the global level, tracklets are linked by their appearance and spatio-temporal consistency on the global map. To achieve efficient time performance, graphics processing unit techniques are applied in the geo-registration and motion detection modules, which are the bottleneck of the whole system. Experiments show that our method can efficiently deal with long term occlusion and segmented tracks even when targets fall out the field of view.", "", "Moving objects play a key role for gaining scene understanding in aerial surveillance tasks. The detection of moving vehicles can be challenging due to high object distance, simultaneous object and camera motion, shadows, or weak contrast. In scenarios where vehicles are driving on busy urban streets, this is even more challenging due to possible merged detections. In this paper, a video processing chain is proposed for moving vehicle detection and segmentation. The fundament for detecting motion which is independent of the camera motion is tracking of local image features such as Harris corners. Independently moving features are clustered. Since motion clusters are prone to merge similarly moving objects, we evaluate various object segmentation approaches based on contour extraction, blob extraction, or machine learning to handle such effects. We propose to use a local sliding window approach with Integral Channel Features (ICF) and AdaBoost classifier.", "We present a vision based control strategy for tracking and following objects using an Unmanned Aerial Vehicle. We have developed an image based visual servoing method that uses only a forward looking camera for tracking and following objects from a multi-rotor UAV, without any dependence on GPS systems. Our proposed method tracks a user specified object continuously while maintaining a fixed distance from the object and also simultaneously keeping it in the center of the image plane. The algorithm is validated using a Parrot AR Drone 2.0 in outdoor conditions while tracking and following people, occlusions and also fast moving objects; showing the robustness of the proposed systems against perturbations and illumination changes. Our experiments show that the system is able to track a great variety of objects present in suburban areas, among others: people, windows, AC machines, cars and plants.", "The invention relates to the analysis of defects in materials such as molten glass. According to the invention, the material passes by a monochrome beam the wavelength of which is below 3x10-6 m. 
The radiation is diffused by any defects present in the material. The analysis of the defects is conducted dependent on the position of the receiver detecting the diffused rays and on the shape of the signal received. The invention permits a continuous analysis of a flow of glass supplying a fiber-making machine.", "", "", "The motivation of this research is to show that visual based object tracking and following is reliable using a cheap GPS-denied multirotor platform such as the AR Drone 2.0. Our architecture allows the user to specify an object in the image that the robot has to follow from an approximate constant distance. At the current stage of our development, in the event of image tracking loss the system starts to hover and waits for the image tracking recovery or second detection, which requires the usage of odometry measurements for self stabilization. During the following task, our software utilizes the forward-facing camera images and part of the IMU data to calculate the references for the four on-board low-level control loops. To obtain a stronger wind disturbance rejection and an improved navigation performance, a yaw heading reference based on the IMU data is internally kept and updated by our control algorithm. We validate the architecture using an AR Drone 2.0 and the OpenTLD tracker in outdoor suburban areas. The experimental tests have shown robustness against wind perturbations, target occlusion and illumination changes, and the system's capability to track a great variety of objects present on suburban areas, for instance: walking or running people, windows, AC machines, static and moving cars and plants." ] }
1602.08141
2289534603
In recent years, consumer Unmanned Aerial Vehicles have become very popular, everyone can buy and fly a drone without previous experience, which raises concern in regards to regulations and public safety. In this paper, we present a novel approach towards enabling safe operation of such vehicles in urban areas. Our method uses geodetically accurate dataset images with Geographical Information System (GIS) data of road networks and buildings provided by Google Maps, to compute a weighted A* shortest path from start to end locations of a mission. Weights represent the potential risk of injuries for individuals in all categories of land-use, i.e. flying over buildings is considered safer than above roads. We enable safe UAV operation in regards to 1- land-use by computing a static global path dependent on environmental structures, and 2- avoiding flying over moving objects such as cars and pedestrians by dynamically optimizing the path locally during the flight. As all input sources are first geo-registered, pixels and GPS coordinates are equivalent, it therefore allows us to generate an automated and user-friendly mission with GPS waypoints readable by consumer drones' autopilots. We simulated 54 missions and show significant improvement in maximizing UAV's standoff distance to moving objects with a quantified safety parameter over 40 times better than the naive straight line navigation.
Another area that has received a lot of attention is autonomous navigation. Different subproblems have been studied: path planning in dynamic environments @cite_17 @cite_6, and GIS-assisted, vision-based localization using road detection @cite_20, building layouts @cite_9, or a DEM (Digital Elevation Map) @cite_8. Various methods have been proposed for UAV navigation, using optical flow with @cite_2 or without a DEM @cite_13, or using inertial sensors @cite_18. Obstacle avoidance is also a major concern for automating UAV operation, but research has mostly focused on ground robots @cite_19 @cite_4, even though there have been adaptations for UAVs, such as Israelsen's intuitive solution for operators @cite_15.
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_8", "@cite_9", "@cite_6", "@cite_19", "@cite_2", "@cite_15", "@cite_13", "@cite_20", "@cite_17" ], "mid": [ "2159786429", "", "1976972916", "1901872687", "", "2041600899", "", "2071361688", "", "2049376476", "2024694195" ], "abstract": [ "In this paper, we present our latest achievements towards the goal of autonomous flights of an MAV in unknown environments, only having a monocular camera as exteroceptive sensor. As MAVs are highly agile, it is not sufficient to directly use the visual input for position control at the framerates that can be achieved with small onboard computers. Our contributions in this work are twofold. First, we present a solution to overcome the issue of having a low frequent onboard visual pose update versus the high agility of an MAV. This is solved by filtering visual information with inputs from inertial sensors. Second, as our system is based on monocular vision, we present a solution to estimate the metric visual scale aid of an air pressure sensor. All computation is running onboard and is tightly integrated on the MAV to avoid jitter and latencies. This framework enables stable flights indoors and outdoors even under windy conditions.", "", "Wide area motion imagery sensors utilize multiple cameras on a single aerial platform to monitor very large geographic areas in real time. The images must be stabilized and georegistered before they can be combined with other geospatial datasets, but their wide fields of view and oblique viewing angles make it difficult to align them accurately with a geographic reference frame. We describe a georegistration algorithm that accepts a digital elevation model as a geographic reference from which it generates predicted images. Registration of these predicted images produces 3D-to-2D tie points that determine the motion imagery camera models. We present results on multi-camera motion imagery from the U. S. Air Force's CLIF 2006 dataset using a bare-earth U. S. Geological Survey digital elevation model. The algorithm accurately georegisters the imagery despite the lack of buildings and trees in the model. Because of the wide availability of digital elevation models, the algorithm provides a practical means of georegistration.", "In recent years, oblique aerial images of urban regions have become increasingly popular for 3D city modeling, texturing, and various cadastral applications. In contrast to images taken vertically to the ground, they provide information on building heights, appearance of facades, and terrain elevation. Despite their widespread availability for many cities, the processing pipeline for oblique images is not fully automatic yet. Especially the process of precisely registering oblique images with map vector data can be a tedious manual process. We address this problem with a registration approach for oblique aerial images that is fully automatic and robust against discrepancies between map and image data. As input, it merely requires a cadastral map and an arbitrary number of oblique images. Besides rough initial registrations usually available from GPS INS measurements, no further information is required, in particular no information about the terrain elevation.", "", "We address the problem of vision-based navigation in busy inner-city locations, using a stereo rig mounted on a mobile platform. 
In this scenario semantic information becomes important: rather than modeling moving objects as arbitrary obstacles, they should be categorized and tracked in order to predict their future behavior. To this end, we combine classical geometric world mapping with object category detection and tracking. Object-category-specific detectors serve to find instances of the most important object classes (in our case pedestrians and cars). Based on these detections, multi-object tracking recovers the objects’ trajectories, thereby making it possible to predict their future locations, and to employ dynamic path planning. The approach is evaluated on challenging, realistic video sequences recorded at busy inner-city locations.", "", "In this paper we present an approach that aids the human operator of unmanned aerial vehicles by automatically performing collision avoidance with obstacles in the environment so that the operator can focus on the global direction of motion of the vehicle. As opposed to systems that override operator control as a last resort in order to avoid collisions (such as those found in modern automobiles), our approach is designed such that the operator can rely on the automatic collision avoidance, enabling intuitive and safe operator control of vehicles that may otherwise be difficult to control. Our approach continually extrapolates the future flight path of the vehicle given the current operator control input. If an imminent collision is predicted our algorithm will override the operator’s control input with the nearest control input that will actually let the vehicle avoid collisions with obstacles. This ensures safe flight while simultaneously maintaining the intent of the human operator as closely as possible. We successfully implemented our approach on a physical quadrotor system in a laboratory environment. In all experiments the human operator failed to crash the vehicle into floors, walls, ceilings, or obstacles, even when deliberately attempting to do so.", "", "Modern airborne navigation systems for manned and unmanned platforms usually rely on GPS measurements to constrain the inertial position estimate of the platform. This reliance on GPS can quickly cause the navigational estimates of system to become unreliable when the system is operating in GPS-limited or denied areas. This paper presents a vision-aided inertial navigation system that uses ground features (in this case road intersections) matched to a database to provide position measurements. An image processing algorithm is used to extract the shapes of the road intersections from visual imagery, this shape is then matched to a reference database to provide image to map road intersection correspondences. The correspondence information is fused with the inertial solution in an Extended Kalman Filter to constrain the complete attitude and position inertial navigation solution. The system is developed to operate at non-zero attitude angles removing the level flight limitations of past approaches. Flight test results of the system demonstrate that the system can successfully produce accurate navigation estimates that are comparable to the use of GPS without the same limitations of GPS when operating in a GPS-limited or denied area.", "Path planning in dynamic environments with moving obstacles is computationally complex since it requires modeling time as an additional dimension. 
While in other domains there are state dominance relationships that can significantly reduce the complexity of the search, in dynamic environments such relationships do not exist. This paper presents a novel state dominance relationship tailored specifically for dynamic environments, and presents a planner that uses that property to plan paths over ten times faster than without using state dominance." ] }
1602.07679
2951523524
We describe a minimally-supervised method for computing a statistical shape space model of the palate surface. The model is created from a corpus of volumetric magnetic resonance imaging (MRI) scans collected from 12 speakers. We extract a 3D mesh of the palate from each speaker, then train the model using principal component analysis (PCA). The palate model is then tested using 3D MRI from another corpus and evaluated using a high-resolution optical scan. We find that the error is low even when only a handful of measured coordinates are available. In both cases, our approach yields promising results. It can be applied to extract the palate shape from MRI data, and could be useful to other analysis modalities, such as electromagnetic articulography (EMA) and ultrasound tongue imaging (UTI).
@cite_11 used real-time MRI to investigate the morphological variation of the palate and the posterior pharyngeal wall. They extracted the shape information from mid-sagittal slices of the vocal tract and then applied PCA to the obtained data to extract the principal modes of variation of both structures. In their study, they found that the obtained principal modes could be related to anatomical variation, such as the degree of concavity of the palate. However, this study was restricted to the 2D case.
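The PCA step itself is straightforward; the following Python sketch builds a toy 2D shape space from synthetic mid-sagittal contours in which a per-speaker depth factor plays the role of the anatomical variation (e.g., palate concavity). The data and contour shapes are invented for illustration only.

```python
# Toy PCA shape model: each training shape is a mid-sagittal contour
# flattened into a vector; PCA yields the principal modes of variation.
import numpy as np

rng = np.random.default_rng(0)
n_shapes, n_points = 12, 50            # e.g., 12 speakers, 50 contour points
x = np.linspace(0.0, 1.0, n_points)
# Synthetic contours: a palate-like arch whose depth varies per speaker.
depth = rng.normal(1.0, 0.3, size=n_shapes)
shapes = np.stack([d * np.sin(np.pi * x) for d in depth])   # (12, 50)

mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
modes = Vt                              # principal modes of variation
var_explained = s ** 2 / np.sum(s ** 2)
print("variance explained by mode 1:", var_explained[0])

# Reconstruct each shape from its first k mode coefficients.
k = 1
coeffs = centered @ modes[:k].T         # (12, k)
recon = mean_shape + coeffs @ modes[:k]
print("max reconstruction error:", np.abs(recon - shapes).max())
```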
{ "cite_N": [ "@cite_11" ], "mid": [ "2132244055" ], "abstract": [ "Purpose Adult human vocal tracts display considerable morphological variation across individuals, but the nature and extent of this variation has not been extensively studied for many vocal tract s..." ] }
1602.07844
2949084636
In regularized risk minimization, the associated optimization problem becomes particularly difficult when both the loss and regularizer are nonsmooth. Existing approaches either have slow or unclear convergence properties, are restricted to limited problem subclasses, or require careful setting of a smoothing parameter. In this paper, we propose a continuation algorithm that is applicable to a large class of nonsmooth regularized risk minimization problems, can be flexibly used with a number of existing solvers for the underlying smoothed subproblem, and with convergence results on the whole algorithm rather than just one of its subproblems. In particular, when accelerated solvers are used, the proposed algorithm achieves the fastest known rates of @math on strongly convex problems, and @math on general convex problems. Experiments on nonsmooth classification and regression tasks demonstrate that the proposed algorithm outperforms the state-of-the-art.
For example, consider the hinge loss @math, where @math is the linear model parameter and @math is the @math-th training sample with @math. Using @math, @math can be smoothed @cite_12. Similarly, the @math loss @math can also be smoothed. Other examples in machine learning include popular regularizers such as the @math, total variation @cite_1, overlapping group lasso, and graph-guided fused lasso @cite_26.
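As one standard construction of such a smoothing (Nesterov smoothing with prox-function u^2/2; the cited works may use different variants), the hinge loss can be written as a maximum over an auxiliary variable and then regularized, giving a smooth surrogate with an explicit piecewise form:

```latex
% A sketch of one standard smoothing: write the hinge loss as a maximum,
%   \ell(w) = \max(0, 1 - y x^\top w) = \max_{u \in [0,1]} u (1 - y x^\top w),
% and subtract the strongly convex prox-term (\mu/2) u^2.
\[
\ell_\mu(w) = \max_{u \in [0,1]} \Big\{ u \, (1 - y x^\top w) - \frac{\mu}{2} u^2 \Big\}
= \begin{cases}
0, & z \le 0, \\
z^2 / (2\mu), & 0 < z < \mu, \\
z - \mu/2, & z \ge \mu,
\end{cases}
\qquad z = 1 - y x^\top w.
\]
% The gradient of \ell_\mu is Lipschitz with constant \|x\|^2 / \mu, so a
% smaller \mu gives a tighter but less smooth approximation.
```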
{ "cite_N": [ "@cite_26", "@cite_1", "@cite_12" ], "mid": [ "1989060270", "2023722580", "1807994917" ], "abstract": [ "We study the problem of estimating high-dimensional regression models regularized by a structured sparsity-inducing penalty that encodes prior structural information on either the input or output variables. We consider two widely adopted types of penalties of this kind as motivating examples: (1) the general overlapping-group-lasso penalty, generalized from the group-lasso penalty; and (2) the graph-guided-fused-lasso penalty, generalized from the fused-lasso penalty. For both types of penalties, due to their nonseparability and nonsmoothness, developing an efficient optimization method remains a challenging problem. In this paper we propose a general optimization approach, the smoothing proximal gradient (SPG) method, which can solve structured sparse regression problems with any smooth convex loss under a wide spectrum of structured sparsity-inducing penalties. Our approach combines a smoothing technique with an effective proximal gradient method. It achieves a convergence rate significantly faster than the standard first-order methods, subgradient methods, and is much more scalable than the most widely used interior-point methods. The efficiency and scalability of our method are demonstrated on both simulation experiments and real genetic data sets.", "Accurate signal recovery or image reconstruction from indirect and possibly undersampled data is a topic of considerable interest; for example, the literature in the recent field of compressed sensing is already quite immense. This paper applies a smoothing technique and an accelerated first-order algorithm, both from Nesterov [Math. Program. Ser. A, 103 (2005), pp. 127-152], and demonstrates that this approach is ideally suited for solving large-scale compressed sensing reconstruction problems as (1) it is computationally efficient, (2) it is accurate and returns solutions with several correct digits, (3) it is flexible and amenable to many kinds of reconstruction problems, and (4) it is robust in the sense that its excellent performance across a wide range of problems does not depend on the fine tuning of several parameters. Comprehensive numerical experiments on realistic signals exhibiting a large dynamic range show that this algorithm compares favorably with recently proposed state-of-the-art methods. We also apply the algorithm to solve other problems for which there are fewer alternatives, such as total-variation minimization and convex programs seeking to minimize the @math norm of @math under constraints, in which @math is not diagonal. The code is available online as a free package in the MATLAB language.", "In this work we consider the stochastic minimization of nonsmooth convex loss functions, a central problem in machine learning. We propose a novel algorithm called Accelerated Nonsmooth Stochastic Gradient Descent (ANSGD), which exploits the structure of common nonsmooth loss functions to achieve optimal convergence rates for a class of problems including SVMs. It is the first stochastic algorithm that can achieve the optimal O(1 t) rate for minimizing nonsmooth loss functions (with strong convexity). The fast rates are confirmed by empirical comparisons, in which ANSGD significantly outperforms previous subgradient descent algorithms including SGD." ] }
1602.07844
2949084636
In regularized risk minimization, the associated optimization problem becomes particularly difficult when both the loss and regularizer are nonsmooth. Existing approaches either have slow or unclear convergence properties, are restricted to limited problem subclasses, or require careful setting of a smoothing parameter. In this paper, we propose a continuation algorithm that is applicable to a large class of nonsmooth regularized risk minimization problems, can be flexibly used with a number of existing solvers for the underlying smoothed subproblem, and with convergence results on the whole algorithm rather than just one of its subproblems. In particular, when accelerated solvers are used, the proposed algorithm achieves the fastest known rates of @math on strongly convex problems, and @math on general convex problems. Experiments on nonsmooth classification and regression tasks demonstrate that the proposed algorithm outperforms the state-of-the-art.
Minimization of the smooth (and convex) @math can be performed efficiently using first-order methods, including the so-called "optimal method" and its variants @cite_8, which achieve the optimal convergence rate.
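For concreteness, the following Python sketch implements the standard accelerated ("optimal") gradient iteration on a smooth least-squares instance; the quadratic objective is an illustrative stand-in for the smoothed objective, and the momentum schedule is the usual one.

```python
# Nesterov's accelerated gradient method for a smooth convex function with
# L-Lipschitz gradient, achieving the O(1/k^2) rate. Here f is a simple
# least-squares instance: f(w) = 0.5 * ||A w - b||^2.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 10))
b = rng.normal(size=30)

def grad(w):
    return A.T @ (A @ w - b)              # gradient of 0.5*||A w - b||^2

L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the gradient

w = np.zeros(10)
v = w.copy()                               # extrapolation point
t = 1.0
for k in range(200):
    w_next = v - grad(v) / L              # gradient step from extrapolation
    t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
    v = w_next + (t - 1.0) / t_next * (w_next - w)   # momentum
    w, t = w_next, t_next

print(np.linalg.norm(grad(w)))            # near-zero gradient norm
```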
{ "cite_N": [ "@cite_8" ], "mid": [ "2167732364" ], "abstract": [ "In this paper we propose a new approach for constructing efficient schemes for non-smooth convex optimization. It is based on a special smoothing technique, which can be applied to functions with explicit max-structure. Our approach can be considered as an alternative to black-box minimization. From the viewpoint of efficiency estimates, we manage to improve the traditional bounds on the number of iterations of the gradient schemes from ** keeping basically the complexity of each iteration unchanged." ] }
1602.07628
2279770870
A fundamental question about a market is under what conditions, and then how rapidly, does price signaling cause price equilibration. Qualitatively, this ought to depend on how well-connected the market is. We address this question quantitatively for a certain class of Arrow-Debreu markets with continuous-time proportional tâtonnement dynamics. We show that the algebraic connectivity of the market determines the effectiveness of price signaling equilibration. This also lets us study the rate of external noise that a market can tolerate and still maintain near-equilibrium prices.
In a different vein, researchers have been interested in other market structure effects. As these works do not directly bear on ours, we do not attempt a survey but only provide a few pointers: @cite_1 looks at markets in which buyer-seller pairs can trade only along established links, at the incentives to form such links, and at the efficiency of trade in such networks; @cite_36 considers Arrow-Debreu markets in which, again, direct trade can occur only along established links, thus enabling the same commodity to have different prices in different places; and @cite_43 looks at markets in which buyer-seller pairs can only interact through intermediary traders, and studies the power of these traders and how equilibrium prices are affected by the connectivity structure.
{ "cite_N": [ "@cite_36", "@cite_43", "@cite_1" ], "mid": [ "1975394562", "1978881802", "" ], "abstract": [ "This paper introduces a new model of exchange: networks, rather than markets, of buyers and sellers. It begins with the empirically motivated premise that a buyer and seller must have a relationship, a \"link,\" to exchange goods. Networks--buyers, sellers, and the pattern of links connecting them--are common exchange environments. This paper develops a methodology to study network structures and explains why agents may form networks. In a model that captures characteristics of a variety of industries, the paper shows that buyers and sellers, acting strategically in their own self-interests, can form the network structures that maximize overall welfare.", "In a wide range of markets, individual buyers and sellers trade through intermediaries, who determine prices via strategic considerations. Typically, not all buyers and sellers have access to the same intermediaries, and they trade at correspondingly different prices that reflect their relative amounts of power in the market. We model this phenomenon using a game in which buyers, sellers, and traders engage in trade on a graph that represents the access each buyer and seller has to the traders. We show that the resulting game always has a subgame perfect Nash equilibrium, and that all equilibria lead to an efficient allocation of goods. Finally, we analyze trader profits in terms of the graph structure -- roughly, a trader can command a positive profit if and only if it has an \"essential\" connection in the network, thus providing a graph-theoretic basis for quantifying the amount of competition among traders.", "" ] }
1602.07943
2281810379
In this paper, we adopt the relay selection (RS) protocol proposed by Bletsas, Khisti, Reed and Lippman (2006) with Enhanced Dynamic Decode-and-Forward (EDDF) and network coding (NC) in a two-hop two-way multi-relay network. All nodes are single-input single-output (SISO) and half-duplex, i.e., they cannot transmit and receive data simultaneously. The outage probability is analyzed and we compare the outage probability in various scenarios under Rayleigh fading channels. Our results show that the relay selection with EDDF and network coding (RS-EDDF&NC) scheme has the best performance in terms of outage probability among the considered decode-and-forward (DF) relaying schemes when sufficiently many relays are available. In addition, the performance loss is large if we select a relay at random. This shows the importance of relay selection strategies.
In @cite_3, the authors proposed an opportunistic relaying protocol in which relays overhear the request-to-send (RTS) and clear-to-send (CTS) packets exchanged between the source and the destination before the data transmission. The optimal relay is then selected according to a criterion based on the quality of the source-relay and relay-destination links. This protocol can be easily implemented in a decentralized way. It was proved that, with the same intermediate relays, DF based on this protocol achieves the same DMT as the DSTC scheme in @cite_15. In @cite_0, the authors further considered reactive and proactive relay selection schemes and proved that both reactive and proactive opportunistic DF relaying are outage-optimal. Moreover, the proactive strategy is more efficient, since it can be viewed as energy-efficient routing in the network and reduces the overhead of the system. In @cite_7, the authors proposed two schemes, single relay selection with network coding (S-RS-NC) and dual relay selection with network coding (D-RS-NC), and showed that the D-RS-NC scheme outperforms other considered RS schemes in two-way relay channels. More related studies on relay selection can be found in @cite_5 -- @cite_6.
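A minimal Python sketch of the proactive, max-min variant of this selection rule (one of the criteria considered in @cite_3) is given below. The Rayleigh-fading channel gains are synthetic, and for simplicity every relay is assumed to have decoded correctly.

```python
# Opportunistic relay selection, max-min ("bottleneck") criterion: pick the
# relay maximizing the minimum of the source->relay and relay->destination
# channel gains. Exponentially distributed |h|^2 models Rayleigh fading.
import numpy as np

rng = np.random.default_rng(0)
n_relays = 5
h_sr = rng.exponential(size=n_relays)     # |h|^2 for source->relay links
h_rd = rng.exponential(size=n_relays)     # |h|^2 for relay->destination links

bottleneck = np.minimum(h_sr, h_rd)       # quality of each two-hop path
best = int(np.argmax(bottleneck))
print("selected relay:", best, "bottleneck gain:", bottleneck[best])
```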
{ "cite_N": [ "@cite_7", "@cite_3", "@cite_6", "@cite_0", "@cite_5", "@cite_15" ], "mid": [ "2126585321", "2128295617", "2103366363", "", "2131192213", "2026898705" ], "abstract": [ "In this paper, we consider the design of joint network coding (NC) and relay selection (RS) in two-way relay channels. In the proposed schemes, two users first sequentially broadcast their respective information to all the relays. We propose two RS schemes: 1) a single RS with NC and 2) a dual RS with NC. For both schemes, the selected relays perform NC on the received signals sent from the two users and forward them to both users. The proposed schemes are analyzed, and the exact bit-error-rate (BER) expressions are derived and verified through Monte Carlo simulations. It is shown that the dual RS with NC outperforms other considered RS schemes in two-way relay channels. The results also reveal that the proposed RS-NC schemes provide a selection gain compared with an NC scheme with no RS and an NC gain relative to a conventional RS scheme with no NC.", "Cooperative diversity has been recently proposed as a way to form virtual antenna arrays that provide dramatic gains in slow fading wireless environments. However, most of the proposed solutions require distributed space-time coding algorithms, the careful design of which is left for future investigation if there is more than one cooperative relay. We propose a novel scheme that alleviates these problems and provides diversity gains on the order of the number of relays in the network. Our scheme first selects the best relay from a set of M available relays and then uses this \"best\" relay for cooperation between the source and the destination. We develop and analyze a distributed method to select the best relay that requires no topology information and is based on local measurements of the instantaneous channel conditions. This method also requires no explicit communication among the relays. The success (or failure) to select the best available path depends on the statistics of the wireless channel, and a methodology to evaluate performance for any kind of wireless channel statistics, is provided. Information theoretic analysis of outage probability shows that our scheme achieves the same diversity-multiplexing tradeoff as achieved by more complex protocols, where coordination and distributed space-time coding for M relay nodes is required, such as those proposed by Laneman and Wornell (2003). The simplicity of the technique allows for immediate implementation in existing radio hardware and its adoption could provide for improved flexibility, reliability, and efficiency in future 4G wireless systems.", "This paper is on relay selection schemes for wireless relay networks. First, we derive the diversity of many single-relay selection schemes in the literature. Then, we generalize the idea of relay selection by allowing more than one relay to cooperate. The SNR-optimal multiple relay selection scheme can be achieved by exhaustive search, whose complexity increases exponentially in the network size. To reduce the complexity, several SNR-suboptimal multiple relay selection schemes are proposed, whose complexity is linear in the number of relays. They are proved to achieve full diversity. Simulation shows that they perform much better than the corresponding single relay selection methods and very close to the SNR-optimal multiple relay selection scheme. 
In addition, for large networks, these multiple relay selection schemes require the same amount of feedback bits from the receiver as single relay selection schemes.", "", "Practical cooperative diversity protocols often rely on low-cost radios that treat multiple in-band signals as noise and thus require strictly orthogonal transmissions. We analyze the performance of a class of opportunistic relaying protocols that employ simple packet level feedback and strictly orthogonal transmissions. It is shown that the diversity-multiplexing tradeoff of the proposed protocols either matches or outperforms the multi-input-single-output (MISO), zero-feedback performance. These gains indicate that low complexity radios and feedback could be an appealing architecture for future user cooperation protocols.", "We develop and analyze space-time coded cooperative diversity protocols for combating multipath fading across multiple protocol layers in a wireless network. The protocols exploit spatial diversity available among a collection of distributed terminals that relay messages for one another in such a manner that the destination terminal can average the fading, even though it is unknown a priori which terminals will be involved. In particular, a source initiates transmission to its destination, and many relays potentially receive the transmission. Those terminals that can fully decode the transmission utilize a space-time code to cooperatively relay to the destination. We demonstrate that these protocols achieve full spatial diversity in the number of cooperating terminals, not just the number of decoding relays, and can be used effectively for higher spectral efficiencies than repetition-based schemes. We discuss issues related to space-time code design for these protocols, emphasizing codes that readily allow for appealing distributed versions." ] }
1602.07630
2287581523
The stochastic dual coordinate-ascent (S-DCA) technique is a useful alternative to the traditional stochastic gradient-descent algorithm for solving large-scale optimization problems due to its scalability to large data sets and strong theoretical guarantees. However, the available S-DCA formulation assumes a finite sample size and relies on performing multiple passes over the same data, which makes it ill-suited for online implementations where data keep streaming in. In this work, we develop an online dual coordinate-ascent (O-DCA) algorithm that is able to respond to streaming data and does not need to revisit past data. This feature endows the resulting construction with continuous adaptation, learning, and tracking abilities, which are particularly attractive for online learning scenarios.
An alternative approach is to solve the problem in the dual domain. Instead of minimizing the primal cost directly, one can maximize the dual cost function using a coordinate-ascent algorithm @cite_9 . The dual problem involves maximizing over @math dual variables. Since updating all @math dual variables at each iteration can be costly, the coordinate-ascent implementation updates one dual variable at a time. There have been several recent investigations along these lines in the literature with encouraging results. For example, references @cite_11 @cite_4 observed that a dual coordinate-ascent (DCA, for short) method can outperform the SGD algorithm when applied to large-scale SVM problems. Later, a stochastic version of DCA (denoted by S-DCA) was examined in @cite_1 @cite_8 for more general risk functions. Compared with DCA, the stochastic implementation picks one data sample uniformly at random (rather than cyclically) at each iteration and updates the corresponding coordinate. Reference @cite_1 showed that S-DCA converges exponentially fast to the exact minimizer @math by running repeated passes over the finite data set, which is a notable advantage over SGD.
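As a rough illustration of the S-DCA recursion discussed above, the sketch below applies it to ℓ2-regularized least-squares regression, for which the per-coordinate dual maximization admits a closed form. The choice of loss, the variable names, and the synthetic data are assumptions made for illustration only; the cited works treat more general risk functions.

```python
import numpy as np

def sdca_ridge(X, y, lam=0.1, epochs=20, rng=None):
    """Stochastic dual coordinate ascent for
        min_w (1/n) * sum_i 0.5 * (x_i^T w - y_i)^2 + (lam/2) * ||w||^2.
    Maintains dual variables alpha and the primal iterate
    w = (1/(lam*n)) * X^T alpha; each step draws one sample at random and
    updates its alpha_i by exactly maximizing the dual along that coordinate."""
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    alpha = np.zeros(n)
    w = np.zeros(d)
    for _ in range(epochs * n):
        i = rng.integers(n)                  # pick one sample uniformly at random
        xi = X[i]
        # closed-form coordinate maximizer of the dual for the squared loss
        delta = (y[i] - xi @ w - alpha[i]) / (1.0 + xi @ xi / (lam * n))
        alpha[i] += delta
        w += (delta / (lam * n)) * xi        # keep the primal-dual relation in sync
    return w

# toy usage on synthetic data (illustrative only)
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ w_true + 0.1 * rng.standard_normal(200)
print(sdca_ridge(X, y, lam=0.01))
```

Note how the recursion repeatedly revisits the same n samples across epochs; this reliance on multiple passes over a fixed data set is precisely the limitation that motivates the online O-DCA construction developed in this paper.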
{ "cite_N": [ "@cite_4", "@cite_8", "@cite_9", "@cite_1", "@cite_11" ], "mid": [ "2108712612", "2950080435", "2013850411", "1939652453", "2165966284" ], "abstract": [ "Most optimization methods for logistic regression or maximum entropy solve the primal problem. They range from iterative scaling, coordinate descent, quasi-Newton, and truncated Newton. Less efforts have been made to solve the dual problem. In contrast, for linear support vector machines (SVM), methods have been shown to be very effective for solving the dual problem. In this paper, we apply coordinate descent methods to solve the dual form of logistic regression and maximum entropy. Interestingly, many details are different from the situation in linear SVM. We carefully study the theoretical convergence as well as numerical issues. The proposed method is shown to be faster than most state of the art methods for training logistic regression and maximum entropy.", "We introduce a proximal version of the stochastic dual coordinate ascent method and show how to accelerate the method using an inner-outer iteration procedure. We analyze the runtime of the framework and obtain rates that improve state-of-the-art results for various key machine learning optimization problems including SVM, logistic regression, ridge regression, Lasso, and multiclass SVM. Experiments validate our theoretical findings.", "The coordinate descent method enjoys a long history in convex differentiable minimization. Surprisingly, very little is known about the convergence of the iterates generated by this method. Convergence typically requires restrictive assumptions such as that the cost function has bounded level sets and is in some sense strictly convex. In a recent work, Luo and Tseng showed that the iterates are convergent for the symmetric monotone linear complementarity problem, for which the cost function is convex quadratic, but not necessarily strictly convex, and does not necessarily have bounded level sets. In this paper, we extend these results to problems for which the cost function is the composition of an affine mapping with a strictly convex function which is twice differentiable in its effective domain. In addition, we show that the convergence is at least linear. As a consequence of this result, we obtain, for the first time, that the dual iterates generated by a number of existing methods for matrix balancing and entropy optimization are linearly convergent.", "Stochastic Gradient Descent (SGD) has become popular for solving large scale supervised machine learning optimization problems such as SVM, due to their strong theoretical guarantees. While the closely related Dual Coordinate Ascent (DCA) method has been implemented in various software packages, it has so far lacked good convergence analysis. This paper presents a new analysis of Stochastic Dual Coordinate Ascent (SDCA) showing that this class of methods enjoy strong theoretical guarantees that are comparable or better than SGD. This analysis justifies the effectiveness of SDCA for practical applications.", "In many applications, data appear with a huge number of instances as well as features. Linear Support Vector Machines (SVM) is one of the most popular tools to deal with such large-scale sparse data. This paper presents a novel dual coordinate descent method for linear SVM with L1-and L2-loss functions. The proposed method is simple and reaches an e-accurate solution in O(log(1 e)) iterations. 
Experiments indicate that our method is much faster than state of the art solvers such as Pegasos, TRON, SVMperf, and a recent primal coordinate descent implementation." ] }