| aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1403.3710
|
1969281448
|
This article proposes a novel energy-efficient multimedia delivery system called EStreamer. First, we study the relationship between buffer size at the client, burst-shaped TCP-based multimedia traffic, and energy consumption of wireless network interfaces in smartphones. Based on the study, we design and implement EStreamer for constant bit rate and rate-adaptive streaming. EStreamer can improve battery lifetime by 3x, 1.5x, and 2x while streaming over Wi-Fi, 3G, and 4G, respectively.
|
Other approaches identify idle periods at different phases of TCP-based applications, such as in the middle of the data transmission @cite_19 @cite_17 . Another example is choking and unchoking the TCP receive window to make the TCP traffic bursty @cite_6 . In this case, the burst interval is the duration between a choking and an unchoking period. The authors in @cite_17 applied this trick to multimedia streaming services such as RealNetworks, Windows Media, and YouTube with a burst interval of 200 ms. However, these mechanisms cannot be applied to 3G and 4G because they depend on the wireless access technology: they force the Wi-Fi interface into the sleep state, and such an operation on cellular network interfaces would bar the smartphone from basic phone functions. In contrast, EStreamer is independent of the WNI being used for streaming. Recently, limm2012 proposed GreenTube to save communication energy for YouTube using multiple TCP connections. However, such approaches cannot reduce energy consumption when the application receives content at a lower, server-controlled rate @cite_11 .
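The receive-window choking trick described above can be sketched in a few lines of Python. This is a hedged localhost illustration, not any paper's implementation: the payload size, the 16 KB receive buffer, and the drain loop are illustrative choices, and the 200 ms interval simply mirrors the burst interval mentioned above. Real schemes additionally align the bursts with the WNI's sleep schedule.

```python
import socket
import threading
import time

PAYLOAD = b"x" * 200_000
BURST_INTERVAL = 0.2  # choke period; 200 ms as in the scheme cited above

# Listening socket created up front so the sender thread can just accept.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

def sender():
    conn, _ = srv.accept()
    conn.sendall(PAYLOAD)  # a "server" streaming as fast as flow control allows
    conn.close()

t = threading.Thread(target=sender)
t.start()

cli = socket.socket()
# Small receive buffer: once it fills, the advertised TCP window closes and
# stalls the sender (choke); draining it releases the next burst (unchoke).
cli.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 16_384)
cli.connect(("127.0.0.1", port))

received = bytearray()
done = False
while not done:
    time.sleep(BURST_INTERVAL)   # choke: stop reading, let the window close
    cli.setblocking(False)       # unchoke: drain whatever burst has queued
    while True:
        try:
            chunk = cli.recv(65536)
        except BlockingIOError:
            break                # buffer drained; go back to sleeping
        if not chunk:            # sender closed: everything has arrived
            done = True
            break
        received.extend(chunk)
    cli.setblocking(True)

cli.close()
t.join()
srv.close()
```

Between sleeps, the client receives the queued data in one burst, which is exactly the traffic shape these schemes exploit to keep the WNI asleep.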
|
{
"cite_N": [
"@cite_19",
"@cite_11",
"@cite_6",
"@cite_17"
],
"mid": [
"2044993311",
"2169211242",
"2144246916",
"2045258279"
],
"abstract": [
"Wireless interfaces are major power consumers on mobile systems. Considerable research has improved the energy efficiency of elongated idle periods or created more elongated idle periods in wireless interfaces, often requiring cooperation from applications or the network infrastructure. With increasing wireless mobile data, it has become critical to improve the energy efficiency of active wireless interfaces. In this work, we present micro power management (μPM), a solution inspired by the mismatch between the high performance of state-of-the-art 802.11 interfaces and the modest data rate requirements of many popular network applications. μPM enables an 802.11 interface to enter unreachable power-saving modes even between MAC frames, without noticeable impact on the traffic flow. To control data loss, μPM leverages the retransmission mechanism in 802.11 and controls frame delay to adapt to demanded network throughput with minimal cooperation from the access point. Based on a theoretical framework, we employ simulation to systematically investigate an effective and efficient implementation of μPM. We have built a prototype μPM on an open-access wireless hardware platform. Measurements show that more than 30% power reduction for the wireless transceiver can be achieved with μPM for various applications without perceptible quality degradation.",
"Multimedia streaming applications are among the most energy hungry applications in smartphones. The energy consumption mostly depends on the delivery techniques and on the power management techniques of wireless interfaces (Wi-Fi and 3G). In order to provide insights on what kind of streaming techniques exist, how they work on different mobile platforms, and what is their impact on the energy consumption of mobile phones, we have done a large set of active measurements with several smartphones having both Wi-Fi and cellular network access. Our analysis reveals five different techniques to deliver the content to the video players. The selection of a technique depends on the device, player, quality, and service. The results from our power measurements allow us to conclude that none of the identified techniques is optimal because they take none of the following facts into account: access technology used, user behaviour, and user preferences concerning data waste. However, we point out the techniques that provide the most attractive trade-offs in particular situations. Furthermore, we make several observations on the energy consumption of different players, containers, and video qualities that should be taken into consideration when optimizing the energy consumption.",
"In mobile devices, the wireless network interface card (WNIC) consumes a significant portion of overall system energy. One way to reduce energy consumed by a device is to transition its WNIC to a lower-power sleep mode when data is not being received or transmitted. In this paper, we investigate client-centered techniques for energy efficient communication, using IEEE 802.11b, within the network layer. The basic idea is to conserve energy by keeping the WNIC in high-power mode only when necessary. We track each connection, which allows us to determine inactive intervals during which to transition the WNIC to sleep mode. Whenever necessary, we also shape the traffic from the client side to maximize sleep intervals, convincing the server to send data in bursts. This trades lower WNIC energy consumption for an increase in transmission time. Our techniques are compatible with standard TCP and do not rely on any assistance from the server or network infrastructure. Results show that during Web browsing, our client-centered technique saved 21 percent energy compared to PSM and incurred less than a 1 percent increase in transmission time compared to regular TCP. For a large file download, our scheme saved 27 percent energy on average with a transmission time increase of only 20 percent.",
"While the 802.11 power saving mode (PSM) and its enhancements can reduce power consumption by putting the wireless network interface (WNI) into sleep as much as possible, they either require additional infrastructure support, or may degrade the transmission throughput and cause additional transmission delay. These schemes are not suitable for long and bulk data transmissions with strict QoS requirements on wireless devices. With increasingly abundant bandwidth available on the Internet, we have observed that TCP congestion control is often not a constraint of bulk data transmissions as bandwidth throttling is widely used in practice. In this paper, instead of further manipulating the trade-off between the power saving and the incurred delay, we effectively explore the power saving potential by considering the bandwidth throttling on streaming downloading servers. We propose an application-independent protocol, called PSM-throttling. With a quick detection on the TCP flow throughput, a client can identify bandwidth throttling connections with a low cost. Since the throttling enables us to reshape the TCP traffic into periodic bursts with the same average throughput as the server transmission rate, the client can accurately predict the arriving time of packets and turn the WNI on and off accordingly. PSM-throttling can minimize power consumption on TCP-based bulk traffic by effectively utilizing available Internet bandwidth without degrading the application's performance perceived by the user. Furthermore, PSM-throttling is client-centric, and does not need any additional infrastructure support. Our lab-environment and Internet-based evaluation results show that PSM-throttling can effectively improve energy savings (by up to 75%) and/or the QoS for a broad range of TCP-based applications, including streaming, pseudo streaming, and large file downloading, over existing PSM-like methods."
]
}
|
1403.3710
|
1969281448
|
This article proposes a novel energy-efficient multimedia delivery system called EStreamer. First, we study the relationship between buffer size at the client, burst-shaped TCP-based multimedia traffic, and energy consumption of wireless network interfaces in smartphones. Based on the study, we design and implement EStreamer for constant bit rate and rate-adaptive streaming. EStreamer can improve battery lifetime by 3x, 1.5x, and 2x while streaming over Wi-Fi, 3G, and 4G, respectively.
|
Many papers have also studied the energy efficiency of 3G communication. xiao08youtube were among the first to study the energy consumption of YouTube streaming over both Wi-Fi and 3G. We go well beyond the scope of that work by considering the impact of traffic shaping and different network configurations, including LTE. Afterwards, balasubramanian09imc3g performed a measurement study on the energy consumption of 3G communication but did not consider streaming applications. qian11mobisys characterized the energy efficiency of several different applications. They observed that some music streaming applications behave in an energy-inefficient manner due to their CBR traffic. Earlier, in @cite_7 , the same authors proposed a traffic shaping scheme for YouTube and computed estimates of the potential energy savings with that scheme. However, they did not consider the consequence of TCP flow control on the energy consumption of WNIs. A more recent and thorough measurement study on different mobile video streaming services and the resulting energy consumption on different mobile OSs and devices is presented in @cite_11 .
|
{
"cite_N": [
"@cite_7",
"@cite_11"
],
"mid": [
"2144190599",
"2169211242"
],
"abstract": [
"3G cellular data networks have recently witnessed explosive growth. In this work, we focus on UMTS, one of the most popular 3G mobile communication technologies. Our work is the first to accurately infer, for any UMTS network, the state machine (both transitions and timer values) that guides the radio resource allocation policy through a light-weight probing scheme. We systematically characterize the impact of operational state machine settings by analyzing traces collected from a commercial UMTS network, and pinpoint the inefficiencies caused by the interplay between smartphone applications and the state machine behavior. Besides basic characterizations, we explore the optimal state machine settings in terms of several critical timer values evaluated using real network traces. Our findings suggest that the fundamental limitation of the current state machine design is its static nature of treating all traffic according to the same inactivity timers, making it difficult to balance tradeoffs among radio resource usage efficiency, network management overhead, device radio energy consumption, and performance. To the best of our knowledge, our work is the first empirical study that employs real cellular traces to investigate the optimality of UMTS state machine configurations. Our analysis also demonstrates that traffic patterns impose significant impact on radio resource and energy consumption. In particular, we propose a simple improvement that reduces YouTube streaming energy by 80% by leveraging an existing feature called fast dormancy supported by the 3GPP specifications.",
"Multimedia streaming applications are among the most energy hungry applications in smartphones. The energy consumption mostly depends on the delivery techniques and on the power management techniques of wireless interfaces (Wi-Fi and 3G). In order to provide insights on what kind of streaming techniques exist, how they work on different mobile platforms, and what is their impact on the energy consumption of mobile phones, we have done a large set of active measurements with several smartphones having both Wi-Fi and cellular network access. Our analysis reveals five different techniques to deliver the content to the video players. The selection of a technique depends on the device, player, quality, and service. The results from our power measurements allow us to conclude that none of the identified techniques is optimal because they take none of the following facts into account: access technology used, user behaviour, and user preferences concerning data waste. However, we point out the techniques that provide the most attractive trade-offs in particular situations. Furthermore, we make several observations on the energy consumption of different players, containers, and video qualities that should be taken into consideration when optimizing the energy consumption."
]
}
|
1403.3710
|
1969281448
|
This article proposes a novel energy-efficient multimedia delivery system called EStreamer. First, we study the relationship between buffer size at the client, burst-shaped TCP-based multimedia traffic, and energy consumption of wireless network interfaces in smartphones. Based on the study, we design and implement EStreamer for constant bit rate and rate-adaptive streaming. EStreamer can improve battery lifetime by 3x, 1.5x, and 2x while streaming over Wi-Fi, 3G, and 4G, respectively.
|
Concerning the RRC parameter configuration in cellular networks, lee2004wts first proposed to tune the inactivity timers dynamically. qian10imc suggested traffic-aware inactivity timer configuration to reduce energy consumption. They also proposed to trigger Fast Dormancy based on information provided by different applications @cite_12 . falaki2010imc proposed a total of T1+T2 = 4.5 s based on their observation of packet inter-arrival times in traffic traces. ukhanova also suggested an aggressive timer configuration for CBR video transmission from mobile devices. Deng proposed to initiate FD dynamically instead of using a fixed timeout value. In fact, with these proposed timer settings a mobile device would be able to save even more energy in the presence of EStreamer. At the same time, these studies do not consider the consequences of the proposed configurations on the network, and it is in such cases that our study can also have significant implications. We considered the benefits and disadvantages of different network configurations for bursty streaming traffic; thus the importance of our study also lies in the careful configuration of RRC parameters in the network and in designing energy-aware network access.
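The effect of the inactivity timer can be illustrated with a deliberately simplified two-state radio model: each packet keeps the radio in the high-power state until the timer expires, and overlapping tails merge. The function name, the packet trace, and the 10 s "default" timer below are illustrative assumptions; only the T1+T2 = 4.5 s value comes from the text above, and real UMTS/LTE state machines have more states than this sketch.

```python
def high_power_time(arrivals, timer):
    """Total seconds spent in the high-power state for a packet trace:
    each packet restarts an inactivity tail of `timer` seconds, and
    tails that overlap are merged rather than double-counted."""
    total, end = 0.0, None
    for t in sorted(arrivals):
        if end is None or t > end:
            total += timer               # a fresh tail starts
        else:
            total += (t + timer) - end   # extend the running tail
        end = t + timer
    return total

trace = [0.0, 0.5, 1.0, 20.0, 20.4, 40.0]     # illustrative bursty arrivals (s)
default = high_power_time(trace, 10.0)         # long, default-style timer
aggressive = high_power_time(trace, 4.5)       # the T1+T2 = 4.5 s proposal
assert aggressive < default
```

For this toy trace the aggressive timer cuts high-power time from 31.4 s to 14.9 s, which is the energy-saving intuition behind the timer-tuning proposals above; the flip side, extra signaling from more state transitions, is exactly what such a model does not capture.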
|
{
"cite_N": [
"@cite_12"
],
"mid": [
"2100210097"
],
"abstract": [
"In 3G cellular networks, the release of radio resources is controlled by inactivity timers. However, the timeout value itself, also known as the tail time, can last up to 15 seconds due to the necessity of trading off resource utilization efficiency for low management overhead and good stability, thus wasting a considerable amount of radio resources and battery energy at user handsets. In this paper, we propose Tail Optimization Protocol (TOP), which enables cooperation between the phone and the radio access network to eliminate the tail whenever possible. Intuitively, applications can often accurately predict a long idle time. Therefore the phone can notify the cellular network on such an imminent tail, allowing the latter to immediately release radio resources. To realize TOP, we utilize a recent proposal of 3GPP specification called fast dormancy, a mechanism for a handset to notify the cellular network for immediate radio resource release. TOP thus requires no change to the cellular infrastructure and only minimal changes to smartphone applications. Our experimental results based on real traces show that with a reasonable prediction accuracy, TOP saves the overall radio energy (up to 17%) and radio resources (up to 14%) by reducing tail times by up to 60%. For applications such as multimedia streaming, TOP can achieve even more significant savings of radio energy (up to 60%) and radio resources (up to 50%)."
]
}
|
1403.3710
|
1969281448
|
This article proposes a novel energy-efficient multimedia delivery system called EStreamer. First, we study the relationship between buffer size at the client, burst-shaped TCP-based multimedia traffic, and energy consumption of wireless network interfaces in smartphones. Based on the study, we design and implement EStreamer for constant bit rate and rate-adaptive streaming. EStreamer can improve battery lifetime by 3x, 1.5x, and 2x while streaming over Wi-Fi, 3G, and 4G, respectively.
|
Compared to our earlier work @cite_15 , our notable new contributions are the following. First, we demonstrate the relationship between the available buffer space at the client, the received burst size, and the energy consumption. After that, we illustrate the implementation of EStreamer for both CBR and rate-adaptive streaming, and then demonstrate how EStreamer quickly fine-tunes the energy-optimal configuration for a given client using a binary search approach. In @cite_13 , we analyzed only video streaming results, whereas in this work we include power and signaling measurement results for both audio and video streaming scenarios.
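The binary-search idea mentioned above can be sketched generically: find the largest burst size the client buffer can still absorb, assuming the feasibility test is monotone (small bursts fit, large ones overflow). The `fits` predicate here is a hypothetical stand-in for real client buffer feedback, and the size bounds are arbitrary; this is not EStreamer's actual tuning loop.

```python
def tune_burst_size(fits, lo=1, hi=1024):
    """Largest burst size in [lo, hi] for which fits(size) is True,
    assuming fits is monotone (True for small sizes, False beyond a
    threshold). Runs in O(log(hi - lo)) probes."""
    best = lo
    while lo <= hi:
        mid = (lo + hi) // 2
        if fits(mid):           # burst of this size fits the client buffer
            best, lo = mid, mid + 1
        else:                   # too large: buffer would overflow
            hi = mid - 1
    return best

# Hypothetical client whose buffer absorbs bursts up to 300 KB:
assert tune_burst_size(lambda size_kb: size_kb <= 300) == 300
```

The appeal of the approach is that each probe only needs a yes/no report from the client, so the energy-optimal (largest safe) burst size is found in a handful of trial bursts.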
|
{
"cite_N": [
"@cite_15",
"@cite_13"
],
"mid": [
"2044762744",
"2046230564"
],
"abstract": [
"Shaping constant bit rate traffic into bursts has been proposed earlier for UDP-based multimedia streaming to save Wi-Fi communication energy of mobile devices. The relationship between the burst size and energy consumption of wireless interfaces is such that the larger the burst size, the lower the energy consumption per bit received, as long as there is no packet loss. However, the relationship between the burst size and energy in case of TCP traffic has not yet been fully uncovered. In this paper, we develop a power consumption model which describes this relationship in wireless multimedia streaming scenarios. Then, we implement a cross-layer stream delivery system, EStreamer. This system relies on a heuristic derived from the model and on client playback buffer status to determine a burst size and provides as small energy consumption as possible without jeopardizing smooth playback. The heuristic greatly simplifies the deployment of EStreamer compared to most existing solutions by ensuring energy savings regardless of the wireless interface being used. We show that in the best cases using EStreamer reduces energy consumption of a mobile device by 65%, 50-60% and 35% while streaming over Wi-Fi, LTE and 3G respectively. Compared with existing energy-aware applications, energy consumption can be reduced by a further 10-55%.",
"Energy consumption of mobile devices is a great concern and streaming applications are among the most power hungry ones. We evaluate the energy saving potential of shaping streaming traffic into bursts before transmitting it over 3G and LTE networks to smartphones. The idea is that in between the bursts, the phone has sufficient time to switch from the high-power active state to low-power states. We investigate the impact of the network parameters, namely inactivity timers and discontinuous reception, on the achievable energy savings and on the radio access network signaling load. The results confirm that traffic shaping is an effective way to save energy, with even up to 60% of energy saved when streaming music over LTE. However, we note large differences in the signaling load. LTE with discontinuous reception and long inactivity timer value achieves the energy savings with no extra signaling load, whereas non-standard Fast Dormancy in 3G can multiply the signaling traffic by a factor of ten."
]
}
|
1403.3458
|
2951157548
|
Let @math be a set of @math pairwise-disjoint polygonal obstacles with a total of @math vertices in the plane. We consider the problem of building a data structure that can quickly compute an @math shortest obstacle-avoiding path between any two query points @math and @math . Previously, a data structure of size @math was constructed in @math time that answers each two-point query in @math time, i.e., the shortest path length is reported in @math time and an actual path is reported in additional @math time, where @math is the number of edges of the output path. In this paper, we build a new data structure of size @math in @math time that answers each query in @math time. Note that @math for any constant @math . Further, we extend our techniques to the weighted rectilinear version in which the "obstacles" of @math are rectilinear regions with "weights" and allow @math paths to travel through them with weighted costs. Our algorithm answers each query in @math time with a data structure of size @math that is built in @math time (note that @math for any constant @math ).
|
For the simple polygon case, in which @math is a single simple polygon, all three types of problems have been solved optimally @cite_24 @cite_33 @cite_7 @cite_35 @cite_10 , in both the Euclidean and @math metrics. Specifically, an @math -size data structure can be built in @math time that answers each two-point Euclidean shortest path query in @math time @cite_24 @cite_7 . Since in a simple polygon a Euclidean shortest path is also an @math shortest path @cite_35 , the results in @cite_24 @cite_7 hold for the @math metric as well.
|
{
"cite_N": [
"@cite_35",
"@cite_33",
"@cite_7",
"@cite_24",
"@cite_10"
],
"mid": [
"2081751185",
"2132339863",
"2004884735",
"2038699153",
""
],
"abstract": [
"In this paper, we show that the universal covering space of a surface can be used to unify previous results on computing paths in a simple polygon. We optimize a given path among obstacles in the plane under the Euclidean and link metrics and under polygonal convex distance functions. Besides revealing connections between the minimum paths under these three distance functions, the framework provided by the universal cover leads to simplified linear-time algorithms for shortest path trees, for minimum-link paths in simple polygons, and for paths restricted to c given orientations.",
"Given a triangulation of a simple polygon P, we present linear-time algorithms for solving a collection of problems concerning shortest paths and visibility within P. These problems include calculation of the collection of all shortest paths inside P from a given source vertex S to all the other vertices of P, calculation of the subpolygon of P consisting of points that are visible from a given segment within P, preprocessing P for fast \"ray shooting\" queries, and several related problems.",
"This note describes a new data structure for answering shortest path queries inside a simple polygon. The new data structure has the same asymptotic performance as the previously known data structure (linear preprocessing after triangulation, logarithmic query time), but it is significantly less complicated.",
"Let P be a simple polygon with n sides. This paper shows how to preprocess the polygon so that, given two query points p and q inside P, the length of the shortest path inside the polygon from p to q can be found in time O(log n). The path itself must be polygonal and can be extracted in additional time proportional to the number of turns it makes. The preprocessing consists of triangulation plus a linear amount of additional work.",
""
]
}
|
1403.3458
|
2951157548
|
Let @math be a set of @math pairwise-disjoint polygonal obstacles with a total of @math vertices in the plane. We consider the problem of building a data structure that can quickly compute an @math shortest obstacle-avoiding path between any two query points @math and @math . Previously, a data structure of size @math was constructed in @math time that answers each two-point query in @math time, i.e., the shortest path length is reported in @math time and an actual path is reported in additional @math time, where @math is the number of edges of the output path. In this paper, we build a new data structure of size @math in @math time that answers each query in @math time. Note that @math for any constant @math . Further, we extend our techniques to the weighted rectilinear version in which the "obstacles" of @math are rectilinear regions with "weights" and allow @math paths to travel through them with weighted costs. Our algorithm answers each query in @math time with a data structure of size @math that is built in @math time (note that @math for any constant @math ).
|
For the weighted region case, in which the "obstacles" allow paths to pass through their interior with weighted costs, Mitchell and Papadimitriou @cite_19 gave an algorithm that finds a weighted Euclidean shortest path in a time of @math times a factor related to the precision of the problem instance. For the weighted rectilinear case, Lee et al. @cite_25 presented two algorithms for finding a weighted @math shortest path, and Chen et al. @cite_20 gave an improved algorithm with @math time and @math space. Chen et al. @cite_20 also presented a data structure for two-point weighted @math shortest path queries among weighted rectilinear obstacles, as mentioned above.
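The weighted rectilinear setting can be made concrete with a toy discretization: Dijkstra on a small weighted grid, where cell weights are per-unit-length traversal costs and an expensive cell plays the role of a weighted "obstacle". This is only a sketch of the cost model; the cited algorithms work on the continuous plane with far better complexity bounds, and the grid, weights, and cost-splitting convention below are illustrative assumptions.

```python
import heapq

def weighted_l1_path(weights, src, dst):
    """Cheapest 4-connected path cost on a grid of per-unit weights.
    Moving between adjacent cells costs the average of their weights
    (one unit of rectilinear length, half spent in each cell)."""
    rows, cols = len(weights), len(weights[0])
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == dst:
            return d
        if d > dist.get((r, c), float("inf")):
            continue                       # stale queue entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + (weights[r][c] + weights[nr][nc]) / 2
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf")

grid = [[1, 1, 1],
        [1, 9, 1],   # expensive weighted region in the middle
        [1, 1, 1]]
# Detouring around the weight-9 cell (cost 4.0) beats cutting through it.
assert weighted_l1_path(grid, (0, 0), (2, 2)) == 4.0
```

Setting an "obstacle" weight to infinity recovers the unweighted obstacle-avoiding case, mirroring the remark in the cited abstracts that the unweighted problem is the special case of weight +∞.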
|
{
"cite_N": [
"@cite_19",
"@cite_25",
"@cite_20"
],
"mid": [
"1989723858",
"2147998016",
"1992176147"
],
"abstract": [
"The problem of determining shortest paths through a weighted planar polygonal subdivision with n vertices is considered. Distances are measured according to a weighted Euclidean metric: The length of a path is defined to be the weighted sum of (Euclidean) lengths of the subpaths within each region. An algorithm that constructs a (restricted) “shortest path map” with respect to a given source point is presented. The output is a partitioning of each edge of the subdivision into intervals of ε-optimality, allowing an ε-optimal path to be traced from the source to any query point along any edge. The algorithm runs in worst-case time O(ES) and requires O(E) space, where E is the number of “events” in our algorithm and S is the time it takes to run a numerical search procedure. In the worst case, E is bounded above by O(n^4) (and we give an O(n^4) lower bound), but it is likely that E will be much smaller in practice. We also show that S is bounded by O(n^4 L), where L is the precision of the problem instance (including the number of bits in the user-specified tolerance ε). Again, the value of S should be smaller in practice. The algorithm applies the “continuous Dijkstra” paradigm and exploits the fact that shortest paths obey Snell's Law of Refraction at region boundaries, a local optimality property of shortest paths that is well known from the analogous optics model. The algorithm generalizes to the multi-source case to compute Voronoi diagrams.",
"We consider a rectilinear shortest path problem among weighted obstacles. Instead of restricting a path to totally avoid obstacles, we allow a path to pass through them at extra cost. The extra costs are represented by the weights of the obstacles. We aim to find a shortest rectilinear path between two distinguished points among a set of weighted obstacles. The unweighted case is a special case of this problem in which the weight of each obstacle is +∞. By using a graph-theoretical approach, we obtain two algorithms which run in O(n log^2 n) time and O(n log n) space and in O(n log^{3/2} n) time and space, respectively, where n is the number of the vertices of the obstacles.",
"We study the problems of processing single-source and two-point shortest path queries among weighted polygonal obstacles in the rectilinear plane. For the single-source case, we construct a data structure in O(n log^{3/2} n) time and O(n log n) space, where n is the number of obstacle vertices; this data structure enables us to report the length of a shortest path between the source and any query point in O(log n) time, and an actual shortest path in O(log n + k) time, where k is the number of edges on the output path. For the two-point case, we construct a data structure in O(n^2 log^2 n) time and space; this data structure enables us to report the length of a shortest path between two arbitrary query points in O(log^2 n) time, and an actual shortest path in O(log^2 n + k) time. Our work improves and generalizes the previously best-known results on computing rectilinear shortest paths among weighted polygonal obstacles. We also apply our techniques to processing two-point L1 shortest obstacle-avoiding path queries among arbitrary (i.e., not necessarily rectilinear) polygonal obstacles in the plane. No algorithm for processing two-point shortest path queries among weighted obstacles was previously known."
]
}
|
1403.3562
|
1894726941
|
Biclustering has proved to be a powerful data analysis technique due to its wide success in various application domains. However, the existing literature presents efficient solutions only for enumerating maximal biclusters with constant values, or heuristic-based approaches which cannot find all biclusters or even ensure the maximality of the obtained biclusters. Here, we present a general family of biclustering algorithms for enumerating all maximal biclusters with (i) constant values on rows, (ii) constant values on columns, or (iii) coherent values. Versions for perfect and for perturbed biclusters are provided. Our algorithms have four key properties (only the algorithm for perturbed biclusters with coherent values fails to exhibit the first property): they are (1) efficient (take polynomial time per pattern), (2) complete (find all maximal biclusters), (3) correct (all biclusters attend the user-defined measure of similarity), and (4) non-redundant (all the obtained biclusters are maximal and the same bicluster is not enumerated twice). They are based on a generalization of an efficient formal concept analysis algorithm called In-Close2. Experimental results point to the necessity of having efficient enumerative biclustering algorithms and provide a valuable insight into the scalability of our family of algorithms and its sensitivity to user-defined parameters.
|
The RCB algorithm @cite_58 is based on an FPM algorithm called Apriori @cite_26 , whose worst-case running time is exponential in the number of attributes. Thus, Apriori and the algorithms based on it are not efficient. It is also noteworthy that Apriori mines frequent itemsets, not closed frequent itemsets; thus, it produces many redundant biclusters. RCB adopts a two-step process. First, all the square submatrices that qualify as a CTV bicluster are enumerated. Second, these square CTV biclusters are merged to form rectangular CTV biclusters of arbitrary sizes.
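The Apriori level-wise loop that RCB builds on can be sketched in its classic market-basket form (illustrative only; RCB itself mines range-constrained submatrices of genetic-interaction data, not itemsets). Note that no closed-pattern check is performed, which is precisely why Apriori-style mining yields redundant patterns.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """All itemsets contained in at least `min_support` transactions,
    found level by level: count candidates, keep the frequent ones,
    then join frequent k-itemsets into (k+1)-candidates."""
    items = sorted({i for t in transactions for i in t})
    frequent = []
    level = [frozenset([i]) for i in items]
    while level:
        counts = {c: sum(c <= t for t in transactions) for c in level}
        kept = [c for c, n in counts.items() if n >= min_support]
        frequent.extend(kept)
        # Join step: two frequent k-itemsets differing in exactly one
        # item produce a (k+1)-candidate for the next level.
        level = list({a | b for a, b in combinations(kept, 2)
                      if len(a | b) == len(a) + 1})
    return frequent

baskets = [frozenset("abc"), frozenset("ab"), frozenset("ac"), frozenset("bc")]
assert len(apriori(baskets, 2)) == 6   # a, b, c, ab, ac, bc
```

In this toy run all three pairs are frequent but the triple is not, and every frequent itemset is reported even when a superset has the same support, illustrating the redundancy the text attributes to non-closed mining.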
|
{
"cite_N": [
"@cite_58",
"@cite_26"
],
"mid": [
"2309290879",
"1484413656"
],
"abstract": [
"Genetic Interaction (GI) data provides a means for exploring the structure and function of pathways in a cell. Coherent value bicliques (submatrices) in GI data represents functionally similar gene modules or protein complexes. However, no systematic approach has been proposed for exhaustively enumerating all coherent value submatrices in such data sets, which is the problem addressed in this paper. Using a monotonic range measure to capture the coherence of values in a submatrix of an input data matrix, we propose a two-step Apriori-based algorithm for discovering all nearly constant value submatrices, referred to as Range Constrained Blocks. By systematic evaluation on an extensive genetic interaction data set, we show that the coherent value submatrices represent groups of genes that are functionally related than the submatrices with diverse values. We also show that our approach can exhaustively find all the submatrices with a range less than a given threshold, while the other competing approaches can not find all such submatrices.",
"We consider the problem of discovering association rules between items in a large database of sales transactions. We present two new algorithms for solving thii problem that are fundamentally different from the known algorithms. Empirical evaluation shows that these algorithms outperform the known algorithms by factors ranging from three for small problems to more than an order of magnitude for large problems. We also show how the best features of the two proposed algorithms can be combined into a hybrid algorithm, called AprioriHybrid. Scale-up experiments show that AprioriHybrid scales linearly with the number of transactions. AprioriHybrid also has excellent scale-up properties with respect to the transaction size and the number of items in the database."
]
}
|
1403.3562
|
1894726941
|
Biclustering has proved to be a powerful data analysis technique due to its wide success in various application domains. However, the existing literature presents efficient solutions only for enumerating maximal biclusters with constant values, or heuristic-based approaches which can not find all biclusters or even support the maximality of the obtained biclusters. Here, we present a general family of biclustering algorithms for enumerating all maximal biclusters with (i) constant values on rows, (ii) constant values on columns, or (iii) coherent values. Versions for perfect and for perturbed biclusters are provided. Our algorithms have four key properties (just the algorithm for perturbed biclusters with coherent values fails to exhibit the first property): they are (1) efficient (take polynomial time per pattern), (2) complete (find all maximal biclusters), (3) correct (all biclusters attend the user-defined measure of similarity), and (4) non-redundant (all the obtained biclusters are maximal and the same bicluster is not enumerated twice). They are based on a generalization of an efficient formal concept analysis algorithm called In-Close2. Experimental results point to the necessity of having efficient enumerative biclustering algorithms and provide a valuable insight into the scalability of our family of algorithms and its sensitivity to user-defined parameters.
|
The NBS-Miner algorithm @cite_18 mines all maximal CTV biclusters of a numerical dataset. The algorithm starts with the lattice @math (whose bottom is @math and top is @math ), i.e., the lattice containing all possible biclusters. Then, NBS-Miner explores its sublattices using three functions: enumeration, pruning, and propagation. The enumeration function recursively splits the current sublattice into two new sublattices. The pruning function discards sublattices that do not attend the restriction of similarity (imposed by @math , see Eq. ) or the maximality requirement. The propagation function reduces the search space of a sublattice so that the entire current sublattice need not be considered. The algorithm reports a bicluster whenever it reaches a sublattice whose top is equal to its bottom.
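The similarity restriction and the maximality check that drive NBS-Miner's pruning can be sketched as follows. This is our own illustrative code (the names and the list-of-lists matrix encoding are assumptions), not the NBS-Miner implementation:

```python
def in_range(matrix, rows, cols, eps):
    """True iff the submatrix induced by rows x cols has max - min <= eps,
    i.e. it satisfies a constant-value similarity restriction."""
    values = [matrix[r][c] for r in rows for c in cols]
    return max(values) - min(values) <= eps

def is_maximal(matrix, rows, cols, eps):
    """A qualifying bicluster is maximal if no extra row or column can be
    added without violating the range restriction."""
    n, m = len(matrix), len(matrix[0])
    extra_rows = [r for r in range(n) if r not in rows]
    extra_cols = [c for c in range(m) if c not in cols]
    return (all(not in_range(matrix, rows | {r}, cols, eps) for r in extra_rows)
            and all(not in_range(matrix, rows, cols | {c}, eps) for c in extra_cols))
```

For example, in the matrix [[1, 1, 9], [1, 2, 9], [5, 5, 5]] with eps = 1, the submatrix on rows {0, 1} and columns {0, 1} qualifies and is maximal, since adding row 2 or column 2 breaks the range restriction.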
|
{
"cite_N": [
"@cite_18"
],
"mid": [
"2122720936"
],
"abstract": [
"Thanks to an important research effort over the last few years, inductive queries on set patterns and complete solvers which can evaluate them on large 0/1 data sets have proved extremely useful. However, for many application domains, the raw data is numerical (matrices of real numbers whose dimensions denote objects and properties). Therefore, using efficient 0/1 mining techniques requires tedious Boolean property encoding phases. This is, e.g., the case when considering microarray data mining and its impact for knowledge discovery in molecular biology. We consider the possibility of mining numerical data directly to extract collections of relevant bi-sets, i.e., couples of associated sets of objects and attributes which satisfy some user-defined constraints. Not only do we propose a new pattern domain, but we also introduce a complete solver for computing the so-called numerical bi-sets. Preliminary experimental validation is given."
]
}
|
1403.3562
|
1894726941
|
Biclustering has proved to be a powerful data analysis technique due to its wide success in various application domains. However, the existing literature presents efficient solutions only for enumerating maximal biclusters with constant values, or heuristic-based approaches which can not find all biclusters or even support the maximality of the obtained biclusters. Here, we present a general family of biclustering algorithms for enumerating all maximal biclusters with (i) constant values on rows, (ii) constant values on columns, or (iii) coherent values. Versions for perfect and for perturbed biclusters are provided. Our algorithms have four key properties (just the algorithm for perturbed biclusters with coherent values fails to exhibit the first property): they are (1) efficient (take polynomial time per pattern), (2) complete (find all maximal biclusters), (3) correct (all biclusters attend the user-defined measure of similarity), and (4) non-redundant (all the obtained biclusters are maximal and the same bicluster is not enumerated twice). They are based on a generalization of an efficient formal concept analysis algorithm called In-Close2. Experimental results point to the necessity of having efficient enumerative biclustering algorithms and provide a valuable insight into the scalability of our family of algorithms and its sensitivity to user-defined parameters.
|
Kaytoue @cite_32 proposed two FCA-based methods to enumerate CTV biclusters. The first is based on the discretization of the numerical data matrix using @cite_9 . Let @math be the set of values that an object @math can take for an attribute @math . First, they compute all classes of tolerance @cite_40 from @math . Then, they create one formal context for each class of tolerance and use standard FCA algorithms to enumerate the formal concepts from them. Each formal concept corresponds to a maximal CTV bicluster. The formal contexts are created in a way that avoids finding redundant CTV biclusters, but at the price of missing some biclusters. Since the resulting binary tables may be numerous, depending on the number of elements of @math and the parameter @math , this method is not feasible in many real-world scenarios. The second method is divided into two phases. In the first, it enumerates all CVC biclusters using interval pattern structures (IPS) @cite_85 . It is noteworthy that this method returns redundant CVC biclusters. In the second phase, CTV biclusters are extracted from the CVC biclusters, but this process is not clearly specified, since a single CVC bicluster can give rise to many CTV biclusters.
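A sketch of the tolerance classes on which the first method relies: for the relation x ~ y iff |x - y| <= theta over a set of numbers, the classes are the maximal windows of the sorted values. The code below is our own illustration (names are hypothetical), not Kaytoue's:

```python
def tolerance_classes(values, theta):
    """Maximal subsets of `values` in which every pair of elements differs
    by at most theta. Over numbers, these are exactly the maximal windows
    of the sorted, de-duplicated value list."""
    vals = sorted(set(values))
    classes = []
    last_end = -1  # right end of the last maximal window kept
    for start in range(len(vals)):
        # widen the window starting at vals[start] as far as theta allows
        end = start
        while end + 1 < len(vals) and vals[end + 1] - vals[start] <= theta:
            end += 1
        # windows not extending past the previous one are contained in it
        if end > last_end:
            classes.append(vals[start:end + 1])
            last_end = end
    return classes
```

For example, with values {1, 2, 3, 7, 8} and theta = 1, the classes are [1, 2], [2, 3], and [7, 8]; note that classes may overlap, which is why one formal context is built per class.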
|
{
"cite_N": [
"@cite_40",
"@cite_9",
"@cite_85",
"@cite_32"
],
"mid": [
"2136027283",
"1503729935",
"1852340955",
"2115393608"
],
"abstract": [
"This paper shows how to embed a similarity relation between complex descriptions in concept lattices. We formalize similarity by a tolerance relation: objects are grouped within a same concept when having similar descriptions, extending the ability of FCA to deal with complex data. We propose two different approaches. A first classical manner defines a discretization procedure. A second way consists in representing data by pattern structures, from which a pattern concept lattice can be constructed directly. In this case, considering a tolerance relation can be mathematically defined by a projection in a meet-semi-lattice. This allows to use concept lattices for their knowledge representation and reasoning abilities without transforming data. We show finally that resulting lattices are useful for solving information fusion problems.",
"From the Publisher: This is the first textbook on formal concept analysis. It gives a systematic presentation of the mathematical foundations and their relation to applications in computer science, especially in data analysis and knowledge processing. Above all, it presents graphical methods for representing conceptual systems that have proved themselves in communicating knowledge. Theory and graphical representation are thus closely coupled together. The mathematical foundations are treated thoroughly and illuminated by means of numerous examples.",
"Pattern structures consist of objects with descriptions (called patterns) that allow a semilattice operation on them. Pattern structures arise naturally from ordered data, e.g., from labeled graphs ordered by graph morphisms. It is shown that pattern structures can be reduced to formal contexts, however sometimes processing the former is often more efficient and obvious than processing the latter. Concepts, implications, plausible hypotheses, and classifications are defined for data given by pattern structures. Since computation in pattern structures may be intractable, approximations of patterns by means of projections are introduced. It is shown how concepts, implications, hypotheses, and classifications in projected pattern structures are related to those in original ones.",
"A numerical dataset is usually represented by a table where each entry denotes the value taken by an object in line for an attribute in column. A bicluster in a numerical data table is a subtable with close values different from values outside the subtable. Traditionally, largest biclusters were found by means of methods based on linear algebra. We propose an alternative approach based on concept lattices and lattices of interval pattern structures. In other words, this paper shows how formal concept analysis originally tackles the problem of biclustering and provides interesting perspectives of research."
]
}
|
1403.3562
|
1894726941
|
Biclustering has proved to be a powerful data analysis technique due to its wide success in various application domains. However, the existing literature presents efficient solutions only for enumerating maximal biclusters with constant values, or heuristic-based approaches which can not find all biclusters or even support the maximality of the obtained biclusters. Here, we present a general family of biclustering algorithms for enumerating all maximal biclusters with (i) constant values on rows, (ii) constant values on columns, or (iii) coherent values. Versions for perfect and for perturbed biclusters are provided. Our algorithms have four key properties (just the algorithm for perturbed biclusters with coherent values fails to exhibit the first property): they are (1) efficient (take polynomial time per pattern), (2) complete (find all maximal biclusters), (3) correct (all biclusters attend the user-defined measure of similarity), and (4) non-redundant (all the obtained biclusters are maximal and the same bicluster is not enumerated twice). They are based on a generalization of an efficient formal concept analysis algorithm called In-Close2. Experimental results point to the necessity of having efficient enumerative biclustering algorithms and provide a valuable insight into the scalability of our family of algorithms and its sensitivity to user-defined parameters.
|
In @cite_23 , the authors also use tolerance classes over the set of numbers @math and create one formal context for each class of tolerance, but they propose a new algorithm, called TriMax, to mine the CTV biclusters from these formal contexts. TriMax performs a complete, correct, and non-redundant enumeration of all maximal CTV biclusters in a numerical dataset. However, due to the scaling process, TriMax may also be infeasible in many real-world scenarios.
|
{
"cite_N": [
"@cite_23"
],
"mid": [
"2088347683"
],
"abstract": [
"Biclustering numerical data became a popular data-mining task at the beginning of 2000’s, especially for gene expression data analysis and recommender systems. A bicluster reflects a strong association between a subset of objects and a subset of attributes in a numerical object attribute data-table. So-called biclusters of similar values can be thought as maximal sub-tables with close values. Only few methods address a complete, correct and non-redundant enumeration of such patterns, a well-known intractable problem, while no formal framework exists. We introduce important links between biclustering and Formal Concept Analysis (FCA). Indeed, FCA is known to be, among others, a methodology for biclustering binary data. Handling numerical data is not direct, and we argue that Triadic Concept Analysis (TCA), the extension of FCA to ternary relations, provides a powerful mathematical and algorithmic framework for biclustering numerical data. We discuss hence both theoretical and computational aspects on biclustering numerical data with triadic concept analysis. These results also scale to n-dimensional numerical datasets."
]
}
|
1403.3562
|
1894726941
|
Biclustering has proved to be a powerful data analysis technique due to its wide success in various application domains. However, the existing literature presents efficient solutions only for enumerating maximal biclusters with constant values, or heuristic-based approaches which can not find all biclusters or even support the maximality of the obtained biclusters. Here, we present a general family of biclustering algorithms for enumerating all maximal biclusters with (i) constant values on rows, (ii) constant values on columns, or (iii) coherent values. Versions for perfect and for perturbed biclusters are provided. Our algorithms have four key properties (just the algorithm for perturbed biclusters with coherent values fails to exhibit the first property): they are (1) efficient (take polynomial time per pattern), (2) complete (find all maximal biclusters), (3) correct (all biclusters attend the user-defined measure of similarity), and (4) non-redundant (all the obtained biclusters are maximal and the same bicluster is not enumerated twice). They are based on a generalization of an efficient formal concept analysis algorithm called In-Close2. Experimental results point to the necessity of having efficient enumerative biclustering algorithms and provide a valuable insight into the scalability of our family of algorithms and its sensitivity to user-defined parameters.
|
RAP @cite_78 is also based on Apriori @cite_26 . The authors did not describe their strategy to avoid redundancy, but we conjecture that the best that can be done is a pairwise comparison of biclusters with @math and @math columns.
|
{
"cite_N": [
"@cite_26",
"@cite_78"
],
"mid": [
"1484413656",
"2100038249"
],
"abstract": [
"We consider the problem of discovering association rules between items in a large database of sales transactions. We present two new algorithms for solving this problem that are fundamentally different from the known algorithms. Empirical evaluation shows that these algorithms outperform the known algorithms by factors ranging from three for small problems to more than an order of magnitude for large problems. We also show how the best features of the two proposed algorithms can be combined into a hybrid algorithm, called AprioriHybrid. Scale-up experiments show that AprioriHybrid scales linearly with the number of transactions. AprioriHybrid also has excellent scale-up properties with respect to the transaction size and the number of items in the database.",
"The discovery of biclusters, which denote groups of items that show coherent values across a subset of all the transactions in a data set, is an important type of analysis performed on real-valued data sets in various domains, such as biology. Several algorithms have been proposed to find different types of biclusters in such data sets. However, these algorithms are unable to search the space of all possible biclusters exhaustively. Pattern mining algorithms in association analysis also essentially produce biclusters as their result, since the patterns consist of items that are supported by a subset of all the transactions. However, a major limitation of the numerous techniques developed in association analysis is that they are only able to analyze data sets with binary and or categorical variables, and their application to real-valued data sets often involves some lossy transformation such as discretization or binarization of the attributes. In this paper, we propose a novel association analysis framework for exhaustively and efficiently mining \"range support\" patterns from such a data set. On one hand, this framework reduces the loss of information incurred by the binarization- and discretization-based approaches, and on the other, it enables the exhaustive discovery of coherent biclusters. We compared the performance of our framework with two standard biclustering algorithms through the evaluation of the similarity of the cellular functions of the genes constituting the patterns biclusters derived by these algorithms from microarray data. These experiments show that the real-valued patterns discovered by our framework are better enriched by small biologically interesting functional classes. Also, through specific examples, we demonstrate the ability of the RAP framework to discover functionally enriched patterns that are not found by the commonly used biclustering algorithm ISA. 
The source code and data sets used in this paper, as well as the supplementary material, are available at http: www.cs.umn.edu vk gaurav rap."
]
}
|
1403.3562
|
1894726941
|
Biclustering has proved to be a powerful data analysis technique due to its wide success in various application domains. However, the existing literature presents efficient solutions only for enumerating maximal biclusters with constant values, or heuristic-based approaches which can not find all biclusters or even support the maximality of the obtained biclusters. Here, we present a general family of biclustering algorithms for enumerating all maximal biclusters with (i) constant values on rows, (ii) constant values on columns, or (iii) coherent values. Versions for perfect and for perturbed biclusters are provided. Our algorithms have four key properties (just the algorithm for perturbed biclusters with coherent values fails to exhibit the first property): they are (1) efficient (take polynomial time per pattern), (2) complete (find all maximal biclusters), (3) correct (all biclusters attend the user-defined measure of similarity), and (4) non-redundant (all the obtained biclusters are maximal and the same bicluster is not enumerated twice). They are based on a generalization of an efficient formal concept analysis algorithm called In-Close2. Experimental results point to the necessity of having efficient enumerative biclustering algorithms and provide a valuable insight into the scalability of our family of algorithms and its sensitivity to user-defined parameters.
|
pCluster @cite_75 was the first deterministic algorithm with an enumerative approach to mining CHV biclusters. pCluster computes all row-maximal biclusters with two columns and all column-maximal biclusters with two rows, prunes the unpromising ones, and stores the remaining column-maximal biclusters in a prefix tree. Then, pCluster performs a depth-first search in this prefix tree in order to mine larger biclusters. However, pCluster has several shortcomings: it does not find all biclusters, can return biclusters that do not attend the user-defined measure of similarity, and produces redundant biclusters. Furthermore, pCluster's computational complexity is exponential in the number of attributes.
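pCluster's first step can be sketched as follows: for each pair of columns, the row-maximal 2-column biclusters are the maximal groups of rows whose column differences all lie within a window of width delta. This is our own illustrative code (the names and the minimum of two rows per group are assumptions, not the pCluster implementation):

```python
from itertools import combinations

def two_column_seeds(matrix, delta):
    """For each column pair (a, b), return the maximal row sets whose
    differences matrix[r][a] - matrix[r][b] lie within a window of width
    delta, i.e. the row-maximal 2-column biclusters pCluster starts from."""
    n, m = len(matrix), len(matrix[0])
    seeds = []
    for a, b in combinations(range(m), 2):
        def d(r):
            return matrix[r][a] - matrix[r][b]
        order = sorted(range(n), key=d)  # rows ordered by column difference
        last_end = -1
        for start in range(n):
            # widen the window of rows whose difference spread is <= delta
            end = start
            while end + 1 < n and d(order[end + 1]) - d(order[start]) <= delta:
                end += 1
            # keep only row-maximal groups with at least two rows
            if end > last_end and end > start:
                seeds.append((frozenset(order[start:end + 1]), (a, b)))
                last_end = end
    return seeds
```

For example, in the matrix [[1, 2], [3, 4], [10, 1]] with delta = 1, rows 0 and 1 share the column difference -1 and form the only seed on columns (0, 1).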
|
{
"cite_N": [
"@cite_75"
],
"mid": [
"1983524036"
],
"abstract": [
"Clustering is the process of grouping a set of objects into classes of similar objects. Although definitions of similarity vary from one clustering model to another, in most of these models the concept of similarity is based on distances, e.g., Euclidean distance or cosine distance. In other words, similar objects are required to have close values on at least a set of dimensions. In this paper, we explore a more general type of similarity. Under the pCluster model we proposed, two objects are similar if they exhibit a coherent pattern on a subset of dimensions. For instance, in DNA microarray analysis, the expression levels of two genes may rise and fall synchronously in response to a set of environmental stimuli. Although the magnitude of their expression levels may not be close, the patterns they exhibit can be very much alike. Discovery of such clusters of genes is essential in revealing significant connections in gene regulatory networks. E-commerce applications, such as collaborative filtering, can also benefit from the new model, which captures not only the closeness of values of certain leading indicators but also the closeness of (purchasing, browsing, etc.) patterns exhibited by the customers. Our paper introduces an effective algorithm to detect such clusters, and we perform tests on several real and synthetic data sets to show its effectiveness."
]
}
|
1403.3562
|
1894726941
|
Biclustering has proved to be a powerful data analysis technique due to its wide success in various application domains. However, the existing literature presents efficient solutions only for enumerating maximal biclusters with constant values, or heuristic-based approaches which can not find all biclusters or even support the maximality of the obtained biclusters. Here, we present a general family of biclustering algorithms for enumerating all maximal biclusters with (i) constant values on rows, (ii) constant values on columns, or (iii) coherent values. Versions for perfect and for perturbed biclusters are provided. Our algorithms have four key properties (just the algorithm for perturbed biclusters with coherent values fails to exhibit the first property): they are (1) efficient (take polynomial time per pattern), (2) complete (find all maximal biclusters), (3) correct (all biclusters attend the user-defined measure of similarity), and (4) non-redundant (all the obtained biclusters are maximal and the same bicluster is not enumerated twice). They are based on a generalization of an efficient formal concept analysis algorithm called In-Close2. Experimental results point to the necessity of having efficient enumerative biclustering algorithms and provide a valuable insight into the scalability of our family of algorithms and its sensitivity to user-defined parameters.
|
Maple @cite_4 is an improved version of pCluster and is closer to an actual enumerative algorithm. It returns only non-redundant biclusters, but it lacks an efficient mechanism for doing so: for each candidate bicluster, Maple must compare against all previously generated biclusters to avoid redundancy. Moreover, there are two scenarios in which Maple fails to perform a complete and correct enumeration of all maximal biclusters. If two biclusters have the same set of objects and share some attributes, Maple returns a single bicluster containing both of them, thus violating the user-defined measure of similarity. Maple may also miss biclusters because of its routine for pruning unpromising candidates: Maple keeps an attribute-list ordered by some criterion, and if a bicluster has a subset of the objects and a superset of the attributes of another bicluster, with its extra attributes appearing later in Maple's attribute-list, Maple prunes it incorrectly. The worst-case time of Maple's search is also exponential in the number of attributes.
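The pairwise redundancy check described above amounts to a quadratic containment filter, sketched below with our own (hypothetical) names. Every candidate is compared against every other one, which is precisely what makes this approach inefficient:

```python
def filter_maximal(biclusters):
    """Keep only the biclusters (rows, cols) not contained in another one.

    biclusters: list of (frozenset of rows, frozenset of cols) pairs.
    Every candidate is checked against all others, so the filter takes
    time quadratic in the number of candidates.
    """
    biclusters = list(dict.fromkeys(biclusters))  # drop exact duplicates
    maximal = []
    for i, (r, c) in enumerate(biclusters):
        contained = any(r <= r2 and c <= c2
                        for j, (r2, c2) in enumerate(biclusters) if j != i)
        if not contained:
            maximal.append((r, c))
    return maximal
```

An enumerative algorithm with built-in non-redundancy avoids exactly this post-hoc filtering step.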
|
{
"cite_N": [
"@cite_4"
],
"mid": [
"2171297427"
],
"abstract": [
"Pattern-based clustering is important in many applications, such as DNA micro-array data analysis, automatic recommendation systems and target marketing systems. However, pattern-based clustering in large databases is challenging. On the one hand, there can be a huge number of clusters and many of them can be redundant and thus make the pattern-based clustering ineffective. On the other hand, the previous proposed methods may not be efficient or scalable in mining large databases. We study the problem of maximal pattern-based clustering. Redundant clusters are avoided completely by mining only the maximal pattern-based clusters. MaPle, an efficient and scalable mining algorithm is developed. It conducts a depth-first, divide-and-conquer search and prunes unnecessary branches smartly. Our extensive performance study on both synthetic data sets and real data sets shows that maximal pattern-based clustering is effective. It reduces the number of clusters substantially. Moreover, MaPle is more efficient and scalable than the previously proposed pattern-based clustering methods in mining large databases."
]
}
|
1403.3109
|
1950144410
|
We formulate sparse support recovery as a salient set identification problem and use information-theoretic analyses to characterize the recovery performance and sample complexity. We consider a very general model where we are not restricted to linear models or specific distributions. We state non-asymptotic bounds on recovery probability and a tight mutual information formula for sample complexity. We evaluate our bounds for applications such as sparse linear regression and explicitly characterize effects of correlation or noisy features on recovery performance. We show improvements upon previous work and identify gaps between the performance of recovery algorithms and fundamental information.
|
Unifying framework through Markovianity: Much of the literature on sparse recovery is specialized, with tailored algorithms for different problems: for instance, lasso for linear regression @cite_32 @cite_28 , relaxed integer programs for group testing @cite_25 , convex programs for 1-bit quantization @cite_0 , projected gradient descent for sparse regression with noisy and missing data @cite_20 , and other general forms of penalization. While all of these problems share an underlying sparse structure, it is conceptually unclear, from a purely information-theoretic (IT) perspective, how they come together from an inference point of view. Our Markovian viewpoint unifies these different sparse problems from an inference perspective.
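To make the contrast concrete, the following is a minimal pure-Python sketch of the lasso's cyclic coordinate-descent solver, one of the tailored algorithms mentioned above. It is our own illustrative implementation (names are hypothetical), not code from @cite_32 or @cite_28 :

```python
def lasso_cd(X, y, lam, n_iter=200):
    """Minimize 0.5 * ||y - X beta||^2 + lam * ||beta||_1 by cyclic
    coordinate descent with soft-thresholding.

    X: list of rows (assumed to have no all-zero column); y: responses.
    """
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    col_sq = [sum(X[i][j] ** 2 for i in range(n)) for j in range(p)]
    for _ in range(n_iter):
        for j in range(p):
            # correlation of column j with the partial residual excluding j
            rho = sum(X[i][j] * (y[i] - sum(X[i][k] * beta[k]
                                            for k in range(p) if k != j))
                      for i in range(n))
            # soft-thresholding: small correlations are set exactly to zero,
            # which is what produces sparse supports
            if rho > lam:
                beta[j] = (rho - lam) / col_sq[j]
            elif rho < -lam:
                beta[j] = (rho + lam) / col_sq[j]
            else:
                beta[j] = 0.0
    return beta
```

With an orthogonal design, e.g. X the 2x2 identity, y = (3, 0.5), and lam = 1, this recovers the soft-thresholded solution beta = (2, 0), illustrating how the l1 penalty zeroes out the small coefficient.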
|
{
"cite_N": [
"@cite_28",
"@cite_32",
"@cite_0",
"@cite_25",
"@cite_20"
],
"mid": [
"2127300249",
"2115447612",
"2964322027",
"2155658392",
"2099210013"
],
"abstract": [
"The problem of consistently estimating the sparsity pattern of a vector β* ∈ R^p based on observations contaminated by noise arises in various contexts, including signal denoising, sparse approximation, compressed sensing, and model selection. We analyze the behavior of l1-constrained quadratic programming (QP), also referred to as the Lasso, for recovering the sparsity pattern. Our main result is to establish precise conditions on the problem dimension p, the number k of nonzero elements in β*, and the number of observations n that are necessary and sufficient for sparsity pattern recovery using the Lasso. We first analyze the case of observations made using deterministic design matrices and sub-Gaussian additive noise, and provide sufficient conditions for support recovery and l∞-error bounds, as well as results showing the necessity of incoherence and bounds on the minimum value. We then turn to the case of random designs, in which each row of the design is drawn from a N(0, Σ) ensemble. For a broad class of Gaussian ensembles satisfying mutual incoherence conditions, we compute explicit values of thresholds 0 < θl ≤ θu < +∞ with the following properties: for any δ > 0, if n > 2(θu + δ)k log(p - k), then the Lasso succeeds in recovering the sparsity pattern with probability converging to one for large problems, whereas for n < 2(θl - δ)k log(p - k), the probability of successful recovery converges to zero. For the special case of the uniform Gaussian ensemble (Σ = I p×p), we show that θl = θu = 1, so that the precise threshold n = 2k log(p - k) is exactly determined.",
"We consider the fundamental problem of estimating the mean of a vector y=Xβ+z, where X is an n×p design matrix in which one can have far more variables than observations, and z is a stochastic error term -— the so-called “p>n” setup. When β is sparse, or, more generally, when there is a sparse subset of covariates providing a close approximation to the unknown mean vector, we ask whether or not it is possible to accurately estimate Xβ using a computationally tractable algorithm. We show that, in a surprisingly wide range of situations, the lasso happens to nearly select the best subset of variables. Quantitatively speaking, we prove that solving a simple quadratic program achieves a squared error within a logarithmic factor of the ideal mean squared error that one would achieve with an oracle supplying perfect information about which variables should and should not be included in the model. Interestingly, our results describe the average performance of the lasso; that is, the performance one can expect in an vast majority of cases where Xβ is a sparse or nearly sparse superposition of variables, but not in all cases. Our results are nonasymptotic and widely applicable, since they simply require that pairs of predictor variables are not too collinear.",
"This paper develops theoretical results regarding noisy 1-bit compressed sensing and sparse binomial regression. We demonstrate that a single convex program gives an accurate estimate of the signal, or coefficient vector, for both of these models. We show that an s-sparse signal in R^n can be accurately estimated from m = O(s log(n/s)) single-bit measurements using a simple convex program. This remains true even if each measurement bit is flipped with probability nearly 1/2. Worst-case (adversarial) noise can also be accounted for, and uniform results that hold for all sparse inputs are derived as well. In the terminology of sparse logistic regression, we show that O(s log(2n/s)) Bernoulli trials are sufficient to estimate a coefficient vector in R^n which is approximately s-sparse. Moreover, the same convex program works for virtually all generalized linear models, in which the link function may be unknown. To our knowledge, these are the first results that tie together the theory of sparse logistic regression to 1-bit compressed sensing. Our results apply to general signal structures aside from sparsity; one only needs to know the size of the set where signals reside. The size is given by the mean width of K, a computable quantity whose square serves as a robust extension of the dimension.",
"We present computationally efficient and provably correct algorithms with near-optimal sample-complexity for noisy non-adaptive group testing. Group testing involves grouping arbitrary subsets of items into pools. Each pool is then tested to identify the defective items, which are usually assumed to be sparsely distributed. We consider random non-adaptive pooling where pools are selected randomly and independently of the test outcomes. Our noisy scenario accounts for both false negatives and false positives for the test outcomes. Inspired by compressive sensing algorithms we introduce four novel computationally efficient decoding algorithms for group testing, CBP via Linear Programming (CBP-LP), NCBP-LP (Noisy CBP-LP), and the two related algorithms NCBP-SLP+ and NCBP-SLP- (“Simple” NCBP-LP). The first of these algorithms deals with the noiseless measurement scenario, and the next three with the noisy measurement scenario. We derive explicit sample-complexity bounds — with all constants made explicit — for these algorithms as a function of the desired error probability; the noise parameters; the number of items; and the size of the defective set (or an upper bound on it). We show that the sample-complexities of our algorithms are near-optimal with respect to known information-theoretic bounds.",
"Although the standard formulations of prediction problems involve fully-observed and noiseless data drawn in an i.i.d. manner, many applications involve noisy and or missing data, possibly involving dependence, as well. We study these issues in the context of high-dimensional sparse linear regression, and propose novel estimators for the cases of noisy, missing and or dependent data. Many standard approaches to noisy or missing data, such as those using the EM algorithm, lead to optimization problems that are inherently nonconvex, and it is difficult to establish theoretical guarantees on practical algorithms. While our approach also involves optimizing nonconvex programs, we are able to both analyze the statistical error associated with any global optimum, and more surprisingly, to prove that a simple algorithm based on projected gradient descent will converge in polynomial time to a small neighborhood of the set of all global minimizers. On the statistical side, we provide nonasymptotic bounds that hold with high probability for the cases of noisy, missing and or dependent data. On the computational side, we prove that under the same types of conditions required for statistical consistency, the projected gradient descent algorithm is guaranteed to converge at a geometric rate to a near-global minimizer. We illustrate these theoretical predictions with simulations, showing close agreement with the predicted scalings."
]
}
|
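The noiseless decoding step behind group-testing recovery, as discussed in the record above, can be illustrated with the classic COMP rule: any item that appears in a negative pool cannot be defective. The sketch below is illustrative only — a plain Bernoulli pooling design with made-up sizes and seed, not the CBP-LP/NCBP-LP algorithms of the cited paper:

```python
import numpy as np

def comp_decode(A, outcomes):
    """COMP decoding for noiseless group testing: eliminate every item
    that participates in at least one negative pool."""
    possibly_defective = np.ones(A.shape[1], dtype=bool)
    for pool, positive in zip(A, outcomes):
        if not positive:
            possibly_defective &= ~pool.astype(bool)
    return np.flatnonzero(possibly_defective)

rng = np.random.default_rng(2)
n, T = 100, 60                          # items and tests (illustrative sizes)
defective = np.zeros(n, dtype=bool)
defective[[7, 42]] = True               # 2-sparse defective set
A = rng.random((T, n)) < 0.1            # Bernoulli(0.1) pooling matrix
outcomes = (A.astype(int) @ defective.astype(int)) > 0  # pool positive iff it hits a defective
decoded = comp_decode(A, outcomes)
```

In the noiseless setting COMP never misses a true defective; false positives are items that, by chance, never landed in a negative pool, and their expected number shrinks geometrically with the number of tests.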
1403.3109
|
1950144410
|
We formulate sparse support recovery as a salient set identification problem and use information-theoretic analyses to characterize the recovery performance and sample complexity. We consider a very general model where we are not restricted to linear models or specific distributions. We state non-asymptotic bounds on recovery probability and a tight mutual information formula for sample complexity. We evaluate our bounds for applications such as sparse linear regression and explicitly characterize effects of correlation or noisy features on recovery performance. We show improvements upon previous work and identify gaps between the performance of recovery algorithms and fundamental information.
|
Furthermore, prior work relied heavily on the design of sampling matrices with special structures such as Gaussian ensembles and RIP matrices, which is a key difference from the setting we consider herein, since for our purposes we do not always have the freedom to design the matrix @math . We do not make explicit assumptions about the structure of the sensing matrix, such as the restricted isometry property @cite_10 or incoherence properties @cite_32 , or about the distribution of the matrix elements, such as sub-Gaussianity. Also, the existing IT bounds, which are largely based on Gaussian ensembles, are limited to the linear CS model, and hence are not suitable for the non-linear models we consider herein.
|
{
"cite_N": [
"@cite_10",
"@cite_32"
],
"mid": [
"2380408902",
"2115447612"
],
"abstract": [
"It is now well-known that one can reconstruct sparse or compressible signals accurately from a very limited number of measurements, possibly contaminated with noise. This technique, known as \"compressed sensing\" or \"compressive sampling\", relies on properties of the sensing matrix such as the restricted isometry property. In this note, we establish new results about the accuracy of the reconstruction from undersampled measurements which improve on earlier estimates, and have the advantage of being more elegant.",
"We consider the fundamental problem of estimating the mean of a vector y=Xβ+z, where X is an n×p design matrix in which one can have far more variables than observations, and z is a stochastic error term; the so-called “p>n” setup. When β is sparse, or, more generally, when there is a sparse subset of covariates providing a close approximation to the unknown mean vector, we ask whether or not it is possible to accurately estimate Xβ using a computationally tractable algorithm. We show that, in a surprisingly wide range of situations, the lasso happens to nearly select the best subset of variables. Quantitatively speaking, we prove that solving a simple quadratic program achieves a squared error within a logarithmic factor of the ideal mean squared error that one would achieve with an oracle supplying perfect information about which variables should and should not be included in the model. Interestingly, our results describe the average performance of the lasso; that is, the performance one can expect in a vast majority of cases where Xβ is a sparse or nearly sparse superposition of variables, but not in all cases. Our results are nonasymptotic and widely applicable, since they simply require that pairs of predictor variables are not too collinear."
]
}
|
1403.3109
|
1950144410
|
We formulate sparse support recovery as a salient set identification problem and use information-theoretic analyses to characterize the recovery performance and sample complexity. We consider a very general model where we are not restricted to linear models or specific distributions. We state non-asymptotic bounds on recovery probability and a tight mutual information formula for sample complexity. We evaluate our bounds for applications such as sparse linear regression and explicitly characterize effects of correlation or noisy features on recovery performance. We show improvements upon previous work and identify gaps between the performance of recovery algorithms and fundamental information.
|
Information-theoretic tight error bounds: Through our analysis of the ML decoder, we obtain a tight upper bound on the probability of error of support recovery, in addition to necessary and sufficient conditions on the sample complexity. We compute this upper bound explicitly for popular problems such as sparse linear regression and its variants. We compare the information-theoretic bound to the performance of practical algorithms used to solve the sparse recovery problem, such as lasso @cite_32 @cite_6 or orthogonal matching pursuit (OMP) variants @cite_31 , and illustrate large gaps between their performance and our bounds. The presence of these gaps shows that there is still room to improve the performance of practical algorithms for solving support recovery problems.
|
{
"cite_N": [
"@cite_31",
"@cite_32",
"@cite_6"
],
"mid": [
"100695655",
"2115447612",
""
],
"abstract": [
"Many models for sparse regression typically assume that the covariates are known completely, and without noise. Particularly in high-dimensional applications, this is often not the case. Worse yet, even estimating statistics of the noise (the noise covariance) can be a central challenge. In this paper we develop a simple variant of orthogonal matching pursuit (OMP) for precisely this setting. We show that without knowledge of the noise covariance, our algorithm recovers the support, and we provide matching lower bounds that show that our algorithm performs at the minimax optimal rate. While simple, this is the first algorithm that (provably) recovers support in a noise-distribution-oblivious manner. When knowledge of the noise-covariance is available, our algorithm matches the best-known l2-recovery bounds available. We show that these too are min-max optimal. Along the way, we also obtain improved performance guarantees for OMP for the standard sparse regression problem with Gaussian noise.",
"We consider the fundamental problem of estimating the mean of a vector y=Xβ+z, where X is an n×p design matrix in which one can have far more variables than observations, and z is a stochastic error term; the so-called “p>n” setup. When β is sparse, or, more generally, when there is a sparse subset of covariates providing a close approximation to the unknown mean vector, we ask whether or not it is possible to accurately estimate Xβ using a computationally tractable algorithm. We show that, in a surprisingly wide range of situations, the lasso happens to nearly select the best subset of variables. Quantitatively speaking, we prove that solving a simple quadratic program achieves a squared error within a logarithmic factor of the ideal mean squared error that one would achieve with an oracle supplying perfect information about which variables should and should not be included in the model. Interestingly, our results describe the average performance of the lasso; that is, the performance one can expect in a vast majority of cases where Xβ is a sparse or nearly sparse superposition of variables, but not in all cases. Our results are nonasymptotic and widely applicable, since they simply require that pairs of predictor variables are not too collinear.",
""
]
}
|
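As a concrete reference point for the lasso baseline discussed in the record above, here is a minimal proximal-gradient (ISTA) sketch of lasso support recovery on synthetic sparse linear regression data. Dimensions, seed, regularization weight and support threshold are all illustrative choices of mine, not values from the cited works:

```python
import numpy as np

def ista_lasso(X, y, lam, iters=500):
    """ISTA (iterative soft-thresholding) for the lasso:
    minimize 0.5*||y - X b||^2 + lam*||b||_1."""
    step = 1.0 / np.linalg.norm(X, 2) ** 2   # 1/L, L = spectral norm squared
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        z = b - step * (X.T @ (X @ b - y))   # gradient step on the smooth part
        b = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return b

# Synthetic 2-sparse regression instance with mild additive noise.
rng = np.random.default_rng(0)
n, p = 100, 20
beta = np.zeros(p)
beta[3], beta[11] = 2.0, -1.5
X = rng.standard_normal((n, p))
y = X @ beta + 0.05 * rng.standard_normal(n)
b_hat = ista_lasso(X, y, lam=1.0)
support = set(np.flatnonzero(np.abs(b_hat) > 0.1))
```

With this much oversampling (n = 100, k = 2) the lasso recovers the true support; the information-theoretic bounds discussed above characterize how small n can be made before any algorithm must fail.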
1403.3305
|
2951754631
|
Recent advances in associative memory design through structured pattern sets and graph-based inference algorithms have allowed reliable learning and recall of an exponential number of patterns. Although these designs correct external errors in recall, they assume neurons that compute noiselessly, in contrast to the highly variable neurons in brain regions thought to operate associatively such as hippocampus and olfactory cortex. Here we consider associative memories with noisy internal computations and analytically characterize performance. As long as the internal noise level is below a specified threshold, the error probability in the recall phase can be made exceedingly small. More surprisingly, we show that internal noise actually improves the performance of the recall phase while the pattern retrieval capacity remains intact, i.e., the number of stored patterns does not reduce with noise (up to a threshold). Computational experiments lend additional support to our theoretical analysis. This work suggests a functional benefit to noisy neurons in biological neuronal networks.
|
Designing neural networks to learn a set of patterns and recall them later in the presence of noise has been an active topic of research for the past three decades. Inspired by Hebbian learning @cite_53 , Hopfield introduced an auto-associative neural mechanism of size @math with binary state neurons in which patterns are assumed to be binary vectors of length @math @cite_28 . The capacity of a Hopfield network under vanishing block error probability was later shown to be @math @cite_33 . With the hope of increasing the capacity of the Hopfield network, extensions to non-binary states were explored @cite_16 . In particular, @cite_34 investigated a multi-state complex-valued neural associative memory with estimated capacity less than @math ; @cite_35 showed that the capacity increases to @math with a prohibitively complicated learning rule. Lee proposed the Modified Gradient Descent learning Rule (MGDR) to overcome this drawback @cite_31 .
|
{
"cite_N": [
"@cite_35",
"@cite_33",
"@cite_28",
"@cite_53",
"@cite_34",
"@cite_31",
"@cite_16"
],
"mid": [
"",
"140581176",
"2128084896",
"2059148040",
"2126944419",
"",
"2101222204"
],
"abstract": [
"",
"In his seminal work of 1949, D. O. Hebb proposed that highly local neural activity in a network could result in emergent collective computational properties. This philosophy has attracted legions of adherents and a great many learning algorithms have evolved based on the general Hebbian prescription which may be paraphrased as follows: Strengthen connections between neurons whose activity is correlated.",
"Abstract Computational properties of use of biological organisms or to the construction of computers can emerge as collective properties of systems having a large number of simple equivalent components (or neurons). The physical meaning of content-addressable memory is described by an appropriate phase space flow of the state of a system. A model of such a system is given, based on aspects of neurobiology but readily adapted to integrated circuits. The collective properties of this model produce a content-addressable memory which correctly yields an entire memory from any subpart of sufficient size. The algorithm for the time evolution of the state of the system is based on asynchronous parallel processing. Additional emergent collective properties include some capacity for generalization, familiarity recognition, categorization, error correction, and time sequence retention. The collective properties are only weakly sensitive to details of the modeling or the failure of individual devices.",
"Contents: Introduction. The Problem and the Line of Attack. Summation and Learning in Perception. Field Theory and Equipotentiality. The First Stage of Perception: Growth of the Assembly. Perception of a Complex: The Phase Sequence. Development of the Learning Capacity. Higher and Lower Processes Related to Learning. Problems of Motivation: Pain and Hunger. The Problem of Motivational Drift. Emotional Disturbances. The Growth and Decline of Intelligence.",
"A model of a multivalued associative memory is presented. This memory has the form of a fully connected attractor neural network composed of multistate complex-valued neurons. Such a network is able to perform the task of storing and recalling gray-scale images. It is also shown that the complex-valued fully connected neural network may be considered as a generalization of a Hopfield network containing real-valued neurons. A computational energy function is introduced and evaluated in order to prove network stability for asynchronous dynamics. Storage capacity as related to the number of accessible neuron states is also estimated.",
"",
"We discuss the long term maintenance of acquired memory in synaptic connections of a perpetually learning electronic device. This is effected by ascribing to each synapse a finite number of stable states in which it can be maintained for indefinitely long periods. Learning uncorrelated stimuli is expressed as a stochastic process produced by the neural activities on the synapses. In several interesting cases the stochastic process can be analyzed in detail, leading to a clarification of the performance of the network, as an associative memory, during the process of uninterrupted learning. The stochastic nature of the process and the existence of an asymptotic distribution for the synaptic values in the network imply generically that the memory is a palimpsest but capacity is as low as log N for a network of N neurons. The only way we find for avoiding this tight constraint is to allow the parameters governing the learning process (the coding level of the stimuli; the transition probabilities for potentiation and depression and the number of stable synaptic levels) to depend on the number of neurons. It is shown that a network with synapses that have two stable states can dynamically learn with optimal storage efficiency, be a palimpsest, and maintain its (associative) memory for an indefinitely long time provided the coding level is low and depression is equilibrated against potentiation. We suggest that an option so easily implementable in material devices would not have been overlooked by biology. Finally we discuss the stochastic learning on synapses with variable number of stable synaptic states."
]
}
|
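The Hopfield model and its sub-linear capacity limit discussed in the record above can be demonstrated in a few lines: Hebbian outer-product learning plus sign-threshold recall dynamics. Network size, pattern count and noise level below are illustrative values chosen well inside the classical capacity regime:

```python
import numpy as np

def hebbian_weights(patterns):
    """Hebbian outer-product learning rule; no self-connections."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, steps=20):
    """Synchronous recall: repeatedly threshold the weighted input
    until the state stops changing."""
    for _ in range(steps):
        nxt = np.sign(W @ state)
        nxt[nxt == 0] = 1
        if np.array_equal(nxt, state):
            break
        state = nxt
    return state

rng = np.random.default_rng(1)
n, m = 200, 5                            # n neurons, m stored patterns (m << n / (4 log n))
patterns = rng.choice([-1, 1], size=(m, n))
W = hebbian_weights(patterns)

# Flip 10% of the bits of pattern 0 and let the network clean it up.
noisy = patterns[0].copy()
flip = rng.choice(n, size=n // 10, replace=False)
noisy[flip] *= -1
out = recall(W, noisy)
```

This external-noise cleanup is exactly the recall task the paper studies; the paper's point is that recall can still succeed when the update rule itself (not just the query) is noisy.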
1403.3305
|
2951754631
|
Recent advances in associative memory design through structured pattern sets and graph-based inference algorithms have allowed reliable learning and recall of an exponential number of patterns. Although these designs correct external errors in recall, they assume neurons that compute noiselessly, in contrast to the highly variable neurons in brain regions thought to operate associatively such as hippocampus and olfactory cortex. Here we consider associative memories with noisy internal computations and analytically characterize performance. As long as the internal noise level is below a specified threshold, the error probability in the recall phase can be made exceedingly small. More surprisingly, we show that internal noise actually improves the performance of the recall phase while the pattern retrieval capacity remains intact, i.e., the number of stored patterns does not reduce with noise (up to a threshold). Computational experiments lend additional support to our theoretical analysis. This work suggests a functional benefit to noisy neurons in biological neuronal networks.
|
The basic memory architecture, learning rule, and recall algorithm used herein are from @cite_37 , which also achieves exponential capacity by capturing internal redundancy: the patterns are divided into smaller clusters, with each subpattern satisfying a set of linear constraints. The problem of learning linear constraints with neural networks was considered in @cite_30 , but without sparsity requirements. This has connections to compressed sensing @cite_54 ; typical compressed sensing decoding algorithms are too complicated to be implemented by neural networks, but some have suggested the biological plausibility of message-passing algorithms @cite_52 .
|
{
"cite_N": [
"@cite_30",
"@cite_37",
"@cite_54",
"@cite_52"
],
"mid": [
"",
"2101055774",
"2129638195",
"2112239465"
],
"abstract": [
"",
"The task of a neural associative memory is to retrieve a set of previously memorized patterns from their noisy versions by using a network of neurons. Hence, an ideal network should be able to 1) gradually learn a set of patterns, 2) retrieve the correct pattern from noisy queries and 3) maximize the number of memorized patterns while maintaining the reliability in responding to queries. We show that by considering the inherent redundancy in the memorized patterns, one can obtain all the mentioned properties at once. This is in sharp contrast with previous work that could only improve one or two aspects at the expense of the others. More specifically, we devise an iterative algorithm that learns the redundancy among the patterns. The resulting network has a retrieval capacity that is exponential in the size of the network. Lastly, by considering the local structures of the network, the asymptotic error correction performance can be made linear in the size of the network.",
"Suppose we are given a vector f in a class F ⊆ ℝ^N, e.g., a class of digital signals or digital images. How many linear measurements do we need to make about f to be able to recover f to within precision ε in the Euclidean (ℓ2) metric? This paper shows that if the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct f to within very high accuracy from a small number of random measurements by solving a simple linear program. More precisely, suppose that the nth largest entry of the vector |f| (or of its coefficients in a fixed basis) obeys |f|_(n) ≤ R·n^(-1/p), where R>0 and p>0. Suppose that we take measurements y_k = ⟨f, X_k⟩, k=1,...,K, where the X_k are N-dimensional Gaussian vectors with independent standard normal entries. Then for each f obeying the decay estimate above for some 0<p<1 and with overwhelming probability, our reconstruction f#, defined as the solution to the constraints y_k = ⟨f#, X_k⟩ with minimal ℓ1 norm, obeys ‖f - f#‖_ℓ2 ≤ C_p·R·(K/log N)^(-r), r = 1/p - 1/2. There is a sense in which this result is optimal; it is generally impossible to obtain a higher accuracy from any set of K measurements whatsoever. The methodology extends to various other random measurement ensembles; for example, we show that similar results hold if one observes a few randomly sampled Fourier coefficients of f. In fact, the results are quite general and require only two hypotheses on the measurement ensemble which are detailed",
"Many functional descriptions of spiking neurons assume a cascade structure where inputs are passed through an initial linear filtering stage that produces a low-dimensional signal that drives subsequent nonlinear stages. This paper presents a novel and systematic parameter estimation procedure for such models and applies the method to two neural estimation problems: (i) compressed-sensing based neural mapping from multi-neuron excitation, and (ii) estimation of neural receptive fields in sensory neurons. The proposed estimation algorithm models the neurons via a graphical model and then estimates the parameters in the model using a recently-developed generalized approximate message passing (GAMP) method. The GAMP method is based on Gaussian approximations of loopy belief propagation. In the neural connectivity problem, the GAMP-based method is shown to be computational efficient, provides a more exact modeling of the sparsity, can incorporate nonlinearities in the output and significantly outperforms previous compressed-sensing methods. For the receptive field estimation, the GAMP method can also exploit inherent structured sparsity in the linear weights. The method is validated on estimation of linear nonlinear Poisson (LNP) cascade models for receptive fields of salamander retinal ganglion cells."
]
}
|
1403.3305
|
2951754631
|
Recent advances in associative memory design through structured pattern sets and graph-based inference algorithms have allowed reliable learning and recall of an exponential number of patterns. Although these designs correct external errors in recall, they assume neurons that compute noiselessly, in contrast to the highly variable neurons in brain regions thought to operate associatively such as hippocampus and olfactory cortex. Here we consider associative memories with noisy internal computations and analytically characterize performance. As long as the internal noise level is below a specified threshold, the error probability in the recall phase can be made exceedingly small. More surprisingly, we show that internal noise actually improves the performance of the recall phase while the pattern retrieval capacity remains intact, i.e., the number of stored patterns does not reduce with noise (up to a threshold). Computational experiments lend additional support to our theoretical analysis. This work suggests a functional benefit to noisy neurons in biological neuronal networks.
|
Building on the idea of structured pattern sets @cite_11 , the basic associative memory model used herein @cite_37 relies on the fact that all patterns to be learned lie in a low-dimensional subspace. Learning features of a low-dimensional space is very similar to autoencoders @cite_36 . The model also has similarities to Deep Belief Networks (DBNs) and in particular Convolutional Neural Networks @cite_27 , albeit with different objectives. DBNs are made of several consecutive stages, similar to overlapping clusters in our model, where each stage extracts some features and feeds them to the following stage. The output of the last stage is then used for pattern classification. In contrast to DBNs, our associative memory model is not classifying patterns but rather recalling patterns from noisy versions. Also, overlapping clusters can operate in parallel to save time in information diffusion over a staged architecture.
|
{
"cite_N": [
"@cite_36",
"@cite_27",
"@cite_37",
"@cite_11"
],
"mid": [
"2025768430",
"2147860648",
"2101055774",
"2121160181"
],
"abstract": [
"Previous work has shown that the difficulties in learning deep generative or discriminative models can be overcome by an initial unsupervised learning step that maps inputs to useful intermediate representations. We introduce and motivate a new training principle for unsupervised learning of a representation based on the idea of making the learned representations robust to partial corruption of the input pattern. This approach can be used to train autoencoders, and these denoising autoencoders can be stacked to initialize deep architectures. The algorithm can be motivated from a manifold learning and information theoretic perspective or from a generative model perspective. Comparative experiments clearly show the surprising advantage of corrupting the input of autoencoders on a pattern classification benchmark suite.",
"Convolutional neural networks (CNNs) have been successfully applied to many tasks such as digit and object recognition. Using convolutional (tied) weights significantly reduces the number of parameters that have to be learned, and also allows translational invariance to be hard-coded into the architecture. In this paper, we consider the problem of learning invariances, rather than relying on hard-coding. We propose tiled convolution neural networks (Tiled CNNs), which use a regular \"tiled\" pattern of tied weights that does not require that adjacent hidden units share identical weights, but instead requires only that hidden units k steps away from each other to have tied weights. By pooling over neighboring units, this architecture is able to learn complex invariances (such as scale and rotational invariance) beyond translational invariance. Further, it also enjoys much of CNNs' advantage of having a relatively small number of learned parameters (such as ease of learning and greater scalability). We provide an efficient learning algorithm for Tiled CNNs based on Topographic ICA, and show that learning complex invariant features allows us to achieve highly competitive results for both the NORB and CIFAR-10 datasets.",
"The task of a neural associative memory is to retrieve a set of previously memorized patterns from their noisy versions by using a network of neurons. Hence, an ideal network should be able to 1) gradually learn a set of patterns, 2) retrieve the correct pattern from noisy queries and 3) maximize the number of memorized patterns while maintaining the reliability in responding to queries. We show that by considering the inherent redundancy in the memorized patterns, one can obtain all the mentioned properties at once. This is in sharp contrast with previous work that could only improve one or two aspects at the expense of the others. More specifically, we devise an iterative algorithm that learns the redundancy among the patterns. The resulting network has a retrieval capacity that is exponential in the size of the network. Lastly, by considering the local structures of the network, the asymptotic error correction performance can be made linear in the size of the network.",
"Coded recurrent neural networks with three levels of sparsity are introduced. The first level is related to the size of messages that are much smaller than the number of available neurons. The second one is provided by a particular coding rule, acting as a local constraint in the neural activity. The third one is a characteristic of the low final connection density of the network after the learning phase. Though the proposed network is very simple since it is based on binary neurons and binary connections, it is able to learn a large number of messages and recall them, even in presence of strong erasures. The performance of the network is assessed as a classifier and as an associative memory."
]
}
|
1403.3305
|
2951754631
|
Recent advances in associative memory design through structured pattern sets and graph-based inference algorithms have allowed reliable learning and recall of an exponential number of patterns. Although these designs correct external errors in recall, they assume neurons that compute noiselessly, in contrast to the highly variable neurons in brain regions thought to operate associatively such as hippocampus and olfactory cortex. Here we consider associative memories with noisy internal computations and analytically characterize performance. As long as the internal noise level is below a specified threshold, the error probability in the recall phase can be made exceedingly small. More surprisingly, we show that internal noise actually improves the performance of the recall phase while the pattern retrieval capacity remains intact, i.e., the number of stored patterns does not reduce with noise (up to a threshold). Computational experiments lend additional support to our theoretical analysis. This work suggests a functional benefit to noisy neurons in biological neuronal networks.
|
In this work, we reconsider the neural network model of @cite_37 , but introduce internal computation noise consistent with biology. Note that the sparsity of the model architecture is also consistent with biology @cite_1 . We find that there is actually a functional benefit to internal noise.
|
{
"cite_N": [
"@cite_37",
"@cite_1"
],
"mid": [
"2101055774",
"1985392528"
],
"abstract": [
"The task of a neural associative memory is to retrieve a set of previously memorized patterns from their noisy versions by using a network of neurons. Hence, an ideal network should be able to 1) gradually learn a set of patterns, 2) retrieve the correct pattern from noisy queries and 3) maximize the number of memorized patterns while maintaining the reliability in responding to queries. We show that by considering the inherent redundancy in the memorized patterns, one can obtain all the mentioned properties at once. This is in sharp contrast with previous work that could only improve one or two aspects at the expense of the others. More specifically, we devise an iterative algorithm that learns the redundancy among the patterns. The resulting network has a retrieval capacity that is exponential in the size of the network. Lastly, by considering the local structures of the network, the asymptotic error correction performance can be made linear in the size of the network.",
"How different is local cortical circuitry from a random network? To answer this question, we probed synaptic connections with several hundred simultaneous quadruple whole-cell recordings from layer 5 pyramidal neurons in the rat visual cortex. Analysis of this dataset revealed several nonrandom features in synaptic connectivity. We confirmed previous reports that bidirectional connections are more common than expected in a random network. We found that several highly clustered three-neuron connectivity patterns are overrepresented, suggesting that connections tend to cluster together. We also analyzed synaptic connection strength as defined by the peak excitatory postsynaptic potential amplitude. We found that the distribution of synaptic connection strength differs significantly from the Poisson distribution and can be fitted by a lognormal distribution. Such a distribution has a heavier tail and implies that synaptic weight is concentrated among few synaptic connections. In addition, the strengths of synaptic connections sharing pre- or postsynaptic neurons are correlated, implying that strong connections are even more clustered than the weak ones. Therefore, the local cortical network structure can be viewed as a skeleton of stronger connections in a sea of weaker ones. Such a skeleton is likely to play an important role in network dynamics and should be investigated further."
]
}
|
1403.3305
|
2951754631
|
Recent advances in associative memory design through structured pattern sets and graph-based inference algorithms have allowed reliable learning and recall of an exponential number of patterns. Although these designs correct external errors in recall, they assume neurons that compute noiselessly, in contrast to the highly variable neurons in brain regions thought to operate associatively such as hippocampus and olfactory cortex. Here we consider associative memories with noisy internal computations and analytically characterize performance. As long as the internal noise level is below a specified threshold, the error probability in the recall phase can be made exceedingly small. More surprisingly, we show that internal noise actually improves the performance of the recall phase while the pattern retrieval capacity remains intact, i.e., the number of stored patterns does not reduce with noise (up to a threshold). Computational experiments lend additional support to our theoretical analysis. This work suggests a functional benefit to noisy neurons in biological neuronal networks.
|
Reliably storing information in memory systems constructed completely from unreliable components is a classical problem in fault-tolerant computing @cite_47 @cite_40 @cite_7 , where typical models have used random access architectures with sequential correcting networks. Although direct comparison is difficult since notions of circuit complexity are slightly different, our work also demonstrates that associative memory architectures can store information reliably despite being constructed from unreliable components.
|
{
"cite_N": [
"@cite_40",
"@cite_47",
"@cite_7"
],
"mid": [
"",
"2017795258",
"2140253920"
],
"abstract": [
"",
"This is the first of two papers which consider the theoretical capabilities of computing systems designed from unreliable components. This paper discusses the capabilities of memories; the second paper discusses the capabilities of entire computing systems. Both present existence theorems analogous to the existence theorems of information theory. The fundamental result of information theory is that communication channels have a capacity, C, such that for all information rates less than C, arbitrarily reliable communication can be achieved. In analogy with this result, it is shown that each type of memory has an information storage capacity, C, such that for all memory redundancies greater than 1 C arbitrarily reliable information storage can be achieved. Since memory components malfunction in many different ways, two representative models for component malfunctions are considered. The first is based on the assumption that malfunctions of a particular component are statistically independent from one use to another. The second is based on the assumption that components fail permanently but that bad components are periodically replaced with good ones. In both cases, malfunctions in different components are assumed to be independent. For both models it is shown that there exist memories, constructed entirely from unreliable components of the assumed type, which have nonzero information storage capacities.",
"Departing from traditional communication theory where decoding algorithms are assumed to perform without error, a system where noise perturbs both computational devices and communication channels is considered here. This paper studies limits in processing noisy signals with noisy circuits by investigating the effect of noise on standard iterative decoders for low-density parity-check (LDPC) codes. Concentration of decoding performance around its average is shown to hold when noise is introduced into message-passing and local computation. Density evolution equations for simple faulty iterative decoders are derived. In one model, computing nonlinear estimation thresholds shows that performance degrades smoothly as decoder noise increases, but arbitrarily small probability of error is not achievable. Probability of error may be driven to zero in another system model; the decoding threshold again decreases smoothly with decoder noise. As an application of the methods developed, an achievability result for reliable memory systems constructed from unreliable components is provided."
]
}
|
1403.2639
|
780213831
|
In application domains that are regulated, software vendors must maintain traceability links between the regulatory items and the code base implementing them. In this paper, we present a traceability approach based on the intuition that the regulatory documents and the user-interface of the corresponding software applications are very close. First, they use the same terminology. Second, most important regulatory pieces of information appear in the graphical user-interface because the end-users in those application domains care about the regulation (by construction). We evaluate our approach in the domain of green building. The evaluation involves a domain expert, lead architect of a commercial product within this area. The evaluation shows that the recovered traceability links are accurate.
|
 @cite_5 introduced automatic traceability based on Information Retrieval (IR) methods, comparing a Vector Space Model and a probabilistic approach @cite_5 . Both methods produced promising results. Marcus and Maletic @cite_6 explored Latent Semantic Indexing methods, obtaining comparable results with less preprocessing. The effectiveness of IR techniques in traceability has been further examined by De Lucia @cite_12 and @cite_14 .
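To illustrate the vector-space flavor of these IR methods, the sketch below ranks candidate artifacts against a requirement by tf-idf cosine similarity. The corpus, artifact names, and weighting scheme are illustrative only, not taken from the cited tools:

```python
import math
import re
from collections import Counter

def tokenize(text):
    # Lowercase and split on non-alphanumeric characters; real tools
    # would also split identifiers on camel case and stem terms.
    return re.findall(r"[a-z0-9]+", text.lower())

def tfidf_vectors(docs):
    # docs: {name: raw text}; returns {name: {term: tf-idf weight}}.
    tokens = {name: tokenize(text) for name, text in docs.items()}
    n = len(tokens)
    df = Counter()
    for toks in tokens.values():
        df.update(set(toks))  # document frequency of each term
    return {
        name: {t: tf * math.log(1 + n / df[t]) for t, tf in Counter(toks).items()}
        for name, toks in tokens.items()
    }

def cosine(a, b):
    dot = sum(w * b[t] for t, w in a.items() if t in b)
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Toy traceability query: one requirement against two code artifacts.
docs = {
    "REQ-12": "report the building energy consumption per month",
    "EnergyReport.java": "compute monthly energy consumption report for a building",
    "LoginPage.java": "validate user name and password on login",
}
vecs = tfidf_vectors(docs)
ranking = sorted(
    (name for name in docs if name != "REQ-12"),
    key=lambda name: cosine(vecs["REQ-12"], vecs[name]),
    reverse=True,
)
```

Candidate links scoring above a similarity threshold would then be presented to the analyst for confirmation, which is the workflow the tools above support.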
|
{
"cite_N": [
"@cite_5",
"@cite_14",
"@cite_12",
"@cite_6"
],
"mid": [
"2128581098",
"2118202700",
"2138378644",
"2163960678"
],
"abstract": [
"Software system documentation is almost always expressed informally in natural language and free text. Examples include requirement specifications, design documents, manual pages, system development journals, error logs, and related maintenance reports. We propose a method based on information retrieval to recover traceability links between source code and free text documents. A premise of our work is that programmers use meaningful names for program items, such as functions, variables, types, classes, and methods. We believe that the application-domain knowledge that programmers process when writing the code is often captured by the mnemonics for identifiers; therefore, the analysis of these mnemonics can help to associate high-level concepts with program concepts and vice-versa. We apply both a probabilistic and a vector space information retrieval model in two case studies to trace C++ source code onto manual pages and Java code to functional requirements. We compare the results of applying the two models, discuss the benefits and limitations, and describe directions for improvements.",
"This paper addresses the issues related to improving the overall quality of the dynamic candidate link generation for the requirements tracing process for verification and validation and independent verification and validation analysts. The contribution of the paper is four-fold: we define goals for a tracing tool based on analyst responsibilities in the tracing process, we introduce several new measures for validating that the goals have been satisfied, we implement analyst feedback in the tracing process, and we present a prototype tool that we built, RETRO (REquirements TRacing On-target), to address these goals. We also present the results of a study used to assess RETRO's support of goals and goal elements that can be measured objectively.",
"The main drawback of existing software artifact management systems is the lack of automatic or semi-automatic traceability link generation and maintenance. We have improved an artifact management system with a traceability recovery tool based on Latent Semantic Indexing (LSI), an information retrieval technique. We have assessed LSI to identify strengths and limitations of using information retrieval techniques for traceability recovery and devised the need for an incremental approach. The method and the tool have been evaluated during the development of seventeen software projects involving about 150 students. We observed that although tools based on information retrieval provide a useful support for the identification of traceability links during software development, they are still far to support a complete semi-automatic recovery of all links. The results of our experience have also shown that such tools can help to identify quality problems in the textual description of traced artifacts.",
"An information retrieval technique, latent semantic indexing, is used to automatically identify traceability links from system documentation to program source code. The results of two experiments to identify links in existing software systems (i.e., the LEDA library, and Albergate) are presented. These results are compared with other similar type experimental results of traceability link identification using different types of information retrieval techniques. The method presented proves to give good results by comparison and additionally it is a low cost, highly flexible method to apply with regards to preprocessing and or parsing of the source code and documentation."
]
}
|
1403.2639
|
780213831
|
In application domains that are regulated, software vendors must maintain traceability links between the regulatory items and the code base implementing them. In this paper, we present a traceability approach based on the intuition that the regulatory documents and the user-interface of the corresponding software applications are very close. First, they use the same terminology. Second, most important regulatory pieces of information appear in the graphical user-interface because the end-users in those application domains care about the regulation (by construction). We evaluate our approach in the domain of green building. The evaluation involves a domain expert, lead architect of a commercial product within this area. The evaluation shows that the recovered traceability links are accurate.
|
One issue encountered by the previous methods relates to terminology mismatch. Code artifacts do not always use the same terminology as in the requirements. Cleland- @cite_8 focused on machine learning approaches including term mining and training using manually extracted tracing matrices to cope with this issue and improve the precision of automated trace-retrieval methods. Our work proposes another approach to address this terminology issue by introducing a novel process based on user interface labels.
|
{
"cite_N": [
"@cite_8"
],
"mid": [
"2122987719"
],
"abstract": [
"Regulatory standards, designed to protect the safety, security, and privacy of the public, govern numerous areas of software intensive systems. Project personnel must therefore demonstrate that an as-built system meets all relevant regulatory codes. Current methods for demonstrating compliance rely either on after-the-fact audits, which can lead to significant refactoring when regulations are not met, or else require analysts to construct and use traceability matrices to demonstrate compliance. Manual tracing can be prohibitively time-consuming; however automated trace retrieval methods are not very effective due to the vocabulary mismatches that often occur between regulatory codes and product level requirements. This paper introduces and evaluates two machine-learning methods, designed to improve the quality of traces generated between regulatory codes and product level requirements. The first approach uses manually created traceability matrices to train a trace classifier, while the second approach uses web-mining techniques to reconstruct the original trace query. The techniques were evaluated against security regulations from the USA government's Health Insurance Privacy and Portability Act (HIPAA) traced against ten healthcare related requirements specifications. Results demonstrated improvements for the subset of HIPAA regulations that exhibited high fan-out behavior across the requirements datasets."
]
}
|
1403.2639
|
780213831
|
In application domains that are regulated, software vendors must maintain traceability links between the regulatory items and the code base implementing them. In this paper, we present a traceability approach based on the intuition that the regulatory documents and the user-interface of the corresponding software applications are very close. First, they use the same terminology. Second, most important regulatory pieces of information appear in the graphical user-interface because the end-users in those application domains care about the regulation (by construction). We evaluate our approach in the domain of green building. The evaluation involves a domain expert, lead architect of a commercial product within this area. The evaluation shows that the recovered traceability links are accurate.
|
De Lucia @cite_12 developed ADAMS, a traceability link recovery tool based on latent semantic indexing. Leuser and Ott use a tool called TraceTool @cite_2 implementing two IR algorithms (tf-idf and LSI) to evaluate various optimizations on large specifications written in German. Traceclipse, an Eclipse plug-in, generalized this approach by providing a generic platform for integrating traceability link recovery into an IDE @cite_7 . These works do not explore the terminological richness of user-interface labels.
|
{
"cite_N": [
"@cite_7",
"@cite_12",
"@cite_2"
],
"mid": [
"2031321083",
"2138378644",
""
],
"abstract": [
"Traceability link recovery is an active research area in software engineering with a number of open research questions and challenges, due to the substantial costs and challenges associated with software maintenance. We propose Traceclipse, an Eclipse plug-in that integrates some similar characteristics of traceability link recovery techniques in one easy-to-use suite. The tool enables software developers to specify, view, and manipulate traceability links within Eclipse and it provides an API through which recovery techniques may be added, specified, and run within an integrated development environment. The paper also presents initial case studies aimed at evaluating the proposed plug-in.",
"The main drawback of existing software artifact management systems is the lack of automatic or semi-automatic traceability link generation and maintenance. We have improved an artifact management system with a traceability recovery tool based on Latent Semantic Indexing (LSI), an information retrieval technique. We have assessed LSI to identify strengths and limitations of using information retrieval techniques for traceability recovery and devised the need for an incremental approach. The method and the tool have been evaluated during the development of seventeen software projects involving about 150 students. We observed that although tools based on information retrieval provide a useful support for the identification of traceability links during software development, they are still far to support a complete semi-automatic recovery of all links. The results of our experience have also shown that such tools can help to identify quality problems in the textual description of traced artifacts.",
""
]
}
|
1403.2716
|
1841923418
|
Abstract—Nature has always been an inspiration to researchers with its diversity and robustness of its systems, and Artificial Immune Systems are one of them. Many algorithms were inspired by ongoing discoveries of biological immune systems techniques and approaches. One of the basic and most common approaches is the Negative Selection Approach, which is simple and easy to implement. It was applied in many fields, but mostly in anomaly detection for the similarity of its basic idea. In this paper, a review is given on the application of the negative selection approach in network security, specifically the intrusion detection system. As the work in this field is limited, we need to understand what the challenges of this approach are. Recommendations are given by the end of the paper for future work. I. INTRODUCTION Networks are more vulnerable by time to intrusions and attacks, from inside and outside. Cyber-attacks are making news headlines worldwide, as threats to networks are getting bolder and more sophisticated. Reports of 2011 and 2012 are showing an increase in network attacks, with Denial of Service (DoS) and targeted attacks having a big share in it. As reported by many web sites like [1] [2] [3], figures 1 and 2 show motivations behind attacks and targeted customer types respectively. Internal threats and Advanced Persistent Threats (APT) are the biggest threats to a network, as they are carefully constructed and dangerous, due to internal users’ privileges to access network resources. Figure 3 shows internal network security concerns. With this in mind, and the increasing sophistication of attacks, new approaches to protect the network resources are always under investigation, and the one that is concerned with inside and outside threats is the Intrusion Detection System. Intrusion detection systems [4] [5] [6] have been around for quite some time, as a successful security system.
An Intrusion Detection System (IDS) is a system that defines and detects possible threats within a computer or a network, by gathering and analysing information from the surrounding environment.
|
Gonzalez's 2002 PhD thesis @cite_18 investigated the NSA and proposed new detector-generation algorithms for different representation schemes. In 2003, @cite_19 reviewed AIS techniques and the research carried out on AIS work and applications from 1999 to 2003. Timmis discussed in @cite_0 the challenges that AIS applications may face. He identified the following challenges for AISs: (1) the need for more interaction with immunologists and mathematicians to create useful models through experimentation, (2) the need for a theoretical and formal basis to understand the nature of AIS and its best-fitting applications, and (3) the need for more interaction between immune systems and other systems, with more attention paid to integration for better functioning.
|
{
"cite_N": [
"@cite_0",
"@cite_19",
"@cite_18"
],
"mid": [
"2976610980",
"",
"193291750"
],
"abstract": [
"In this position paper, we argue that the field of Artificial Immune Systems (AIS) has reached an impasse. For many years, immune inspired algorithms, whilst having some degree of success, have been limited by the lack of theoretical advances, the adoption of a limited immune inspired approach and the limited application of AIS to hard problems.",
"",
"The main goal of this research is to examine and to improve the anomaly detection function of artificial immune systems, specifically the negative selection algorithm and other self non-self recognition techniques. This research investigates different representation schemes for the negative selection and proposes new detector generation algorithms suitable for such representations. Accordingly, different representations are explored: hyper-rectangles (which can be interpreted as rules), fuzzy rules, and hyper-spheres. Four different detector generation algorithms are proposed: Negative Selection with Detection Rules (NSDR, an evolutionary algorithm to generate hypercube detectors), Negative Selection with Fuzzy Detection Rules (NSFDR, an evolutionary algorithm to generate fuzzy-rule detectors), Real-valued Negative Selection (RNS, a heuristic algorithm to generate hyper-spherical detectors), and Randomized Real-valued Negative Selection (RRNS, an algorithm for generating hyper-spherical detectors based on Monte Carlo methods). Also, a hybrid immune learning algorithm, which combines RNS (or RRNS) and classification algorithms is developed. This algorithm allows the application of a supervised learning technique even when samples from only one class (normal) are available. Different experiments are performed with synthetic and real world data from different sources. The experimental results show that the proposed representations along with the proposed algorithms provide some advantages over the binary negative selection algorithm. The most relevant advantages include improved scalability, more expressiveness that allows the extraction of high-level domain knowledge, non-crisp distinction between normal and abnormal, and better performance in anomaly detection."
]
}
|
1403.2716
|
1841923418
|
Abstract—Nature has always been an inspiration to researchers with its diversity and robustness of its systems, and Artificial Immune Systems are one of them. Many algorithms were inspired by ongoing discoveries of biological immune systems techniques and approaches. One of the basic and most common approaches is the Negative Selection Approach, which is simple and easy to implement. It was applied in many fields, but mostly in anomaly detection for the similarity of its basic idea. In this paper, a review is given on the application of the negative selection approach in network security, specifically the intrusion detection system. As the work in this field is limited, we need to understand what the challenges of this approach are. Recommendations are given by the end of the paper for future work. I. INTRODUCTION Networks are more vulnerable by time to intrusions and attacks, from inside and outside. Cyber-attacks are making news headlines worldwide, as threats to networks are getting bolder and more sophisticated. Reports of 2011 and 2012 are showing an increase in network attacks, with Denial of Service (DoS) and targeted attacks having a big share in it. As reported by many web sites like [1] [2] [3], figures 1 and 2 show motivations behind attacks and targeted customer types respectively. Internal threats and Advanced Persistent Threats (APT) are the biggest threats to a network, as they are carefully constructed and dangerous, due to internal users’ privileges to access network resources. Figure 3 shows internal network security concerns. With this in mind, and the increasing sophistication of attacks, new approaches to protect the network resources are always under investigation, and the one that is concerned with inside and outside threats is the Intrusion Detection System. Intrusion detection systems [4] [5] [6] have been around for quite some time, as a successful security system.
An Intrusion Detection System (IDS) is a system that defines and detects possible threats within a computer or a network, by gathering and analysing information from the surrounding environment.
|
In @cite_39 , a review of different AIS approaches to intrusion detection systems was given, along with implementations of different algorithms and their results. The authors argue that most techniques used to build an IDS cannot cope with the dynamic and increasingly complex nature of computer systems and their security. In 2009, @cite_3 presented a literature review of recent work on malicious activity detection methods using AIS, and of the available platforms and research projects in this area. The authors of @cite_41 reviewed core CI methods and their application in intrusion detection. The most widely applied CI approaches in intrusion detection are: Artificial Neural Networks (ANN) @cite_20 @cite_15 , Fuzzy sets @cite_11 @cite_5 , Evolutionary Computation (EC) @cite_40 , AIS, and Swarm Intelligence (SI) @cite_29 . In 2010, multiple studies @cite_30 @cite_37 @cite_4 reviewed recent work and advances in AIS and their applications.
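Since the negative selection algorithm recurs throughout these surveys, a minimal real-valued sketch may help fix ideas. The dimension, matching radius, and sample values below are illustrative, not drawn from any cited system:

```python
import random

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def generate_detectors(self_samples, n_detectors, radius, dim, seed=0):
    # Censoring phase: random candidate detectors that match
    # (lie within `radius` of) any self sample are discarded.
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < n_detectors:
        cand = [rng.random() for _ in range(dim)]
        if all(euclidean(cand, s) > radius for s in self_samples):
            detectors.append(cand)
    return detectors

def is_nonself(sample, detectors, radius):
    # Monitoring phase: a sample matched by any detector is flagged.
    return any(euclidean(sample, d) <= radius for d in detectors)

# "Self" is a small cluster of normal observations in [0, 1]^2.
self_set = [[0.10, 0.12], [0.14, 0.09], [0.11, 0.15]]
detectors = generate_detectors(self_set, n_detectors=50, radius=0.15, dim=2)
```

By construction no detector covers the self samples, so normal observations are never flagged; anomalies are caught only where detector coverage reaches, which is why detector generation and coverage are the recurring research questions in the works above.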
|
{
"cite_N": [
"@cite_30",
"@cite_37",
"@cite_4",
"@cite_41",
"@cite_29",
"@cite_3",
"@cite_39",
"@cite_40",
"@cite_5",
"@cite_15",
"@cite_20",
"@cite_11"
],
"mid": [
"",
"2127470057",
"2082443476",
"2139669429",
"",
"2243197955",
"2084543849",
"1978970913",
"2096768134",
"2105443690",
"1550570921",
"1596097493"
],
"abstract": [
"",
"Artificial Immune Systems (AIS) are computational paradigms that belong to the computational intelligence family and are inspired by the biological immune system. During the past decade, they have attracted a lot of interest from researchers aiming to develop immune-based models and techniques to solve complex computational or engineering problems. This work presents a survey of existing AIS models and algorithms with a focus on the last five years.",
"The immune system is a remarkable information processing and self learning system that offers inspiration to build artificial immune system (AIS). The field of AIS has obtained a significant degree of success as a branch of Computational Intelligence since it emerged in the 1990s. This paper surveys the major works in the AIS field, in particular, it explores up-to-date advances in applied AIS during the last few years. This survey has revealed that recent research is centered on four major AIS algorithms: (1) negative selection algorithms; (2) artificial immune networks; (3) clonal selection algorithms; (4) Danger Theory and dendritic cell algorithms. However, other aspects of the biological immune system are motivating computer scientists and engineers to develop new models and problem solving methods. Though an extensive amount of AIS applications has been developed, the success of these applications is still limited by the lack of any exemplars that really stand out as killer AIS applications.",
"Intrusion detection based upon computational intelligence is currently attracting considerable interest from the research community. Characteristics of computational intelligence (CI) systems, such as adaptation, fault tolerance, high computational speed and error resilience in the face of noisy information, fit the requirements of building a good intrusion detection model. Here we want to provide an overview of the research progress in applying CI methods to the problem of intrusion detection. The scope of this review will encompass core methods of CI, including artificial neural networks, fuzzy systems, evolutionary computation, artificial immune systems, swarm intelligence, and soft computing. The research contributions in each field are systematically summarized and compared, allowing us to clearly define existing research challenges, and to highlight promising new research directions. The findings of this review should provide useful insights into the current IDS literature and be a good source for anyone who is interested in the application of CI approaches to IDSs or related fields.",
"",
"This paper presents a system for detecting intrusions when analyzing the network traffic payload looking for malware evidences. The system implements the detection algorithm as a Snort preprocessor component. Since they work together, a highly effective system against known attacks has been achieved (based on Snort rules) and a highly effective system against unknown threats (which was the main aim of the designed system). As the majority of such systems, the proposal consists of two phases: a training phase and a detection phase. During the training phase a statistical model of the legitimate network usage is created through Bloom Filters and N-grams techniques. Subsequently, the results obtained by analyzing a dataset of attacks are compared with such model. This will allow a set of rules to be developed which will be able to determine whether the packets payloads contain malware. In the detection phase, the traffic to analyze is compared with the model created in the training phase and the results obtained when applying rules. The performed experiments showed really satisfactory results, with 100 malware detection and just 0.15 false positives.",
"The use of artificial immune systems in intrusion detection is an appealing concept for two reasons. First, the human immune system provides the human body with a high level of protection from invading pathogens, in a robust, self-organised and distributed manner. Second, current techniques used in computer security are not able to cope with the dynamic and increasingly complex nature of computer systems and their security. It is hoped that biologically inspired approaches in this area, including the use of immune-based systems will be able to meet this challenge. Here we review the algorithms used, the development of the systems and the outcome of their implementation. We provide an introduction and analysis of the key developments within this field, in addition to making suggestions for future research.",
"From the Publisher: In this revised and significantly expanded second edition, distinguished scientist David B. Fogel presents the latest advances in both the theory and practice of evolutionary computation to help you keep pace with developments in this fast-changing field.. \"In-depth and updated, Evolutionary Computation shows you how to use simulated evolution to achieve machine intelligence. You will gain current insights into the history of evolutionary computation and the newest theories shaping research. Fogel carefully reviews the \"no free lunch theorem\" and discusses new theoretical findings that challenge some of the mathematical foundations of simulated evolution. This second edition also presents the latest game-playing techniques that combine evolutionary algorithms with neural networks, including their success in playing competitive checkers. Chapter by chapter, this comprehensive book highlights the relationship between learning and intelligence.. \"Evolutionary Computation features an unparalleled integration of history with state-of-the-art theory and practice for engineers, professors, and graduate students of evolutionary computation and computer science who need to keep up-to-date in this developing field.",
"Fuzzy Set Theory - And Its Applications, Third Edition is a textbook for courses in fuzzy set theory. It can also be used as an introduction to the subject. The character of a textbook is balanced with the dynamic nature of the research in the field by including many useful references to develop a deeper understanding among interested readers. The book updates the research agenda (which has witnessed profound and startling advances since its inception some 30 years ago) with chapters on possibility theory, fuzzy logic and approximate reasoning, expert systems, fuzzy control, fuzzy data analysis, decision making and fuzzy set models in operations research. All chapters have been updated. Exercises are included.",
"Artificial neural networks as a major soft-computing technology have been extensively studied and applied during the last three decades. Research on backpropagation training algorithms for multilayer perceptron networks has spurred development of other neural network training algorithms for other networks such as radial basis function, recurrent network, feedback network, and unsupervised Kohonen self-organizing network. These networks, especially the multilayer perceptron network with a backpropagation training algorithm, have gained recognition in research and applications in various scientific and engineering areas. In order to accelerate the training process and overcome data over-fitting, research has been conducted to improve the backpropagation algorithm. Further, artificial neural networks have been integrated with other advanced methods such as fuzzy logic and wavelet analysis, to enhance the ability of data interpretation and modeling and to avoid subjectivity in the operation of the training algorithm. In recent years, support vector machines have emerged as a set of high-performance supervised generalized linear classifiers in parallel with artificial neural networks. A review on development history of artificial neural networks is presented and the standard architectures and algorithms of artificial neural networks are described. Furthermore, advanced artificial neural networks will be introduced with support vector machines, and limitations of ANNs will be identified. The future of artificial neural network development in tandem with support vector machines will be discussed in conjunction with further applications to food science and engineering, soil and water relationship for crop management, and decision support for precision agriculture. Along with the network structures and training algorithms, the applications of artificial neural networks will be reviewed as well, especially in the fields of agricultural and biological engineering.",
"Various neural learning procedures have been proposed by different researchers in order to adapt suitable controllable parameters of neural network architectures. These can be from simple Hebbian procedures to complicated algorithms applied to individual neurons or assemblies in a neural structure. The paper presents an organized review of various learning techniques, classified according to basic characteristics such as chronology, applicability, functionality, stochasticity etc. Some of the learning procedures that have been used for the training of generic and specific neural structures, and will be reviewed are: Hebbian-like (Grossberg, Sejnowski, Sutton, Bienenstock, Oja & Karhunen, Sanger, , Hasselmo, Kosko, Cheung & Omidvar), Reinforcement learning, Min-max learning, Stochastic learning, Genetics-based learning, Artificial life-based learning. The various learning procedures will be critically compared, and future trends will be highlighted.",
"This book consists of selected papers written by the founder of fuzzy set theory, Lotfi A Zadeh. Since Zadeh is not only the founder of this field, but has also been the principal contributor to its development over the last 30 years, the papers contain virtually all the major ideas in fuzzy set theory, fuzzy logic, and fuzzy systems in their historical context. Many of the ideas presented in the papers are still open to further development. The book is thus an important resource for anyone interested in the areas of fuzzy set theory, fuzzy logic, and fuzzy systems, as well as their applications. Moreover, the book is also intended to play a useful role in higher education, as a rich source of supplementary reading in relevant courses and seminars.The book contains a bibliography of all papers published by Zadeh in the period 1949-1995. It also contains an introduction that traces the development of Zadeh's ideas pertaining to fuzzy sets, fuzzy logic, and fuzzy systems via his papers. The ideas range from his 1965 seminal idea of the concept of a fuzzy set to ideas reflecting his current interest in computing with words — a computing in which linguistic expressions are used in place of numbers.Places in the papers, where each idea is presented can easily be found by the reader via the Subject Index."
]
}
|
1403.2912
|
2152202694
|
We develop a new transmission scheme for additive white Gaussian noisy (AWGN) channels based on Fuchsian groups from rational quaternion algebras. The structure of the proposed Fuchsian codes is nonlinear and nonuniform, hence conventional decoding methods based on linearity and symmetry do not apply. Previously, only brute force decoding methods with complexity that is linear in the code size exist for general nonuniform codes. However, the properly discontinuous character of the action of the Fuchsian groups on the complex upper half-plane translates into decoding complexity that is logarithmic in the code size via a recently introduced point reduction algorithm.
|
Although codes related to Fuchsian groups have been considered before, our construction is original in that it describes the complete construction and decoding process, whereas earlier work has largely concentrated on the constellation design while giving little attention to the decoding and performance aspects. Another key difference from the aforementioned works is that we are studying codes arising from quaternion algebras and Fuchsian groups, and our aim is to apply the codes to the classical (euclidean) channel models such as the aforementioned AWGN channel, with possible future extension to fading channels @cite_2 @cite_23 . We do not use the hyperbolic metric as our design metric, but use the Fuchsian group as a starting point for the code generation. Nevertheless, our decoder will rely on hyperbolic geometry as opposed to the classical decoders based on euclidean geometry.
|
{
"cite_N": [
"@cite_23",
"@cite_2"
],
"mid": [
"2137141767",
"2053275186"
],
"abstract": [
"Multiple antennas at both the transmitter and receiver ends of a wireless digital transmission channel may increase both data rate and reliability. Reliable high rate transmission over such channels can only be achieved through Space-Time coding. Rank and determinant code design criteria have been proposed to enhance diversity and coding gain. The special case of full-diversity criterion requires that the difference of any two distinct codewords has full rank. Extensive work has been done on Space–Time coding, aiming at finding fully diverse codes with high rate. Division algebras have been proposed as a new tool for constructing Space–Time codes, since they are non-commutative algebras that naturally yield linear fully diverse codes. Their algebraic properties can thus be further exploited to improve the design of good codes. The aim of this work is to provide a tutorial introduction to the algebraic tools involved in the design of codes based on cyclic division algebras. The different design criteria involved will be illustrated, including the constellation shaping, the information lossless property, the non-vanishing determinant property, and the diversity multiplexing trade-off. The final target is to give the complete mathematical background underlying the construction of the Golden code and the other Perfect Space–Time block codes.",
"Algebraic number theory is having an increasing impact in code design for many different coding applications, such as single antenna fading channels and more recently, MIMO systems. Extended work has been done on single antenna fading channels, and algebraic lattice codes have been proven to be an effective tool. The general framework has been settled in the last ten years and many explicit code constructions based on algebraic number theory are now available.The aim of this work is to provide both an overview on algebraic lattice code designs for Rayleigh fading channels, as well as a tutorial introduction to algebraic number theory. The basic facts of this mathematical field will be illustrated by many examples and by the use of a computer algebra freeware in order to make it more accessible to a large audience."
]
}
|
1403.2763
|
2950611619
|
Many databases on the web are "hidden" behind (i.e., accessible only through) their restrictive, form-like, search interfaces. Recent studies have shown that it is possible to estimate aggregate query answers over such hidden web databases by issuing a small number of carefully designed search queries through the restrictive web interface. A problem with these existing works, however, is that they all assume the underlying database to be static, while most real-world web databases (e.g., Amazon, eBay) are frequently updated. In this paper, we study the novel problem of estimating/tracking aggregates over dynamic hidden web databases while adhering to the stringent query-cost limitation they enforce (e.g., at most 1,000 search queries per day). Theoretical analysis and extensive real-world experiments demonstrate the effectiveness of our proposed algorithms and their superiority over baseline solutions (e.g., the repeated execution of algorithms designed for static web databases).
|
Information Integration and Extraction for Hidden databases: A significant body of research has been done in this field - see tutorials @cite_30 @cite_28 . Due to limitations of space, we only list a few closely-related work: @cite_6 proposes a crawling solution. Parsing and understanding web query interfaces was extensively studied (e.g., @cite_20 @cite_3 ). The mapping of attributes across web interfaces was studied in @cite_27 .
|
{
"cite_N": [
"@cite_30",
"@cite_28",
"@cite_3",
"@cite_6",
"@cite_27",
"@cite_20"
],
"mid": [
"2015432483",
"",
"",
"2170188121",
"2027780984",
"2139259611"
],
"abstract": [
"We have witnessed the rapid growth of the Web-- It has not only \"broadened\" but also \"deepened\": While the \"surface Web\" has expanded from the 1999 estimate of 800 million to the recent 19.2 billion pages reported by Yahoo index, an equally or even more significant amount of information is hidden on the \"deep Web,\" behind query forms, recently estimated at over 1.2 million, of online databases. Accessing the information on the Web thus requires not only search to locate pages of interests, from the surface Web, but also integration to aggregate data from alternative or complementary sources, from the deep Web. Although the opportunities are unprecedented, the challenges are also immense: On the one hand, for the surface Web, while search seems to have evolved into a standard technology, its maturity and pervasiveness have also invited the attack of spam and the demand of personalization. On the other hand, for the deep Web, while the proliferation of structured sources has promised unlimited possibilities for more precise and aggregated access, it has also presented new challenges for realizing large scale and dynamic information integration. These issues are in essence related to data management, in a large scale, and thus present novel problems and interesting opportunities for our research community. This tutorial will discuss the new access scenarios and research problems in Web information access: from search of the surface Web to integration of the deep Web.",
"",
"",
"Current-day crawlers retrieve content only from the publicly indexable Web, i.e., the set of Web pages reachable purely by following hypertext links, ignoring search forms and pages that require authorization or prior registration. In particular, they ignore the tremendous amount of high quality content “hidden” behind search forms, in large searchable electronic databases. In this paper, we address the problem of designing a crawler capable of extracting content from this hidden Web. We introduce a generic operational model of a hidden Web crawler and describe how this model is realized in HiWE (Hidden Web Exposer), a prototype crawler built at Stanford. We introduce a new Layout-based Information Extraction Technique (LITE) and demonstrate its use in automatically extracting semantic information from search forms and response pages. We also present results from experiments conducted to test and validate our techniques.",
"To enable information integration, schema matching is a critical step for discovering semantic correspondences of attributes across heterogeneous sources. While complex matchings are common, because of their far more complex search space, most existing techniques focus on simple 1:1 matchings. To tackle this challenge, this paper takes a conceptually novel approach by viewing schema matching as correlation mining, for our task of matching Web query interfaces to integrate the myriad databases on the Internet. On this \"deep Web,\" query interfaces generally form complex matchings between attribute groups (e.g., [author] corresponds to [first name, last name] in the Books domain). We observe that the co-occurrences patterns across query interfaces often reveal such complex semantic relationships: grouping attributes (e.g., [first name, last name]) tend to be co-present in query interfaces and thus positively correlated. In contrast, synonym attributes are negatively correlated because they rarely co-occur. This insight enables us to discover complex matchings by a correlation mining approach. In particular, we develop the DCM framework, which consists of data preparation, dual mining of positive and negative correlations, and finally matching selection. Unlike previous correlation mining algorithms, which mainly focus on finding strong positive correlations, our algorithm cares both positive and negative correlations, especially the subtlety of negative correlations, due to its special importance in schema matching. This leads to the introduction of a new correlation measure, @math -measure, distinct from those proposed in previous work. We evaluate our approach extensively and the results show good accuracy for discovering complex matchings.",
"Much data in the Web is hidden behind Web query interfaces. In most cases the only means to \"surface\" the content of a Web database is by formulating complex queries on such interfaces. Applications such as Deep Web crawling and Web database integration require an automatic usage of these interfaces. Therefore, an important problem to be addressed is the automatic extraction of query interfaces into an appropriate model. We hypothesize the existence of a set of domain-independent \"commonsense design rules\" that guides the creation of Web query interfaces. These rules transform query interfaces into schema trees. In this paper we describe a Web query interface extraction algorithm, which combines HTML tokens and the geometric layout of these tokens within a Web page. Tokens are classified into several classes out of which the most significant ones are text tokens and field tokens. A tree structure is derived for text tokens using their geometric layout. Another tree structure is derived for the field tokens. The hierarchical representation of a query interface is obtained by iteratively merging these two trees. Thus, we convert the extraction problem into an integration problem. Our experiments show the promise of our algorithm: it outperforms the previous approaches on extracting query interfaces on about 6.5 in accuracy as evaluated over three corpora with more than 500 Deep Web interfaces from 15 different domains."
]
}
|
1403.2763
|
2950611619
|
Many databases on the web are "hidden" behind (i.e., accessible only through) their restrictive, form-like, search interfaces. Recent studies have shown that it is possible to estimate aggregate query answers over such hidden web databases by issuing a small number of carefully designed search queries through the restrictive web interface. A problem with these existing works, however, is that they all assume the underlying database to be static, while most real-world web databases (e.g., Amazon, eBay) are frequently updated. In this paper, we study the novel problem of estimating/tracking aggregates over dynamic hidden web databases while adhering to the stringent query-cost limitation they enforce (e.g., at most 1,000 search queries per day). Theoretical analysis and extensive real-world experiments demonstrate the effectiveness of our proposed algorithms and their superiority over baseline solutions (e.g., the repeated execution of algorithms designed for static web databases).
|
Aggregate Query Processing over Dynamic Databases: There has been extensive work on approximate aggregate query processing over databases using sampling-based techniques @cite_11 @cite_10 @cite_8 and non-sampling-based techniques such as histograms @cite_18 and wavelets @cite_9 . See @cite_26 for a survey. A common approach is to build a synopsis of the database or data stream and use it for aggregate estimation. Maintenance of statistical aggregates in the presence of database updates has been considered in @cite_32 @cite_22 @cite_7 . Another related area is answering continuous aggregate queries, which are evaluated continuously over stream data @cite_29 @cite_12 . A major difference from prior work is that the changes to the underlying database are not known to our algorithm, and we can also perform trans-round aggregate estimates.
|
{
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_22",
"@cite_7",
"@cite_8",
"@cite_9",
"@cite_29",
"@cite_32",
"@cite_10",
"@cite_12",
"@cite_11"
],
"mid": [
"2096021368",
"1586825695",
"2081189989",
"2148706674",
"2169982181",
"2110557355",
"2001474264",
"2039100193",
"",
"2132520482",
"2078907037"
],
"abstract": [
"Summary form only given. The last few years have witnessed a significant increase in the use of databases for complex data analysis (OLAP) applications. These applications often require very quick responses from the DBMS. However, they also involve complex queries on large volumes of data. Despite significant improvement in database support for OLAP over the last few years, most DBMSs still fall short of providing quick enough responses. We present a novel solution to this problem: we use small amounts of precomputed summary statistics of the data to answer the queries quickly, albeit approximately. Our hypothesis is that many OLAP applications can tolerate approximations in query results in return for huge response time reductions. The work is part of our efforts to build an efficient data analysis system called AQUA. We describe some of the technical problems addressed in this effort.",
"Approximate Query Processing: Taming the Terabytes (Garofalakis & Gibbons, VLDB 2001 tutorial). Outline: Intro & Approximate Query Answering Overview (synopses, system architecture, commercial offerings); One-Dimensional Synopses (histograms, samples, wavelets); Multi-Dimensional Synopses and Joins (multi-D histograms, join synopses, wavelets); Set-Valued Queries (using histograms, samples, wavelets); Advanced Techniques & Future Directions (streaming data, dependency-based, workload-tuned); Conclusions.",
"In many applications from telephone fraud detection to network management, data arrives in a stream, and there is a need to maintain a variety of statistical summary information about a large number of customers in an online fashion. At present, such applications maintain basic aggregates such as running extrema values (MIN, MAX), averages, standard deviations, etc., that can be computed over data streams with limited space in a straightforward way. However, many applications require knowledge of more complex aggregates relating different attributes, so-called correlated aggregates. As an example, one might be interested in computing the percentage of international phone calls that are longer than the average duration of a domestic phone call. Exact computation of this aggregate requires multiple passes over the data stream, which is infeasible. We propose single-pass techniques for approximate computation of correlated aggregates over both landmark and sliding window views of a data stream of tuples, using a very limited amount of space. We consider both the case where the independent aggregate (average duration in the example above) is an extrema value and the case where it is an average value, with any standard aggregate as the dependent aggregate; these can be used as building blocks for more sophisticated aggregates. We present an extensive experimental study based on some real and a wide variety of synthetic data sets to demonstrate the accuracy of our techniques. We show that this effectiveness is explained by the fact that our techniques exploit monotonicity and convergence properties of aggregates over data streams.",
"Recent years have witnessed an increasing interest in designing algorithms for querying and analyzing streaming data (i.e., data that is seen only once in a fixed order) with only limited memory. Providing (perhaps approximate) answers to queries over such continuous data streams is a crucial requirement for many application environments; examples include large telecom and IP network installations where performance data from different parts of the network needs to be continuously collected and analyzed.In this paper, we consider the problem of approximately answering general aggregate SQL queries over continuous data streams with limited memory. Our method relies on randomizing techniques that compute small \"sketch\" summaries of the streams that can then be used to provide approximate answers to aggregate queries with provable guarantees on the approximation error. We also demonstrate how existing statistical information on the base data (e.g., histograms) can be used in the proposed framework to improve the quality of the approximation provided by our algorithms. The key idea is to intelligently partition the domain of the underlying attribute(s) and, thus, decompose the sketching problem in a way that provably tightens our guarantees. Results of our experimental study with real-life as well as synthetic data streams indicate that sketches provide significantly more accurate answers compared to histograms for aggregate queries. This is especially true when our domain partitioning methods are employed to further boast the accuracy of the final estimates.",
"Studies the problem of approximately answering aggregation queries using sampling. We observe that uniform sampling performs poorly when the distribution of the aggregated attribute is skewed. To address this issue, we introduce a technique called outlier indexing. Uniform sampling is also ineffective for queries with low selectivity. We rely on weighted sampling based on workload information to overcome this shortcoming. We demonstrate that a combination of outlier indexing with weighted sampling can be used to answer aggregation queries with a significantly reduced approximation error compared to either uniform sampling or weighted sampling alone. We discuss the implementation of these techniques on Microsoft's SQL Server and present experimental results that demonstrate the merits of our techniques.",
"Approximate query processing has emerged as a cost-effective approach for dealing with the huge data volumes and stringent response-time requirements of today's decision support systems (DSS). Most work in this area, however, has so far been limited in its query processing scope, typically focusing on specific forms of aggregate queries. Furthermore, conventional approaches based on sampling or histograms appear to be inherently limited when it comes to approximating the results of complex queries over high-dimensional DSS data sets. In this paper, we propose the use of multi-dimensional wavelets as an effective tool for general-purpose approximate query processing in modern, high-dimensional applications. Our approach is based on building wavelet-coefficient synopses of the data and using these synopses to provide approximate answers to queries. We develop novel query processing algorithms that operate directly on the wavelet-coefficient synopses of relational tables, allowing us to process arbitrarily complex queries entirely in the wavelet-coefficient domain. This guarantees extremely fast response times since our approximate query execution engine can do the bulk of its processing over compact sets of wavelet coefficients, essentially postponing the expansion into relational tuples until the end-result of the query. We also propose a novel wavelet decomposition algorithm that can build these synopses in an I O-efficient manner. Finally, we conduct an extensive experimental study with synthetic as well as real-life data sets to determine the effectiveness of our wavelet-based approach compared to sampling and histograms. Our results demonstrate that our techniques: (1) provide approximate answers of better quality than either sampling or histograms; (2) offer query execution-time speedups of more than two orders of magnitude; and (3) guarantee extremely fast synopsis construction times that scale linearly with the size of the data.",
"In this overview paper we motivate the need for and research issues arising from a new model of data processing. In this model, data does not take the form of persistent relations, but rather arrives in multiple, continuous, rapid, time-varying data streams. In addition to reviewing past work relevant to data stream systems and current projects in the area, the paper explores topics in stream query languages, new requirements and challenges in query processing, and algorithmic issues.",
"",
"",
"In many recent applications, data may take the form of continuous data streams, rather than finite stored data sets. Several aspects of data management need to be reconsidered in the presence of data streams, offering a new research direction for the database community. In this paper we focus primarily on the problem of query processing, specifically on how to define and evaluate continuous queries over data streams. We address semantic issues as well as efficiency concerns. Our main contributions are threefold. First, we specify a general and flexible architecture for query processing in the presence of data streams. Second, we use our basic architecture as a tool to clarify alternative semantics and processing techniques for continuous queries. The architecture also captures most previous work on continuous queries and data streams, as well as related concepts such as triggers and materialized views. Finally, we map out research topics in the area of query processing over data streams, showing where previous work is relevant and describing problems yet to be addressed.",
"The ability to approximately answer aggregation queries accurately and efficiently is of great benefit for decision support and data mining tools. In contrast to previous sampling-based studies, we treat the problem as an optimization problem whose goal is to minimize the error in answering queries in the given workload. A key novelty of our approach is that we can tailor the choice of samples to be robust even for workloads that are “similar” but not necessarily identical to the given workload. Finally, our techniques recognize the importance of taking into account the variance in the data distribution in a principled manner. We show how our solution can be implemented on a database system, and present results of extensive experiments on Microsoft SQL Server 2000 that demonstrate the superior quality of our method compared to previous work."
]
}
|
1403.2819
|
2949189587
|
In many cases, it is more profitable to apply existing methodologies than to develop new ones. This holds especially for system development within the cyber-physical domain: up to a certain abstraction level, we can (re)use the methodologies of software system development and benefit from the advantages of these techniques.
|
There are many approaches to mechatronic cyber-physical systems; however, most of them do not focus on the logical level of the system representation and lose the advantages of the abstract representation: a better overview, the possibility to validate the system in earlier phases, etc. For instance, the work presented in @cite_17 defines extensive support for component communication and time requirements, while the model discussed in @cite_12 proposes a complete model of the processes with communication. Nevertheless, in our opinion one limitation of such approaches is that the system is represented with a flat view, that is, there is only a single abstraction level to represent it. That could be a disadvantage in the design of a cyber-physical system, where experts from different domains should be able to cooperate and work on different views and abstraction levels of the system. Modeling theories for distributed hybrid systems such as SHIFT @cite_5 and R-Charon @cite_14 guarantee a complete simulation and compilation of the models, but they have no verification support. The same limitation holds for UPPAAL @cite_16 and PHAVer @cite_9 , which provide simulation but only limited verification, with restricted dynamics and only for small fragments.
|
{
"cite_N": [
"@cite_14",
"@cite_9",
"@cite_5",
"@cite_16",
"@cite_12",
"@cite_17"
],
"mid": [
"1536701116",
"2126489030",
"2252436646",
"1962072139",
"2152414327",
""
],
"abstract": [
"This paper describes the modeling language R-Charon as an extension for architectural reconfiguration to the existing distributed hybrid system modeling language Charon. The target application domain of R-Charon includes but is not limited to modular reconfigurable robots and large-scale transportation systems.While largely leaving the Charon syntax and semantics intact, R-Charon allows dynamic creation and destruction of components (agents) as well as of links (references) between the agents. As such,R-Charon is the first formal, hybrid automata based modeling language which also addresses dynamic reconfiguration. We develop and present the syntax and operational semantics for R-Charon on three levels: behavior (modes), structure (agents) and configuration (system).",
"Abstract The hybrid χ (Chi) formalism integrates concepts from dynamics and control theory with concepts from computer science, in particular from process algebra and hybrid automata. It integrates ease of modeling with a straightforward, structured operational semantics. Its ‘consistent equation semantics’ enforces state changes to be consistent with delay predicates, that combine the invariant and flow clauses of hybrid automata. Ease of modeling is ensured by means of the following concepts: (1) different classes of variables: discrete and continuous, of subclass jumping or non-jumping, and algebraic; (2) strong time determinism of alternative composition in combination with delayable guards; (3) integration of urgent and non-urgent actions; (4) differential algebraic equations as a process term as in mathematics; (5) steady-state initialization; and 6) several user-friendly syntactic extensions. Furthermore, the χ formalism incorporates several concepts for complex system specification: (1) process terms for scoping that integrate abstraction, local variables, local channels and local recursion definitions; (2) process definition and instantiation that enable process re-use, encapsulation, hierarchical and or modular composition of processes; and (3) different interaction mechanisms: handshake synchronization and synchronous communication that allow interaction between processes without sharing variables, and shared variables that enable modular composition of continuous-time or hybrid processes. The syntax and semantics are illustrated using several examples.",
"SHIFT is a programming language for the specification and simulation of dynamic networks of hybrid automata. Such systems consist of components which can be created, interconnected and destroyed as the system evolves. Components exhibit hybrid behavior, consisting of continuous-time phases separated by discrete-event transitions. Components may evolve independently, or they may interact through their inputs, outputs and exported events. The interaction network itself may evolve.",
"This is a tutorial paper on the tool Uppaal. Its goal is to be a short introduction on the flavor of timed automata implemented in the tool, to present its interface, and to explain how to use the tool. The contribution of the paper is to provide reference examples and modeling patterns.",
"Existing models focus on specific aspects of distributed automation systems, e.g. communication aspects (IEC 61158) or application aspects (IEC 61131–3). Systems like IEC 61499 are directly dedicated to distributed applications but leave essential details of communication integration open. For the design of distributed automation systems it is necessary to understand the behavior of the complete system. This contribution introduces a concise model of distributed automation systems integrating communication layer of the OSI reference model with the non-distributed model of IEC 61131–3.",
""
]
}
|
1403.2765
|
1949487232
|
This paper introduces a new metaobject, the generalizer, which complements the existing specializer metaobject. With the help of examples, we show that this metaobject allows for the efficient implementation of complex non-class-based dispatch within the framework of existing metaobject protocols. We present our modifications to the generic function invocation protocol from the Art of the Metaobject Protocol; in combination with previous work, this produces a fully-functional extension of the existing mechanism for method selection and combination, including support for method combination completely independent from method selection. We discuss our implementation, within the SBCL implementation of Common Lisp, and in that context compare the performance of the new protocol with the standard one, demonstrating that the new protocol can be tolerably efficient.
|
In some sense, all dispatch schemes are specializations of predicate dispatch @cite_12 . The main problem with predicate dispatch is its expressiveness: with arbitrary predicates able to control dispatch, it is essentially impossible to perform any substantial precomputation, or even to automatically determine an ordering of methods given a set of arguments. Even Clojure's restricted dispatch scheme provides an explicit operator for stating a preference order among methods, whereas here we provide an operator to order specializers; in filtered dispatch the programmer implicitly gives the system an order of precedence, through the lexical ordering of filter specifications in a filtered function definition.
|
{
"cite_N": [
"@cite_12"
],
"mid": [
"2782656162"
],
"abstract": [
"Predicate dispatching generalizes previous method dispatch mechanisms by permitting arbitrary predicates to control method applicability and by using logical implication between predicates as the overriding relationship. The method selected to handle a message send can depend not just on the classes of the arguments, as in ordinary object-oriented dispatch, but also on the classes of subcomponents, on an argument's state, and on relationships between objects. This simple mechanism subsumes and extends object-oriented single and multiple dispatch, ML-style pattern matching, predicate classes, and classifiers, which can all be regarded as syntactic sugar for predicate dispatching. This paper introduces predicate dispatching, gives motivating examples, and presents its static and dynamic semantics. An implementation of predicate dispatching is available."
]
}
|
1403.2765
|
1949487232
|
This paper introduces a new metaobject, the generalizer, which complements the existing specializer metaobject. With the help of examples, we show that this metaobject allows for the efficient implementation of complex non-class-based dispatch within the framework of existing metaobject protocols. We present our modifications to the generic function invocation protocol from the Art of the Metaobject Protocol; in combination with previous work, this produces a fully-functional extension of the existing mechanism for method selection and combination, including support for method combination completely independent from method selection. We discuss our implementation, within the SBCL implementation of Common Lisp, and in that context compare the performance of the new protocol with the standard one, demonstrating that the new protocol can be tolerably efficient.
|
The Slate programming environment combines prototype-oriented programming with multiple dispatch @cite_13 ; in that context, the analogue of an argument's class (in Common Lisp) as a representation of the equivalence class of objects with the same behaviour is the tuple of roles and delegations: objects with the same roles and delegations tuple behave the same, much as objects with the same generalizer have the same behaviour in the protocol described in this paper.
|
{
"cite_N": [
"@cite_13"
],
"mid": [
"1582123168"
],
"abstract": [
"Two object-oriented programming language paradigms—dynamic, prototype-based languages and multi-method languages—provide orthogonal benefits to software engineers. These two paradigms appear to be in conflict, however, preventing engineers from realizing the benefits of both technologies in one system. This paper introduces a novel object model, prototypes with multiple dispatch (PMD), which seamlessly unifies these two approaches. We give formal semantics for PMD, and discuss implementation and experience with PMD in the dynamically typed programming language Slate."
]
}
|
1403.2340
|
2952391100
|
We provide a general framework to construct finite dimensional approximations of the space of convex functions, which also applies to the space of c-convex functions and to the space of support functions of convex bodies. We give estimates of the distance between the approximation space and the admissible set. This framework applies to the approximation of convex functions by piecewise linear functions on a mesh of the domain and by other finite-dimensional spaces such as tensor-product splines. We show how these discretizations are well suited for the numerical solution of problems of calculus of variations under convexity constraints. Our implementation relies on proximal algorithms, and can be easily parallelized, thus making it applicable to large scale problems in dimension two and three. We illustrate the versatility and the efficiency of our approach on the numerical solution of three problems in calculus of variations: 3D denoising, the principal agent problem, and optimization within the class of convex bodies.
|
* Mesh versus grid constraints Carlier, Lachand-Robert and Maury proposed in @cite_22 to replace the space of P @math convex functions by the space of convex interpolates. For every fixed mesh, a piecewise linear function is a convex interpolate if it is obtained by linearly interpolating the restriction of a convex function to the nodes of the mesh. Note that these functions are not necessarily convex, and the method is therefore not interior. Density results are straightforward in this context, but the number of linear constraints that have to be imposed on nodal values is rather large. The authors observe that in the case of a regular grid, one needs @math constraints in order to describe the space of convex interpolates, where @math stands for the number of nodes of the mesh.
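To make the quadratic constraint count concrete, here is a hypothetical Python sketch (not the exact characterization used in @cite_22): it enforces midpoint convexity between every pair of grid nodes whose midpoint is itself a node, and counts the resulting linear constraints.

```python
import itertools

def is_convex_interpolate(values, n):
    """Midpoint-convexity check for nodal values on an n x n grid.

    For every pair of nodes whose midpoint is again a grid node we
    require u(mid) <= (u(a) + u(b)) / 2.  This is only an illustration
    of how the number of linear constraints grows quadratically with
    the number of nodes, not the exact characterization of the paper.
    """
    nodes = list(itertools.product(range(n), repeat=2))
    ok, n_constraints = True, 0
    for a, b in itertools.combinations(nodes, 2):
        mi, mj = a[0] + b[0], a[1] + b[1]
        if mi % 2 or mj % 2:
            continue  # midpoint falls between grid nodes
        n_constraints += 1
        if values[(mi // 2, mj // 2)] > (values[a] + values[b]) / 2 + 1e-12:
            ok = False
    return ok, n_constraints

# Nodal values of the convex function x^2 + y^2 on a 4 x 4 grid.
vals = {(i, j): i * i + j * j for i in range(4) for j in range(4)}
print(is_convex_interpolate(vals, 4))  # → (True, 24)
```

Even on this small 4 x 4 grid the pairwise description already produces 24 constraints, illustrating the quadratic growth discussed above.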
|
{
"cite_N": [
"@cite_22"
],
"mid": [
"2047510318"
],
"abstract": [
"We describe an algorithm to approximate the minimizer of an elliptic functional of the form ∫ j(x, u, ∇u) on the set C of convex functions u in an appropriate functional space X. Such problems arise for instance in mathematical economics [4]. A special case gives the convex envelope u_0** of a given function u_0. Let (T_n) be any quasiuniform sequence of meshes whose diameter goes to zero, and I_n the corresponding affine interpolation operators. We prove that the minimizer over C is the limit of the sequence (u_n), where u_n minimizes the functional over I_n(C). We give an implementable characterization of I_n(C). Then the finite dimensional problem turns out to be a minimization problem with linear constraints."
]
}
|
1403.2340
|
2952391100
|
We provide a general framework to construct finite dimensional approximations of the space of convex functions, which also applies to the space of c-convex functions and to the space of support functions of convex bodies. We give estimates of the distance between the approximation space and the admissible set. This framework applies to the approximation of convex functions by piecewise linear functions on a mesh of the domain and by other finite-dimensional spaces such as tensor-product splines. We show how these discretizations are well suited for the numerical solution of problems of calculus of variations under convexity constraints. Our implementation relies on proximal algorithms, and can be easily parallelized, thus making it applicable to large scale problems in dimension two and three. We illustrate the versatility and the efficiency of our approach on the numerical solution of three problems in calculus of variations: 3D denoising, the principal agent problem, and optimization within the class of convex bodies.
|
Aguilera and Morin @cite_3 proposed a finite-difference approximation of the space of convex functions using discrete convex Hessians. They prove that it is possible to impose convexity by requiring a linear number of nonlinear constraints with respect to the number of nodes. The resulting fully nonlinear optimization problems are solved using semidefinite programming codes. Whereas convergence is proved in a rather general setting, the practical efficiency of this approach is limited by the capabilities of semidefinite solvers. In a similar spirit, Oberman @cite_10 considers the space of functions that satisfy local convexity constraints along a finite set of directions. By changing the size of the stencil, the author proposed different discretizations which lead to exterior or interior approximations. Estimates of the quality of the approximation are given for smooth convex functions.
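A hypothetical sketch of the directional (wide-stencil) convexity test underlying Oberman's discretization; the stencil, grid size, and tolerance here are illustrative choices, not those of @cite_10. Each node must have nonnegative centered second differences along every stencil direction.

```python
def directionally_convex(u, n, stencil=((1, 0), (0, 1), (1, 1), (1, -1))):
    """Check local convexity of nodal values on an n x n grid:
    the centered second difference
        u[x + e] - 2 u[x] + u[x - e] >= 0
    must hold for every stencil direction e wherever both neighbours
    exist.  Enlarging the stencil tightens the discrete cone toward
    true convexity.
    """
    for i in range(n):
        for j in range(n):
            for ei, ej in stencil:
                a, b = (i + ei, j + ej), (i - ei, j - ej)
                if min(a + b) < 0 or max(a + b) >= n:
                    continue  # one neighbour leaves the grid
                if u[a] - 2 * u[(i, j)] + u[b] < -1e-12:
                    return False
    return True

u = {(i, j): (i - 1.5) ** 2 + (j - 1.5) ** 2 for i in range(4) for j in range(4)}
print(directionally_convex(u, 4))   # → True: paraboloid samples pass
u[(1, 1)] = 10.0
print(directionally_convex(u, 4))   # → False: the spike violates a second difference
```

Only a linear number of constraints per node is checked, in contrast with the quadratic pairwise descriptions mentioned earlier.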
|
{
"cite_N": [
"@cite_10",
"@cite_3"
],
"mid": [
"2068491204",
"2016361304"
],
"abstract": [
"We consider the problem of approximating the solution of variational problems subject to the constraint that the admissible functions must be convex. This problem is at the interface between convex analysis, convex optimization, variational problems, and partial differential equation techniques. The approach is to approximate the (nonpolyhedral) cone of convex functions by a polyhedral cone which can be represented by linear inequalities. This approach leads to an optimization problem with linear constraints which can be computed efficiently, hundreds of times faster than existing methods.",
"Many problems of theoretical and practical interest involve finding an optimum over a family of convex functions. For instance, finding the projection on the convex functions in Hk(Ω), and optimizing functionals arising from some problems in economics. In the continuous setting and assuming smoothness, the convexity constraints may be given locally by asking the Hessian matrix to be positive semidefinite, but in making discrete approximations two difficulties arise: the continuous solutions may be not smooth, and functions with positive semidefinite discrete Hessian need not be convex in a discrete sense. Previous work has concentrated on non-local descriptions of convexity, making the number of constraints to grow super-linearly with the number of nodes even in dimension 2, and these descriptions are very difficult to extend to higher dimensions. In this paper we propose a finite difference approximation using positive semidefinite programs and discrete Hessians, and prove convergence under very general conditions, even when the continuous solution is not smooth, working on any dimension, and requiring a linear number of constraints in the number of nodes. Using semidefinite programming codes, we show concrete examples of approximations to problems in two and three dimensions."
]
}
|
1403.2340
|
2952391100
|
We provide a general framework to construct finite dimensional approximations of the space of convex functions, which also applies to the space of c-convex functions and to the space of support functions of convex bodies. We give estimates of the distance between the approximation space and the admissible set. This framework applies to the approximation of convex functions by piecewise linear functions on a mesh of the domain and by other finite-dimensional spaces such as tensor-product splines. We show how these discretizations are well suited for the numerical solution of problems of calculus of variations under convexity constraints. Our implementation relies on proximal algorithms, and can be easily parallelized, thus making it applicable to large scale problems in dimension two and three. We illustrate the versatility and the efficiency of our approach on the numerical solution of three problems in calculus of variations: 3D denoising, the principal agent problem, and optimization within the class of convex bodies.
|
Ekeland and Moreno-Bromberg @cite_21 proposed a dual approach for parameterizing the space of convex functions on a domain. Given a finite set of points @math in the domain, they parameterize convex functions by their values @math and gradients @math at those points. In order to ensure that these couples of values and gradients @math are induced by a convex function, they add for every pair of points in @math the constraints @math . This discretization is interior, and it is easy to show that the phenomenon of Choné and Le Meur does not occur for this type of approximation. However, the high number of constraints makes it difficult to solve large-scale problems using this approach. Mirebeau @cite_7 is currently investigating an adaptive version of this method that would allow its application to larger problems.
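The pairwise subgradient constraints of this dual approach can be sketched as a feasibility check; the sample points and function below are illustrative, not taken from @cite_21.

```python
def subgradient_feasible(points, values, grads, tol=1e-9):
    """Check the pairwise subgradient inequalities
        u_j >= u_i + <p_i, x_j - x_i>  for all i != j,
    i.e. the O(m^2) linear constraints that make (values, gradients)
    consistent with some convex function through the m sample points.
    """
    m = len(points)
    for i in range(m):
        for j in range(m):
            if i == j:
                continue
            inner = sum(p * (a - b)
                        for p, a, b in zip(grads[i], points[j], points[i]))
            if values[j] < values[i] + inner - tol:
                return False
    return True

# Samples of f(x, y) = x^2 + y^2 with exact gradients are feasible.
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
vals_d = [x * x + y * y for x, y in pts]
grds = [(2 * x, 2 * y) for x, y in pts]
print(subgradient_feasible(pts, vals_d, grds))  # → True
```

The quadratic number of inequalities visible here is exactly what limits the scalability of the method, motivating the adaptive refinement of @cite_7.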
|
{
"cite_N": [
"@cite_21",
"@cite_7"
],
"mid": [
"1967110739",
"2951095155"
],
"abstract": [
"We present an algorithm to approximate the solutions to variational problems where set of admissible functions consists of convex functions. The main motivation behind the numerical method is to compute solutions to Adverse Selection problems within a Principal-Agent framework. Problems such as product lines design, optimal taxation, structured derivatives design, etc. can be studied through the scope of these models. We develop a method to estimate their optimal pricing schedules.",
"We address the discretization of optimization problems posed on the cone of convex functions, motivated in particular by the principal agent problem in economics, which models the impact of monopoly on product quality. Consider a two dimensional domain, sampled on a grid of N points. We show that the cone of restrictions to the grid of convex functions is in general characterized by N^2 linear inequalities; a direct computational use of this description therefore has a prohibitive complexity. We thus introduce a hierarchy of sub-cones of discrete convex functions, associated to stencils which can be adaptively, locally, and anisotropically refined. Numerical experiments optimize the accuracy complexity tradeoff through the use of a-posteriori stencil refinement strategies."
]
}
|
1403.2319
|
1554792604
|
We introduce an efficient combination of polyhedral analysis and predicate partitioning. Template polyhedral analysis abstracts numerical variables inside a program by one polyhedron per control location, with a priori fixed directions for the faces. The strongest inductive invariant in such an abstract domain may be computed by upward strategy iteration. If the transition relation includes disjunctions and existential quantifiers (a succinct representation for an exponential set of paths), this invariant can be computed by a combination of strategy iteration and satisfiability modulo theory (SMT) solving. Unfortunately, the above approaches lead to unacceptable space and time costs if applied to a program whose control states have been partitioned according to predicates. We therefore propose a modification of the strategy iteration algorithm where the strategies are stored succinctly, and the linear programs to be solved at each iteration step are simplified according to an equivalence relation. We have implemented the technique in a prototype tool and we demonstrate on a series of examples that the approach performs significantly better than previous strategy iteration techniques.
|
Early work in compilation and verification of reactive systems @cite_3 advocated quotienting the Boolean state space according to some form of bisimulation. In contrast, we compute coarser equivalences according to per-constraint semantics. In the industrial-strength analyzer, static heuristics determine reasonably small packs of "related" Booleans and numerical variables, such that the values of the numerical variables are analyzed separately for each Boolean valuation [ .2.4] BlanchetCousotEtAl_PLDI03 . In contrast, our equivalence classes are computed dynamically and per-constraint.
|
{
"cite_N": [
"@cite_3"
],
"mid": [
"2016079448"
],
"abstract": [
"We address the problem of generating a minimal state graph from a program, without building the whole state graph. Minimality is considered here with respect to bisimulation. A generation algorithm is derived and illustrated. Applications concern program verification and control synthesis in reactive program compilation."
]
}
|
1403.2484
|
2030252673
|
This paper addresses the problem of transferring useful knowledge from a source network to predict node labels in a newly formed target network. While existing transfer learning research has primarily focused on vector-based data, in which the instances are assumed to be independent and identically distributed, how to effectively transfer knowledge across different information networks has not been well studied, mainly because networks may have their distinct node features and link relationships between nodes. In this paper, we propose a new transfer learning algorithm that attempts to transfer common latent structure features across the source and target networks. The proposed algorithm discovers these latent features by constructing label propagation matrices in the source and target networks, and mapping them into a shared latent feature space. The latent features capture common structure patterns shared by two networks, and serve as domain-independent features to be transferred between networks. Together with domain-dependent node features, we thereafter propose an iterative classification algorithm that leverages label correlations to predict node labels in the target network. Experiments on real-world networks demonstrate that our proposed algorithm can successfully achieve knowledge transfer between networks to help improve the accuracy of classifying nodes in the target network.
|
Collective classification has recently attracted significant attention for classifying relational data in the data mining area @cite_6 @cite_17 . Networked data is one typical type of relational data, in which instances are represented as nodes and the relationships between nodes are represented as edges. Collective classification exploits dependencies between instances, which makes it one of the most favorable classification methods for networked data.
|
{
"cite_N": [
"@cite_6",
"@cite_17"
],
"mid": [
"2153959628",
"2100045227"
],
"abstract": [
"Many real-world applications produce networked data such as the world-wide web (hypertext documents connected via hyperlinks), social networks (for example, people connected by friendship links), communication networks (computers connected via communication links) and biological networks (for example, protein interaction networks). A recent focus in machine learning research has been to extend traditional machine learning classification techniques to classify nodes in such networks. In this article, we provide a brief introduction to this area of research and how it has progressed during the past decade. We introduce four of the most widely used inference algorithms for classifying networked data and empirically compare them on both synthetic and real-world data.",
"Many collective classification (CC) algorithms have been shown to increase accuracy when instances are interrelated. However, CC algorithms must be carefully applied because their use of estimated labels can in some cases decrease accuracy. In this article, we show that managing this label uncertainty through cautious algorithmic behavior is essential to achieving maximal, robust performance. First, we describe cautious inference and explain how four well-known families of CC algorithms can be parameterized to use varying degrees of such caution. Second, we introduce cautious learning and show how it can be used to improve the performance of almost any CC algorithm, with or without cautious inference. We then evaluate cautious inference and learning for the four collective inference families, with three local classifiers and a range of both synthetic and real-world data. We find that cautious learning and cautious inference typically outperform less cautious approaches. In addition, we identify the data characteristics that predict more substantial performance differences. Our results reveal that the degree of caution used usually has a larger impact on performance than the choice of the underlying inference algorithm. Together, these results identify the most appropriate CC algorithms to use for particular task characteristics and explain multiple conflicting findings from prior CC research."
]
}
|
1403.2484
|
2030252673
|
This paper addresses the problem of transferring useful knowledge from a source network to predict node labels in a newly formed target network. While existing transfer learning research has primarily focused on vector-based data, in which the instances are assumed to be independent and identically distributed, how to effectively transfer knowledge across different information networks has not been well studied, mainly because networks may have their distinct node features and link relationships between nodes. In this paper, we propose a new transfer learning algorithm that attempts to transfer common latent structure features across the source and target networks. The proposed algorithm discovers these latent features by constructing label propagation matrices in the source and target networks, and mapping them into a shared latent feature space. The latent features capture common structure patterns shared by two networks, and serve as domain-independent features to be transferred between networks. Together with domain-dependent node features, we thereafter propose an iterative classification algorithm that leverages label correlations to predict node labels in the target network. Experiments on real-world networks demonstrate that our proposed algorithm can successfully achieve knowledge transfer between networks to help improve the accuracy of classifying nodes in the target network.
|
Iterative classification algorithm (ICA) is an iterative method that is widely applied and extended in many studies @cite_15 @cite_12 @cite_10 . The basic assumption of ICA is that, given the labels of a node's neighbors, the label of the node is independent of the features of its neighbors and non-neighbors, and of the labels of all non-neighbors. In ICA, each node is represented by combining the node features with relational features constructed from the labels of all the node's neighbors. The relational features can be computed with an aggregation function over the neighbors, such as @math , @math , @math and so on. Based on the node features and relational features, ICA trains a classifier and iteratively updates the predictions of all nodes by using the current predictions for nodes with unknown labels. This process continues until the algorithm converges. In this work, we adopt an ICA-like algorithm to perform collective classification, focusing on transferring structure knowledge from the source network to improve collective classification accuracy on the target network, under the assumption that the number of labeled nodes is very limited.
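A minimal, hypothetical sketch of the ICA loop just described, with a majority vote over neighbour labels standing in for the trained local classifier and its aggregated relational features:

```python
def iterative_classification(adj, labels, n_iter=10):
    """A minimal ICA sketch: each unlabelled node repeatedly takes the
    majority label among its already-predicted neighbours.  Real ICA
    trains a local classifier on node features plus aggregated
    neighbour-label features; majority vote stands in for it here.
    Ties break deterministically toward the lexicographically
    smallest label.
    """
    pred = dict(labels)                      # known labels stay fixed
    unknown = [v for v in adj if v not in labels]
    for v in unknown:
        pred[v] = None                       # bootstrap: no label yet
    for _ in range(n_iter):
        for v in unknown:
            votes = {}
            for w in adj[v]:
                if pred[w] is not None:
                    votes[pred[w]] = votes.get(pred[w], 0) + 1
            if votes:
                pred[v] = max(sorted(votes), key=votes.get)
    return pred

# A 4-node chain with the two endpoints labelled.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(iterative_classification(adj, {0: 'a', 3: 'b'}))
```

Node 1 inherits 'a' from its labelled neighbour; node 2 sees a tie between 'a' and 'b' and breaks it deterministically.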
|
{
"cite_N": [
"@cite_15",
"@cite_10",
"@cite_12"
],
"mid": [
"2121250409",
"2143387805",
"2133075925"
],
"abstract": [
"Relational data offer a unique opportunity for improving the classification accuracy of statistical models. If two objects are related, inferring something about one object can aid inferences about the other. We present an iterative classification procedure that exploits this characteristic of relational data. This approach uses simple Bayesian classifiers in an iterative fashion, dynamically updating the attributes of some objects as inferences are made about related objects. Inferences made with high confidence in initial iterations are fed back into the data and are used to inform subsequent inferences about related objects. We evaluate the performance of this approach on a binary classification task. Experiments indicate that iterative classification significantly increases accuracy when compared to a single-pass approach.",
"We introduce a novel active learning algorithm for classification of network data. In this setting, training instances are connected by a set of links to form a network, the labels of linked nodes are correlated, and the goal is to exploit these dependencies and accurately label the nodes. This problem arises in many domains, including social and biological network analysis and document classification, and there has been much recent interest in methods that collectively classify the nodes in the network. While in many cases labeled examples are expensive, often network information is available. We show how an active learning algorithm can take advantage of network structure. Our algorithm effectively exploits the links between instances and the interaction between the local and collective aspects of a classifier to improve the accuracy of learning from fewer labeled examples. We experiment with two real-world benchmark collective classification domains, and show that we are able to achieve extremely accurate results even when only a small fraction of the data is labeled.",
"The problems of object classification (labeling the nodes of a graph) and link prediction (predicting the links in a graph) have been largely studied independently. Commonly, object classification is performed assuming a complete set of known links and link prediction is done assuming a fully observed set of node attributes. In most real world domains, however, attributes and links are often missing or incorrect. Object classification is not provided with all the links relevant to correct classification and link prediction is not provided all the labels needed for accurate link prediction. In this paper, we propose an approach that addresses these two problems by interleaving object classification and link prediction in a collective algorithm. We investigate empirically the conditions under which an integrated approach to object classification and link prediction improves performance, and find that performance improves over a wide range of network types, and algorithm settings."
]
}
|
1403.2433
|
2166158995
|
Mixability of a loss is known to characterise when constant regret bounds are achievable in games of prediction with expert advice through the use of the aggregating algorithm [Vovk, 2001]. We provide a new interpretation of mixability via convex analysis that highlights the role of the Kullback-Leibler divergence in its definition. This naturally generalises to what we call Φ-mixability where the Bregman divergence D_Φ replaces the KL divergence. We prove that losses that are Φ-mixable also enjoy constant regret bounds via a generalised aggregating algorithm that is similar to mirror descent.
|
The starting point for mixability and the aggregating algorithm is the work of . The general setting of prediction with expert advice is summarised in [Chapters 2 and 3] Cesa-Bianchi:2006 . There one can find a range of results that study different aggregation schemes and different assumptions on the losses (exp-concave, mixable). Variants of the aggregating algorithm have been studied for classically mixable losses, with a tradeoff between tightness of the bound (in a constant factor) and the computational complexity. Weakly mixable losses are a generalisation of mixable losses. They have been studied in @cite_1 where it is shown there exists a variant of the aggregating algorithm that achieves regret @math for some constant @math . [in .2] Vovk:2001 makes the observation that his Aggregating Algorithm reduces to Bayesian mixtures in the case of the log loss game. See also the discussion in [page 330] Cesa-Bianchi:2006 relating certain aggregation schemes to Bayesian updating.
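The weight-update core of the aggregating algorithm can be sketched as exponential weighting of cumulative losses; the losses and learning rate below are illustrative, and the loss-specific step that turns the mixture into a single prediction is omitted.

```python
import math

def aggregating_weights(expert_losses, eta=1.0):
    """Exponential-weights core of the Aggregating Algorithm: expert i
    carries weight proportional to exp(-eta * L_i) after suffering
    cumulative loss L_i.  With eta = 1 and log loss this is exactly
    Bayesian mixture updating, as Vovk observes.
    """
    w = [math.exp(-eta * loss) for loss in expert_losses]
    z = sum(w)
    return [x / z for x in w]

# Illustrative cumulative losses for three experts; the smallest loss
# receives the largest weight.
print(aggregating_weights([0.5, 2.0, 1.0]))
```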
|
{
"cite_N": [
"@cite_1"
],
"mid": [
"2059590802"
],
"abstract": [
"This paper resolves the problem of predicting as well as the best expert up to an additive term of the order o(n), where n is the length of a sequence of letters from a finite alphabet. We call the games that permit this weakly mixable and give a geometrical characterisation of the class of weakly mixable games. Weak mixability turns out to be equivalent to convexity of the finite part of the set of superpredictions. For bounded games we introduce the Weak Aggregating Algorithm that allows us to obtain additive terms of the form C√n."
]
}
|
1403.2433
|
2166158995
|
Mixability of a loss is known to characterise when constant regret bounds are achievable in games of prediction with expert advice through the use of the aggregating algorithm [Vovk, 2001]. We provide a new interpretation of mixability via convex analysis that highlights the role of the Kullback-Leibler divergence in its definition. This naturally generalises to what we call Φ-mixability where the Bregman divergence D_Φ replaces the KL divergence. We prove that losses that are Φ-mixable also enjoy constant regret bounds via a generalised aggregating algorithm that is similar to mirror descent.
|
The analysis of mirror descent by @cite_2 shows that it achieves constant regret when the entropic regulariser is used. However, they do not consider whether similar results extend to other entropies defined on the simplex.
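The entropic mirror descent update on the simplex has a closed form (exponentiated gradient); in this sketch the objective, starting point, and step size are illustrative, not taken from @cite_2.

```python
import math

def entropic_md_step(x, grad, step):
    """One entropic mirror descent step on the probability simplex:
    x_i <- x_i * exp(-step * grad_i), renormalised.  With the negative
    entropy as the distance-generating function the mirror update has
    this closed (exponentiated-gradient) form.
    """
    y = [xi * math.exp(-step * gi) for xi, gi in zip(x, grad)]
    z = sum(y)
    return [yi / z for yi in y]

# Minimise sum(x_i^2) over the simplex (gradient 2x); the optimum is
# the uniform distribution.
x = [0.7, 0.2, 0.1]
for _ in range(50):
    x = entropic_md_step(x, [2 * xi for xi in x], 0.5)
print([round(xi, 3) for xi in x])  # → [0.333, 0.333, 0.333]
```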
|
{
"cite_N": [
"@cite_2"
],
"mid": [
"2016384870"
],
"abstract": [
"The mirror descent algorithm (MDA) was introduced by Nemirovsky and Yudin for solving convex optimization problems. This method exhibits an efficiency estimate that is mildly dependent in the decision variables dimension, and thus suitable for solving very large scale optimization problems. We present a new derivation and analysis of this algorithm. We show that the MDA can be viewed as a nonlinear projected-subgradient type method, derived from using a general distance-like function instead of the usual Euclidean squared distance. Within this interpretation, we derive in a simple way convergence and efficiency estimates. We then propose an Entropic mirror descent algorithm for convex minimization over the unit simplex, with a global efficiency estimate proven to be mildly dependent in the dimension of the problem."
]
}
|
1403.1349
|
2133510715
|
Accurately segmenting a citation string into fields for authors, titles, etc. is a challenging task because the output typically obeys various global constraints. Previous work has shown that modeling soft constraints, where the model is encouraged, but not required, to obey the constraints, can substantially improve segmentation performance. On the other hand, for imposing hard constraints, dual decomposition is a popular technique for efficient prediction given existing algorithms for unconstrained inference. We extend the technique to perform prediction subject to soft constraints. Moreover, with a technique for performing inference given soft constraints, it is easy to automatically generate large families of constraints and learn their costs with a simple convex optimization problem during training. This allows us to obtain substantial gains in accuracy on a new, challenging citation extraction dataset.
|
There are multiple previous examples of augmenting chain-structured sequence models with terms capturing global relationships by expanding the chain to a more complex graphical model with non-local dependencies between the outputs. Inference in these models can be performed, for example, with loopy belief propagation @cite_4 @cite_9 or Gibbs sampling @cite_7 . Belief propagation is prohibitively expensive in our model due to the high cardinalities of the output variables and of the global factors, which involve all output variables simultaneously. There are various methods for exploiting the combinatorial structure of these factors, but inference would still have higher complexity than our method. While Gibbs sampling has been shown to work well on tasks such as named entity recognition @cite_7 , our previous experiments show that it does not work well for citation extraction, where it found only low-quality solutions in practice because the sampling did not mix well, even on a simple chain-structured CRF.
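For reference, a minimal Gibbs sweep over a chain-structured labelling, the style of sampler whose poor mixing is reported above; the potentials and temperature below are hypothetical, not the models of @cite_7.

```python
import math
import random

def gibbs_sweep(labels, unary, pairwise, temp=1.0, rng=random):
    """One Gibbs sweep over a chain labelling: resample each position
    from its conditional given the current labels of its neighbours.
    unary[i][y] scores label y at position i; pairwise[y][y2] scores
    adjacent label pairs.
    """
    n, k = len(labels), len(unary[0])
    for i in range(n):
        scores = []
        for y in range(k):
            s = unary[i][y]
            if i > 0:
                s += pairwise[labels[i - 1]][y]
            if i < n - 1:
                s += pairwise[y][labels[i + 1]]
            scores.append(math.exp(s / temp))
        z = sum(scores)
        r, acc = rng.random() * z, 0.0
        for y in range(k):
            acc += scores[y]
            if r <= acc:
                labels[i] = y
                break
    return labels

# With sharply peaked unary scores a single sweep recovers the argmax.
unary = [[0.0, 5.0], [5.0, 0.0], [0.0, 5.0]]
pairwise = [[0.0, 0.0], [0.0, 0.0]]
print(gibbs_sweep([0, 0, 0], unary, pairwise, temp=0.1, rng=random.Random(0)))  # → [1, 0, 1]
```

With strong non-local couplings in place of the flat pairwise table, successive sweeps change few positions per pass, which is the mixing problem noted in the text.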
|
{
"cite_N": [
"@cite_9",
"@cite_4",
"@cite_7"
],
"mid": [
"",
"2129712609",
"2096765155"
],
"abstract": [
"",
"Most information extraction (IE) systems treat separate potential extractions as independent. However, in many cases, considering influences between different potential extractions could improve overall accuracy. Statistical methods based on undirected graphical models, such as conditional random fields (CRFs), have been shown to be an effective approach to learning accurate IE systems. We present a new IE method that employs Relational Markov Networks (a generalization of CRFs), which can represent arbitrary dependencies between extractions. This allows for \"collective information extraction\" that exploits the mutual influence between possible extractions. Experiments on learning to extract protein names from biomedical text demonstrate the advantages of this approach.",
"Most current statistical natural language processing models use only local features so as to permit dynamic programming in inference, but this makes them unable to fully account for the long distance structure that is prevalent in language use. We show how to solve this dilemma with Gibbs sampling, a simple Monte Carlo method used to perform approximate inference in factored probabilistic models. By using simulated annealing in place of Viterbi decoding in sequence models such as HMMs, CMMs, and CRFs, it is possible to incorporate non-local structure while preserving tractable inference. We use this technique to augment an existing CRF-based information extraction system with long-distance dependency models, enforcing label consistency and extraction template consistency constraints. This technique results in an error reduction of up to 9 over state-of-the-art systems on two established information extraction tasks."
]
}
|
1403.1349
|
2133510715
|
Accurately segmenting a citation string into fields for authors, titles, etc. is a challenging task because the output typically obeys various global constraints. Previous work has shown that modeling soft constraints, where the model is encouraged, but not required, to obey the constraints, can substantially improve segmentation performance. On the other hand, for imposing hard constraints, dual decomposition is a popular technique for efficient prediction given existing algorithms for unconstrained inference. We extend the technique to perform prediction subject to soft constraints. Moreover, with a technique for performing inference given soft constraints, it is easy to automatically generate large families of constraints and learn their costs with a simple convex optimization problem during training. This allows us to obtain substantial gains in accuracy on a new, challenging citation extraction dataset.
|
Recently, dual decomposition has become a popular method for solving complex structured prediction problems in NLP @cite_10 @cite_17 @cite_3 @cite_16 @cite_8 . Soft constraints can be implemented inefficiently using hard constraints and dual decomposition, by introducing copies of output variables and an auxiliary graphical model, as in . However, at every iteration of dual decomposition, MAP must be run in this auxiliary model. Furthermore, the copying of variables doubles the number of iterations needed for information to flow between output variables, and thus slows convergence. On the other hand, our approach to soft constraints has identical per-iteration complexity as for hard constraints, and is a very easy modification to existing hard constraint code.
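A toy sketch of the soft-constraint modification: for a penalty-capped (soft) agreement constraint, the projected-subgradient update simply clips the dual variable to a box, leaving the per-iteration cost unchanged. The two-model agreement problem below is hypothetical and much simpler than the citation models in the text.

```python
def dual_decomposition(score_a, score_b, penalty, n_iter=100, step=0.5):
    """Dual decomposition for two binary subproblems that should agree
    on one shared variable.  Hard agreement lets the dual variable
    range over all of R; a soft constraint with violation cost
    `penalty` simply clips it to [-penalty, penalty] -- the one-line
    modification described above.
    """
    lam = 0.0
    for _ in range(n_iter):
        ya = max(range(2), key=lambda y: score_a[y] + lam * y)
        yb = max(range(2), key=lambda y: score_b[y] - lam * y)
        if ya == yb:
            break
        lam -= step * (ya - yb)                 # subgradient step
        lam = max(-penalty, min(penalty, lam))  # soft-constraint clip
    return ya, yb, lam

# The two models disagree; a small penalty lets them stay apart,
# a large one forces agreement.
print(dual_decomposition({0: 0.0, 1: 2.0}, {0: 1.0, 1: 0.0}, penalty=0.4))  # → (1, 0, -0.4)
print(dual_decomposition({0: 0.0, 1: 2.0}, {0: 1.0, 1: 0.0}, penalty=5.0))  # → (1, 1, -1.5)
```

Clipping makes violating the constraint affordable when the models' preferences outweigh the penalty, which is exactly the behaviour soft constraints are meant to allow.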
|
{
"cite_N": [
"@cite_8",
"@cite_3",
"@cite_16",
"@cite_10",
"@cite_17"
],
"mid": [
"1508186727",
"2132678463",
"2098967804",
"1903393809",
"1812474519"
],
"abstract": [
"Named entity recognition (NER) is the task of segmenting and classifying occurrences of names in text. In NER, local contextual cues provide important evidence, but non-local information from the whole document could also prove useful: for example, it is useful to know that “Mary Kay Inc.” has been mentioned in a document to classify subsequent mentions of “Mary Kay” as an organization and not as a person. Previous works for NER typically model the problem as a sequence labeling problem, coupling the predictions of neighboring words with a Markov model such as conditional random fields. We propose applying the dual decomposition approach to combine a local sentential model and a non-local label consistency model for NER. The dual decomposition approach is a fusion approach which combines two models by constraining them to agree on their predictions on the test data. Empirically, we show that this approach outperforms the local sentential models on four out of five data sets.",
"Dual decomposition, and more generally Lagrangian relaxation, is a classical method for combinatorial optimization; it has recently been applied to several inference problems in natural language processing (NLP). This tutorial gives an overview of the technique. We describe example algorithms, describe formal guarantees for the method, and describe practical issues in implementing the algorithms. While our examples are predominantly drawn from the NLP literature, the material should be of general relevance to inference problems in machine learning. A central theme of this tutorial is that Lagrangian relaxation is naturally applied in conjunction with a broad class of combinatorial algorithms, allowing inference in models that go significantly beyond previous work on Lagrangian relaxation for inference in graphical models.",
"We propose an algorithm to find the best path through an intersection of arbitrarily many weighted automata, without actually performing the intersection. The algorithm is based on dual decomposition: the automata attempt to agree on a string by communicating about features of the string. We demonstrate the algorithm on the Steiner consensus string problem, both on synthetic data and on consensus decoding for speech recognition. This involves implicitly intersecting up to 100 automata.",
"This paper introduces algorithms for non-projective parsing based on dual decomposition. We focus on parsing algorithms for non-projective head automata, a generalization of head-automata models to non-projective structures. The dual decomposition algorithms are simple and efficient, relying on standard dynamic programming and minimum spanning tree algorithms. They provably solve an LP relaxation of the non-projective parsing problem. Empirically the LP relaxation is very often tight: for many languages, exact solutions are achieved on over 98% of test sentences. The accuracy of our models is higher than previous work on a broad range of datasets.",
"This paper introduces dual decomposition as a framework for deriving inference algorithms for NLP problems. The approach relies on standard dynamic-programming algorithms as oracle solvers for sub-problems, together with a simple method for forcing agreement between the different oracles. The approach provably solves a linear programming (LP) relaxation of the global inference problem. It leads to algorithms that are simple, in that they use existing decoding algorithms; efficient, that they avoid exact algorithms for the full model; and often exact, in that empirically they often recover the correct solution in spite of using an LP relaxation. We give experimental results on two problems: 1) the combination of two lexicalized parsing models; and 2) the combination of a lexicalized parsing model and a trigram part-of-speech tagger."
]
}
|
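The dual decomposition technique discussed in the row above can be sketched minimally: two simple per-position models are forced to agree on a binary labeling via subgradient updates on Lagrange multipliers. The scores and the binary-label setup are illustrative assumptions, not taken from any cited paper.

```python
def decode(scores, u, sign):
    # Per-position argmax over binary labels of local score plus dual term.
    # scores[i] is a pair (score for label 0, score for label 1).
    return [max((0, 1), key=lambda y: scores[i][y] + sign * u[i] * y)
            for i in range(len(scores))]

def dual_decompose(scores_a, scores_b, iters=50, rate=0.5):
    # Lagrangian dual: model A maximizes f(y) + u*y, model B maximizes
    # g(z) - u*z; a subgradient step shrinks their disagreement each round.
    n = len(scores_a)
    u = [0.0] * n                     # one multiplier per position
    ya = decode(scores_a, u, +1)
    for _ in range(iters):
        ya = decode(scores_a, u, +1)
        yb = decode(scores_b, u, -1)
        if ya == yb:                  # agreement certifies an exact joint argmax
            return ya
        u = [u[i] - rate * (ya[i] - yb[i]) for i in range(n)]
    return ya                         # fall back to model A's decoding
```

When the two decodings agree, the returned labeling provably maximizes the combined objective, which is the appeal of the method described above.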
1403.1349
|
2133510715
|
Accurately segmenting a citation string into fields for authors, titles, etc. is a challenging task because the output typically obeys various global constraints. Previous work has shown that modeling soft constraints, where the model is encouraged, but not required, to obey the constraints, can substantially improve segmentation performance. On the other hand, for imposing hard constraints, dual decomposition is a popular technique for efficient prediction given existing algorithms for unconstrained inference. We extend the technique to perform prediction subject to soft constraints. Moreover, with a technique for performing inference given soft constraints, it is easy to automatically generate large families of constraints and learn their costs with a simple convex optimization problem during training. This allows us to obtain substantial gains in accuracy on a new, challenging citation extraction dataset.
|
Initial work in machine learning for citation extraction used Markov models with no global constraints. Hidden Markov models (HMMs) were originally employed for automatically extracting information from research papers on the CORA dataset @cite_20 @cite_12 . Later, CRFs were shown to perform better on CORA, improving the HMM's token-level F1 of 86.6 to 91.5 @cite_14 .
|
{
"cite_N": [
"@cite_14",
"@cite_12",
"@cite_20"
],
"mid": [
"1534730506",
"1995577199",
"1568339100"
],
"abstract": [
"With the increasing use of research paper search engines, such as CiteSeer, for both literature search and hiring decisions, the accuracy of such systems is of paramount importance. This paper employs Conditional Random Fields (CRFs) for the task of extracting various common fields from the headers and citation of research papers. The basic theory of CRFs is becoming well-understood, but best-practices for applying them to real-world data requires additional exploration. This paper makes an empirical exploration of several factors, including variations on Gaussian, exponential and hyperbolic-L1 priors for improved regularization, and several classes of features and Markov order. On a standard benchmark data set, we achieve new state-of-the-art performance, reducing error in average F1 by 36%, and word error rate by 78% in comparison with the previous best SVM results. Accuracy compares even more favorably against HMMs.",
"This paper describes a simple method for extracting metadata fields from citations using hidden Markov models. The method is easy to implement and can achieve levels of precision and recall for heterogeneous citations comparable to or greater than other HMM-based methods. The method consists largely of string manipulation and otherwise depends only on an implementation of the Viterbi algorithm, which is widely available, and so can be implemented by diverse digital library systems.",
"Statistical machine learning techniques, while well proven in fields such as speech recognition, are just beginning to be applied to the information extraction domain. We explore the use of hidden Markov models for information extraction tasks, specifically focusing on how to learn model structure from data and how to make the best use of labeled and unlabeled data. We show that a manually-constructed model that contains multiple states per extraction field outperforms a model with one state per field, and discuss strategies for learning the model structure automatically from data. We also demonstrate that the use of distantly-labeled data to set model parameters provides a significant improvement in extraction accuracy. Our models are applied to the task of extracting important fields from the headers of computer science research papers, and achieve an extraction accuracy of 92.9%."
]
}
|
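The HMM and CRF segmenters compared in the row above all decode with the Viterbi algorithm. A toy first-order version is sketched below; the label names and scores are invented for illustration and do not come from any cited system.

```python
def viterbi(emit, trans, labels):
    # emit[t][y]: local score of label y at position t;
    # trans[(p, y)]: score of transitioning from label p to label y.
    n = len(emit)
    best = [{y: emit[0][y] for y in labels}]   # best score ending in y at t
    back = []                                   # backpointers per position
    for t in range(1, n):
        scores, ptrs = {}, {}
        for y in labels:
            prev, s = max(((p, best[-1][p] + trans[(p, y)]) for p in labels),
                          key=lambda pair: pair[1])
            scores[y] = s + emit[t][y]
            ptrs[y] = prev
        best.append(scores)
        back.append(ptrs)
    # Follow backpointers from the best final label.
    y = max(labels, key=lambda lab: best[-1][lab])
    path = [y]
    for ptrs in reversed(back):
        y = ptrs[y]
        path.append(y)
    return list(reversed(path))
```

Replacing this exact decoding with beam search, or the emission scores with globally constrained ones, gives the variants discussed in the surrounding rows.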
1403.1349
|
2133510715
|
Accurately segmenting a citation string into fields for authors, titles, etc. is a challenging task because the output typically obeys various global constraints. Previous work has shown that modeling soft constraints, where the model is encouraged, but not required, to obey the constraints, can substantially improve segmentation performance. On the other hand, for imposing hard constraints, dual decomposition is a popular technique for efficient prediction given existing algorithms for unconstrained inference. We extend the technique to perform prediction subject to soft constraints. Moreover, with a technique for performing inference given soft constraints, it is easy to automatically generate large families of constraints and learn their costs with a simple convex optimization problem during training. This allows us to obtain substantial gains in accuracy on a new, challenging citation extraction dataset.
|
Recent work on globally-constrained inference in citation extraction used an HMM @math , which is an HMM with the addition of global features that are restricted to have positive weights @cite_6 . Approximate inference is performed using beam search. This method increased the HMM token-level accuracy from 86.69% to 93.92% on a test set of 100 citations from the CORA dataset. The global constraints added into the model are simply that each label only occurs once per citation. This approach is limited in its use of an HMM as an underlying model, as it has been shown that CRFs perform significantly better, achieving 95.37% token-level accuracy on CORA @cite_14 . In our experiments, we demonstrate that the specific global constraints used by help on the UMass dataset as well.
|
{
"cite_N": [
"@cite_14",
"@cite_6"
],
"mid": [
"1534730506",
"2119295783"
],
"abstract": [
"With the increasing use of research paper search engines, such as CiteSeer, for both literature search and hiring decisions, the accuracy of such systems is of paramount importance. This paper employs Conditional Random Fields (CRFs) for the task of extracting various common fields from the headers and citation of research papers. The basic theory of CRFs is becoming well-understood, but best-practices for applying them to real-world data requires additional exploration. This paper makes an empirical exploration of several factors, including variations on Gaussian, exponential and hyperbolic-L1 priors for improved regularization, and several classes of features and Markov order. On a standard benchmark data set, we achieve new state-of-the-art performance, reducing error in average F1 by 36%, and word error rate by 78% in comparison with the previous best SVM results. Accuracy compares even more favorably against HMMs.",
"Making complex decisions in real world problems often involves assigning values to sets of interdependent variables where an expressive dependency structure among these can influence, or even dictate, what assignments are possible. Commonly used models typically ignore expressive dependencies since the traditional way of incorporating non-local dependencies is inefficient and hence leads to expensive training and inference. The contribution of this paper is two-fold. First, this paper presents Constrained Conditional Models (CCMs), a framework that augments linear models with declarative constraints as a way to support decisions in an expressive output space while maintaining modularity and tractability of training. The paper develops, analyzes and compares novel algorithms for CCMs based on Hidden Markov Models and Structured Perceptron. The proposed CCM framework is also compared to task-tailored models, such as semi-CRFs. Second, we propose CoDL, a constraint-driven learning algorithm, which makes use of constraints to guide semi-supervised learning. We provide theoretical justification for CoDL along with empirical results which show the advantage of using declarative constraints in the context of semi-supervised training of probabilistic models."
]
}
|
1403.1631
|
2950774332
|
Recent works have shown promise in using microarchitectural execution patterns to detect malware programs. These detectors belong to a class of detectors known as signature-based detectors as they catch malware by comparing a program's execution pattern (signature) to execution patterns of known malware programs. In this work, we propose a new class of detectors - anomaly-based hardware malware detectors - that do not require signatures for malware detection, and thus can catch a wider range of malware including potentially novel ones. We use unsupervised machine learning to build profiles of normal program execution based on data from performance counters, and use these profiles to detect significant deviations in program behavior that occur as a result of malware exploitation. We show that real-world exploitation of popular programs such as IE and Adobe PDF Reader on a Windows x86 platform can be detected with nearly perfect certainty. We also examine the limits and challenges in implementing this approach in face of a sophisticated adversary attempting to evade anomaly-based detection. The proposed detector is complementary to previously proposed signature-based detectors and can be used together to improve security.
|
Besides the HPCs, several works have leveraged other hardware facilities on modern processors to monitor branch addresses efficiently to thwart classes of exploitation techniques. kBouncer uses the Last Branch Recording (LBR) facility to monitor the runtime behavior of indirect branch instructions during the invocation of Windows APIs for the prevention of ROP exploits @cite_23 . To enforce control flow integrity, CFIMon @cite_5 and Eunomia @cite_2 leverage the Branch Trace Store (BTS) to obtain branch source and target addresses to check for unseen pairs against a pre-identified database of legitimate branch pairs. Unlike our approach to detecting malware, these works are designed to prevent exploitation in the first place, and are orthogonal to our anomaly detection approach.
|
{
"cite_N": [
"@cite_5",
"@cite_23",
"@cite_2"
],
"mid": [
"2171929398",
"70478248",
"2087300543"
],
"abstract": [
"Many classic and emerging security attacks usually introduce illegal control flow to victim programs. This paper proposes an approach to detecting violation of control flow integrity based on hardware support for performance monitoring in modern processors. The key observation is that the abnormal control flow in security breaches can be precisely captured by performance monitoring units. Based on this observation, we design and implement a system called CFIMon, which is the first non-intrusive system that can detect and reason about a variety of attacks violating control flow integrity without any changes to applications (either source or binary code) or requiring special-purpose hardware. CFIMon combines static analysis and runtime training to collect legal control flow transfers, and leverages the branch tracing store mechanism in commodity processors to collect and analyze runtime traces on-the-fly to detect violation of control flow integrity. Security evaluation shows that CFIMon has low false positives or false negatives when detecting several realistic security attacks. Performance results show that CFIMon incurs only 6.1% performance overhead on average for a set of typical server applications.",
"Return-oriented programming (ROP) has become the primary exploitation technique for system compromise in the presence of non-executable page protections. ROP exploits are facilitated mainly by the lack of complete address space randomization coverage or the presence of memory disclosure vulnerabilities, necessitating additional ROP-specific mitigations. In this paper we present a practical runtime ROP exploit prevention technique for the protection of third-party applications. Our approach is based on the detection of abnormal control transfers that take place during ROP code execution. This is achieved using hardware features of commodity processors, which incur negligible runtime overhead and allow for completely transparent operation without requiring any modifications to the protected applications. Our implementation for Windows 7, named kBouncer, can be selectively enabled for installed programs in the same fashion as user-friendly mitigation toolkits like Microsoft's EMET. The results of our evaluation demonstrate that kBouncer has low runtime overhead of up to 4%, when stressed with specially crafted workloads that continuously trigger its core detection component, while it has negligible overhead for actual user applications. In our experiments with in-the-wild ROP exploits, kBouncer successfully protected all tested applications, including Internet Explorer, Adobe Flash Player, and Adobe Reader.",
"This paper considers and validates the applicability of leveraging pervasively-available performance counters for detecting and reasoning about security breaches. Our key observation is that many security breaches, which typically cause abnormal control flow, usually incur precisely identifiable deviation in performance samples captured by processors. Based on this observation, we implement a prototype system called Eunomia, which is the first non-intrusive system that can detect emerging attacks based on return-oriented programming without any changes to applications (either source or binary code) or special-purpose hardware. Our security evaluation shows that Eunomia can detect some realistic attacks including code-injection attacks, return-to-libc attacks and return-oriented programming attacks on unmodified binaries with relatively low overhead."
]
}
|
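The anomaly-based detection described in the row above builds baseline profiles of normal execution from performance-counter samples and flags large deviations. A toy sketch of that idea follows; the counter values, feature layout, and z-score threshold are invented for illustration and are not from the paper.

```python
import statistics

def build_profile(samples):
    # One (mean, stdev) pair per counter, estimated from clean training runs.
    cols = list(zip(*samples))
    return [(statistics.fmean(c), statistics.stdev(c)) for c in cols]

def is_anomalous(profile, sample, threshold=3.0):
    # Flag the sample if any counter deviates more than `threshold` stdevs
    # from its baseline mean.
    return any(sigma > 0 and abs(x - mu) / sigma > threshold
               for (mu, sigma), x in zip(profile, sample))

# Hypothetical counter vectors, e.g. (branch instructions, cache misses).
normal = [[100, 5], [110, 6], [95, 4], [105, 5]]
profile = build_profile(normal)
```

A real detector would use richer models than per-feature z-scores, but the profile-then-deviate structure is the same.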
1403.1364
|
2950957982
|
In this paper we study the structure of suffix trees. Given an unlabeled tree @math on @math nodes and suffix links of its internal nodes, we ask the question "Is @math a suffix tree?", i.e., is there a string @math whose suffix tree has the same topological structure as @math ? We place no restrictions on @math , in particular we do not require that @math ends with a unique symbol. This corresponds to considering the more general definition of implicit or extended suffix trees. Such general suffix trees have many applications and are for example needed to allow efficient updates when suffix trees are built online. We prove that @math is a suffix tree if and only if it is realized by a string @math of length @math , and we give a linear-time algorithm for inferring @math when the first letter on each edge is known. This generalizes the work of [Discrete Appl. Math. 163, 2014].
|
The problem of revealing structural properties and exploiting them to recover a string realizing a data structure has received a lot of attention in the literature. Besides @math -suffix trees, the problem has been considered for border arrays @cite_19 @cite_12 , parameterized border arrays @cite_8 @cite_0 @cite_9 , suffix arrays @cite_13 @cite_14 @cite_4 , KMP failure tables @cite_2 @cite_16 , prefix tables @cite_17 , cover arrays @cite_1 , directed acyclic word graphs @cite_13 , and directed acyclic subsequence graphs @cite_13 .
|
{
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_8",
"@cite_9",
"@cite_1",
"@cite_0",
"@cite_19",
"@cite_2",
"@cite_16",
"@cite_13",
"@cite_12",
"@cite_17"
],
"mid": [
"",
"",
"1589909894",
"2036743346",
"1834231399",
"1587287256",
"160379369",
"2162463605",
"1999151208",
"1874135322",
"56155448",
""
],
"abstract": [
"",
"",
"The parameterized pattern matching problem is a kind of pattern matching problem, where a pattern is considered to occur in a text when there exists a renaming bijection on the alphabet with which the pattern can be transformed into a substring of the text. A parameterized border array (p-border array) is an analogue of a border array of a standard string, which is also known as the failure function of the Morris-Pratt pattern matching algorithm. In this paper we present a linear time algorithm to verify if a given integer array is a valid p-border array for a binary alphabet. We also show a linear time algorithm to compute all binary parameterized strings sharing a given p-border array. In addition, we give an algorithm which computes all p-border arrays of length at most n, where n is a given threshold. This algorithm runs in time linear in the number of output p-border arrays.",
"The parameterized pattern matching problem is to check if there exists a renaming bijection on the alphabet with which a given pattern can be transformed into a substring of a given text. A parameterized border array (p-border array) is a parameterized version of a standard border array, and we can efficiently solve the parameterized pattern matching problem using p-border arrays. In this paper, we present a linear time algorithm to verify if a given integer array is a valid p-border array for a binary alphabet. We also show a linear time algorithm to compute all binary parameterized strings sharing a given p-border array. In addition, we give an algorithm which computes all p-border arrays of length at most n, where n is a given threshold. This algorithm runs in O(B_2^n) time, where B_2^n is the number of all p-border arrays of length n for a binary parameter alphabet. The problems with a larger alphabet are much more difficult. Still, we present an O(n^1.5)-time O(n)-space algorithm to verify if a given integer array of length n is a valid p-border array for an unbounded alphabet. The best previously known solution to this task takes time proportional to the n-th Bell number B_n = (1/e) Σ_{k=0}^∞ k^n / k!, and hence our algorithm is much more efficient. Also, we show that it is possible to enumerate all p-border arrays of length at most n for an unbounded alphabet in O(B_n n^2.5) time, where B_n denotes the number of p-border arrays of length n.",
"A proper factor u of a string y is a cover of y if every letter of y is within some occurrence of u in y. The concept generalises the notion of periods of a string. An integer array C is the minimal-cover (resp. maximal-cover) array of y if C[i] is the minimal (resp. maximal) length of covers of y[0 . . i], or zero if no cover exists. In this paper, we present a constructive algorithm checking the validity of an array as a minimal-cover or maximal-cover array of some string. When the array is valid, the algorithm produces a string over an unbounded alphabet whose cover array is the input array. All algorithms run in linear time due to an interesting combinatorial property of cover arrays: the sum of important values in a cover array is bounded by twice the length of the string.",
"The parameterized pattern matching problem is to check if there exists a renaming bijection on the alphabet with which a given pattern can be transformed into a substring of a given text. A parameterized border array (p-border array) is a parameterized version of a standard border array, and we can efficiently solve the parameterized pattern matching problem using p-border arrays. In this paper we present an O(n^1.5)-time O(n)-space algorithm to verify if a given integer array of length n is a valid p-border array for an unbounded alphabet. The best previously known solution takes time proportional to the n-th Bell number (1/e) Σ_{k=0}^∞ k^n / k!, and hence our algorithm is quite efficient.",
"A border of a string x is a proper (but possibly empty) prefix of x that is also a suffix of x. The border array β = β[1..n] of a string x = x[1..n] is an array of nonnegative integers in which each element β[i], 1 ≤ i ≤ n, is the length of the longest border of x[1..i]. In this paper we first present a simple linear-time algorithm to determine whether or not a given array y = y[1..n] of integers is a border array of some string on an alphabet of unbounded size. We state as an open problem the design of a corresponding and equally efficient algorithm on an alphabet of bounded size α. We then consider the problem of generating all possible distinct border arrays of given length n on a bounded or unbounded alphabet, and doing so in time proportional to the number of arrays generated. A previously published algorithm that claims to solve this problem in constant time per array generated is shown to be incorrect, and new algorithms are proposed. We state as open the design of an equally efficient on-line algorithm for this problem.",
"We present an on-line linear time and space algorithm to check if an integer array f is the border array of at least one string w built on a bounded or unbounded size alphabet Σ. First of all, we show a bijection between the border array of a string w and the skeleton of the DFA recognizing Σ*w, called a string matching automaton (SMA). Different strings can have the same border array but the originality of the presented method is that the correspondence between a border array and a skeleton of SMA is independent from the underlying strings. This makes it possible to design algorithms for validating and generating border arrays that outperform existing ones. The validating algorithm lowers the delay (maximal number of comparisons on one element of the array) from O(|w|) to 1 + min{|Σ|, 1 + log_2 |w|} compared to existing algorithms. We then give results on the numbers of distinct border arrays depending on the alphabet size. We also present an algorithm that checks if a given directed unlabeled graph G is the skeleton of a SMA on an alphabet of size s in linear time. Along the process the algorithm can build one string w for which G is the SMA skeleton.",
"Let @math denote the failure function of the Knuth-Morris-Pratt algorithm for a word w. In this paper we study the following problem: given an integer array @math , is there a word w over an arbitrary alphabet Σ such that @math for all i? Moreover, what is the minimum cardinality of Σ required? We give an elementary and self-contained @math time algorithm for this problem, thus improving the previously known solution ( in Conference in honor of Donald E. Knuth, 2007), which had no polynomial time bound. Using both deeper combinatorial insight into the structure of ?? and advanced algorithmic tools, we further improve the running time to @math .",
"This paper introduces a new problem of inferring strings from graphs, and inferring strings from arrays. Given a graph G or an array A, we infer a string that suits the graph, or the array, under some condition. Firstly, we solve the problem of finding a string w such that the directed acyclic subsequence graph (DASG) of w is isomorphic to a given graph G. Secondly, we consider directed acyclic word graphs (DAWGs) in terms of string inference. Finally, we consider the problem of finding a string w of a minimal size alphabet, such that the suffix array (SA) of w is identical to a given permutation p = p_1, ..., p_n of integers 1, ..., n. Each of our three algorithms solving the above problems runs in linear time with respect to the input size.",
"In this article we present an on-line linear time algorithm to check if an integer array f is a border array of some string x built on a bounded size alphabet, which is simpler than the one given in [2]. Furthermore, if f is a border array we are able to build, on-line and in linear time, a string x on a minimal size alphabet for which f is the border array. The reader can refer to the URL http://al.jalix.org/Baba/Applet to run the algorithm on his own examples.",
""
]
}
|
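The border arrays surveyed in the related work above are exactly the KMP failure function, and the validation problem asks whether an integer array arises from some string. A small sketch of both directions follows, using the classic linear construction and the standard failure-chain criterion for an unbounded alphabet; this is an illustrative sketch, not the algorithm of any one cited paper.

```python
def border_array(w):
    # Classic KMP-style computation: beta[i] is the length of the longest
    # proper border of the prefix w[:i+1].
    beta = [0] * len(w)
    k = 0
    for i in range(1, len(w)):
        while k > 0 and w[i] != w[k]:
            k = beta[k - 1]
        if w[i] == w[k]:
            k += 1
        beta[i] = k
    return beta

def is_valid_border_array(beta):
    # Over an unbounded alphabet, beta is realizable iff beta[0] == 0 and each
    # beta[i] is 0 (take a fresh letter) or b+1 for some b on the failure
    # chain of beta[i-1] (including beta[i-1] itself).
    if not beta:
        return True
    if beta[0] != 0:
        return False
    for i in range(1, len(beta)):
        candidates = set()
        b = beta[i - 1]
        while True:
            candidates.add(b + 1)
            if b == 0:
                break
            b = beta[b - 1]
        if beta[i] != 0 and beta[i] not in candidates:
            return False
    return True
```

Inferring an actual string realizing a valid array proceeds by the same chain walk, assigning a reused or fresh letter at each position.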
1403.1180
|
1764094450
|
Digital repositories, either digital preservation systems or archival systems, periodically check the integrity of stored objects to assure users of their correctness. To do so, prior solutions calculate integrity metadata and require the repository to store it alongside the actual data objects. This integrity metadata is essential for regularly verifying the correctness of the stored data objects. To safeguard and detect damage to this metadata, prior solutions rely on widely visible media, that is, unaffiliated third parties, to store and provide back digests of the metadata to verify it is intact. However, they do not address recovery of the integrity metadata in case of damage or attack by an adversary. In essence, they do not preserve this metadata. We introduce IntegrityCatalog, a system that collects all integrity-related metadata in a single component, and treats them as first-class objects, managing both their integrity and their preservation. We introduce a treap-based persistent authenticated dictionary managing arbitrary-length key value pairs, which we use to store all integrity metadata, accessible simply by object name. Additionally, IntegrityCatalog is a distributed system that includes a network protocol that manages both corruption detection and preservation of this metadata, using administrator-selected network peers with two possible roles. Verifiers store and offer attestations on digests and have minimal storage requirements, while preservers efficiently synchronize a complete copy of the catalog to assist in recovery in case of a detected catalog compromise on the local system. We describe our prototype implementation of IntegrityCatalog, measure its performance empirically, and demonstrate its effectiveness in real-world situations, with worst measured throughput of approximately 1K insertions per second, and 2K verified search operations per second.
|
Another approach is to calculate a checksum for an object and store it along with it. To achieve this, one can use error-detecting techniques @cite_33 , such as cyclic redundancy checks @cite_15 , for example the widely used CRC32 @cite_7 . Although they may be attractive for messages on communication channels and very fast to compute, they are not appropriate for long-term storage, as they do not provide strong pre-image resistance (an attacker can fairly easily calculate a second message with the same CRC as an existing one).
|
{
"cite_N": [
"@cite_15",
"@cite_33",
"@cite_7"
],
"mid": [
"1538576895",
"1980073965",
"1941880258"
],
"abstract": [
"",
"The author was led to the study given in this paper from a consideration of large scale computing machines in which a large number of operations must be performed without a single error in the end result. This problem of “doing things right” on a large scale is not essentially new; in a telephone central office, for example, a very large number of operations are performed while the errors leading to wrong numbers are kept well under control, though they have not been completely eliminated. This has been achieved, in part, through the use of self-checking circuits. The occasional failure that escapes routine checking is still detected by the customer and will, if it persists, result in customer complaint, while if it is transient it will produce only occasional wrong numbers. At the same time the rest of the central office functions satisfactorily. In a digital computer, on the other hand, a single failure usually means the complete failure, in the sense that if it is detected no more computing can be done until the failure is located and corrected, while if it escapes detection then it invalidates all subsequent operations of the machine. Put in other words, in a telephone central office there are a number of parallel paths which are more or less independent of each other; in a digital machine there is usually a single long path which passes through the same piece of equipment many, many times before the answer is obtained.",
"Standardized 32-bit cyclic redundancy codes provide fewer bits of guaranteed error detection than they could, achieving a Hamming Distance (HD) of only 4 for maximum-length Ethernet messages, whereas HD=6 is possible. Although research has revealed improved codes, exploring the entire design space has previously been computationally intractable, even for special-purpose hardware. Moreover, no CRC polynomial has yet been found that satisfies an emerging need to attain both HD=6 for 12K bit messages and HD=4 for message lengths beyond 64 Kbits. This paper presents results from the first exhaustive search of the 32-bit CRC design space. Results from previous research are validated and extended to include identifying all polynomials achieving a better HD than the IEEE 802.3 CRC-32 polynomial. A new class of polynomials is identified that provides HD=6 up to nearly 16K bit and HD=4 up to 114K bit message lengths, providing the best achievable design point that maximizes error detection for both legacy and new applications, including potentially iSCSI and application-implemented error checks."
]
}
|
1403.1180
|
1764094450
|
Digital repositories, either digital preservation systems or archival systems, periodically check the integrity of stored objects to assure users of their correctness. To do so, prior solutions calculate integrity metadata and require the repository to store it alongside the actual data objects. This integrity metadata is essential for regularly verifying the correctness of the stored data objects. To safeguard and detect damage to this metadata, prior solutions rely on widely visible media, that is unaffiliated third parties, to store and provide back digests of the metadata to verify it is intact. However, they do not address recovery of the integrity metadata in case of damage or attack by an adversary. In essence, they do not preserve this metadata. We introduce IntegrityCatalog, a system that collects all integrity related metadata in a single component, and treats them as first class objects, managing both their integrity and their preservation. We introduce a treap-based persistent authenticated dictionary managing arbitrary length key value pairs, which we use to store all integrity metadata, accessible simply by object name. Additionally, IntegrityCatalog is a distributed system that includes a network protocol that manages both corruption detection and preservation of this metadata, using administrator-selected network peers with two possible roles. Verifiers store and offer attestations on digests and have minimal storage requirements, while preservers efficiently synchronize a complete copy of the catalog to assist in recovery in case of a detected catalog compromise on the local system. We describe our prototype implementation of IntegrityCatalog, measure its performance empirically, and demonstrate its effectiveness in real-world situations, with worst measured throughput of approximately 1K insertions per second, and 2K verified search operations per second.
|
Another alternative for a "summary" function is algebraic formulas that produce signatures @cite_39 @cite_34 @cite_40. @cite_11 proposed their use for proof of remote data possession when the owner no longer holds the original data. Although potentially faster to calculate than cryptographic hash functions, they also lack the pre-image resistance properties required to be applicable to the long-term digital preservation and archival systems we target.
|
{
"cite_N": [
"@cite_40",
"@cite_34",
"@cite_11",
"@cite_39"
],
"mid": [
"1972418517",
"80254674",
"2107511818",
"2057439468"
],
"abstract": [
"We present randomized algorithms to solve the following string-matching problem and some of its generalizations: Given a string X of length n (the pattern) and a string Y (the text), find the first occurrence of X as a consecutive block within Y. The algorithms represent strings of length n by much shorter strings called fingerprints, and achieve their efficiency by manipulating fingerprints instead of longer strings. The algorithms require a constant number of storage locations, and essentially run in real time. They are conceptually simple and easy to implement. The method readily generalizes to higher-dimensional patternmatching problems.",
"",
"The emerging use of the Internet for remote storage and backup has led to the problem of verifying that storage sites in a distributed system indeed store the data; this must often be done in the absence of knowledge of what the data should be. We use m n erasure-correcting coding to safeguard the stored data and use algebraic signatures hash functions with algebraic properties for verification. Our scheme primarily utilizes one such algebraic property: taking a signature of parity gives the same result as taking the parity of the signatures. To make our scheme collusionresistant, we blind data and parity by XORing them with a pseudo-random stream. Our scheme has three advantages over existing techniques. First, it uses only small messages for verification, an attractive property in a P2P setting where the storing peers often only have a small upstream pipe. Second, it allows verification of challenges across random data without the need for the challenger to compare against the original data. Third, it is highly resistant to coordinated attempts to undetectably modify data. These signature techniques are very fast, running at tens to hundreds of megabytes per second. Because of these properties, the use of algebraic signatures will permit the construction of large-scale distributed storage systems in which large amounts of storage can be verified with minimal network bandwidth.",
"In a paper in the November 1970 Communications of the ACM, V.Y. Lum introduced a technique of file indexing named combined indices. This technique permitted decreased retrieval time at the cost of increased storage space. This paper examines combined indices under conditions of file usage with different fractions of retrieval and update. Tradeoff curves are developed to show minimal cost of file usage by grouping various partially combined indices."
]
}
|
1403.1180
|
1764094450
|
Digital repositories, either digital preservation systems or archival systems, periodically check the integrity of stored objects to assure users of their correctness. To do so, prior solutions calculate integrity metadata and require the repository to store it alongside the actual data objects. This integrity metadata is essential for regularly verifying the correctness of the stored data objects. To safeguard and detect damage to this metadata, prior solutions rely on widely visible media, that is unaffiliated third parties, to store and provide back digests of the metadata to verify it is intact. However, they do not address recovery of the integrity metadata in case of damage or attack by an adversary. In essence, they do not preserve this metadata. We introduce IntegrityCatalog, a system that collects all integrity related metadata in a single component, and treats them as first class objects, managing both their integrity and their preservation. We introduce a treap-based persistent authenticated dictionary managing arbitrary length key value pairs, which we use to store all integrity metadata, accessible simply by object name. Additionally, IntegrityCatalog is a distributed system that includes a network protocol that manages both corruption detection and preservation of this metadata, using administrator-selected network peers with two possible roles. Verifiers store and offer attestations on digests and have minimal storage requirements, while preservers efficiently synchronize a complete copy of the catalog to assist in recovery in case of a detected catalog compromise on the local system. We describe our prototype implementation of IntegrityCatalog, measure its performance empirically, and demonstrate its effectiveness in real-world situations, with worst measured throughput of approximately 1K insertions per second, and 2K verified search operations per second.
|
A different approach is demonstrated by the LOCKSS @cite_32 peer-to-peer digital preservation system, which assumes multiple nodes store the same object and uses them for verification (and also for damage repair). This is a valid assumption for this system as its purpose is to preserve academic journals, that is, widely visible data. Each node verifies the integrity of its stored data objects by initiating a poll per object in which other nodes participate. The participating nodes need to read and process the complete object to produce a digest for it. This results in heavy load for the system as a whole; our approach can complement the existing polling protocol by allowing each node to verify its own content. This can reduce the polling rate per object and thus reduce overall system load.
|
{
"cite_N": [
"@cite_32"
],
"mid": [
"2950945875"
],
"abstract": [
"The LOCKSS project has developed and deployed in a world-wide test a peer-to-peer system for preserving access to journals and other archival information published on the Web. It consists of a large number of independent, low-cost, persistent web caches that cooperate to detect and repair damage to their content by voting in opinion polls.'' Based on this experience, we present a design for and simulations of a novel protocol for voting in systems of this kind. It incorporates rate limitation and intrusion detection in new ways that ensure even an adversary capable of unlimited effort over decades has only a small probability of causing irrecoverable damage before being detected."
]
}
|
1403.1180
|
1764094450
|
Digital repositories, either digital preservation systems or archival systems, periodically check the integrity of stored objects to assure users of their correctness. To do so, prior solutions calculate integrity metadata and require the repository to store it alongside the actual data objects. This integrity metadata is essential for regularly verifying the correctness of the stored data objects. To safeguard and detect damage to this metadata, prior solutions rely on widely visible media, that is unaffiliated third parties, to store and provide back digests of the metadata to verify it is intact. However, they do not address recovery of the integrity metadata in case of damage or attack by an adversary. In essence, they do not preserve this metadata. We introduce IntegrityCatalog, a system that collects all integrity related metadata in a single component, and treats them as first class objects, managing both their integrity and their preservation. We introduce a treap-based persistent authenticated dictionary managing arbitrary length key value pairs, which we use to store all integrity metadata, accessible simply by object name. Additionally, IntegrityCatalog is a distributed system that includes a network protocol that manages both corruption detection and preservation of this metadata, using administrator-selected network peers with two possible roles. Verifiers store and offer attestations on digests and have minimal storage requirements, while preservers efficiently synchronize a complete copy of the catalog to assist in recovery in case of a detected catalog compromise on the local system. We describe our prototype implementation of IntegrityCatalog, measure its performance empirically, and demonstrate its effectiveness in real-world situations, with worst measured throughput of approximately 1K insertions per second, and 2K verified search operations per second.
|
Persistence was studied thoroughly in @cite_22. The node-copying technique was used in @cite_44 to introduce a persistent authenticated search tree based on a multi-key tree (B-Tree), which can be converted to a dictionary with some additional work. However, space requirements quickly become prohibitive, as any change in a snapshot requires complete B-Tree page copies. The Persistent Authenticated Dictionary (PAD) concept was introduced in @cite_26, using a persistent Red-Black tree and node copying. More recently, in @cite_24 @cite_17 Crosby and Wallach studied the subject of PADs and suggested different techniques to implement them, including the treap @cite_14 with its set-unique property, as well as a discussion of authenticator caching, which influenced our design decisions. However, this line of work never studied the impact of secondary memory on these data structures, as all experiments were done in main memory.
|
{
"cite_N": [
"@cite_26",
"@cite_14",
"@cite_22",
"@cite_44",
"@cite_24",
"@cite_17"
],
"mid": [
"1551041907",
"2099844038",
"2142947709",
"1532829354",
"1579315492",
"1517550320"
],
"abstract": [
"We introduce the notion of persistent authenticated dictionaries, that is, dictionaries where the user can make queries of the type \"was element e in set S at time t?\" and get authenticated answers. Applications include credential and certificate validation checking in the past (as in digital signatures for electronic contracts), digital receipts, and electronic tickets. We present two data structures that can efficiently support an infrastructure for persistent authenticated dictionaries, and we compare their performance.",
"",
"Abstract This paper is a study of persistence in data structures. Ordinary data structures are ephemeral in the sense that a change to the structure destroys the old version, leaving only the new version available for use. In contrast, a persistent structure allows access to any version, old or new, at any time. We develop simple, systematic, and efficient techniques for making linked data structures persistent. We use our techniques to devise persistent forms of binary search trees with logarithmic access, insertion, and deletion times and O (1) space bounds for insertion and deletion.",
"A secure timeline is a tamper-evident historic record of the states through which a system goes throughout its operational history. Secure timelines can help us reason about the temporal ordering of system states in a provable manner. We extend secure timelines to encompass multiple, mutually distrustful services, using timeline entanglement. Timeline entanglement associates disparate timelines maintained at independent systems, by linking undeniably the past of one timeline to the future of another. Timeline entanglement is a sound method to map a time step in the history of one service onto the timeline of another, and helps clients of entangled services to get persistent temporal proofs for services rendered that survive the demise or noncooperation of the originating service. In this paper we present the design and implementation of Timeweave, our service development framework for timeline entanglement based on two novel disk-based authenticated data structures. We evaluate Timeweave’s performance characteristics and show that it can be ecien tly deployed in a loosely-coupled distributed system of several hundred nodes with overhead of roughly 2-8 of the processing resources of a PC-grade system. 1",
"Many real-world applications run on untrusted servers or are run on servers that are subject to strong insider attacks. Although we cannot prevent an untrusted server from modifying or deleting data, with tamper-evident data structures, we can discover when this has occurred. If an untrusted server knows that a particular reply will not be checked for correctness, it is free to lie. Auditing for correctness is thus a frequent but overlooked operation. In my thesis, I present and evaluate new efficient data structures for tamper-evident logging and tamper-evident storage of changing data on untrusted servers, focussing on the costs of the entire system. The first data structure is a new tamper-evident log design. I propose new semantics of tamper-evident logs in terms of the auditing process, required to detect misbehavior. To accomplish efficient auditing, I describe and benchmark a new tree-based data structure that can generate such proofs with logarithmic size and space, significantly improving over previous linear constructions while also offering a flexible query mechanism with authenticated results. The remaining data structures are designs for a persistent authenticated dictionary (PAD) that allows users to send lookup requests to an untrusted server and get authenticated answers, signed by a trusted author, for both the current and historical versions of the dataset. Improving on prior constructions that require logarithmic storage and time, I present new classes of efficient PAD algorithms offering constant-sized authenticated answers or constant storage per update. I implement 21 different versions of PAD algorithms and perform a comprehensive evaluation using contemporary cloud-computing prices for computing and bandwidth to determine the most monetarily cost-effective designs.",
"Authenticated dictionaries allow users to send lookup requests to an untrusted server and get authenticated answers. Persistent authenticated dictionaries (PADs) add queries against historical versions. We consider a variety of different trust models for PADs and we present several extensions, including support for aggregation and a rich query language, as well as hiding information about the order in which PADs were constructed. We consider variations on treelike data structures as well as a design that improves efficiency by speculative future predictions. We improve on prior constructions and feature two designs that can authenticate historical queries with constant storage per update and several designs that can return constant-sized authentication results."
]
}
|
1403.1180
|
1764094450
|
Digital repositories, either digital preservation systems or archival systems, periodically check the integrity of stored objects to assure users of their correctness. To do so, prior solutions calculate integrity metadata and require the repository to store it alongside the actual data objects. This integrity metadata is essential for regularly verifying the correctness of the stored data objects. To safeguard and detect damage to this metadata, prior solutions rely on widely visible media, that is unaffiliated third parties, to store and provide back digests of the metadata to verify it is intact. However, they do not address recovery of the integrity metadata in case of damage or attack by an adversary. In essence, they do not preserve this metadata. We introduce IntegrityCatalog, a system that collects all integrity related metadata in a single component, and treats them as first class objects, managing both their integrity and their preservation. We introduce a treap-based persistent authenticated dictionary managing arbitrary length key value pairs, which we use to store all integrity metadata, accessible simply by object name. Additionally, IntegrityCatalog is a distributed system that includes a network protocol that manages both corruption detection and preservation of this metadata, using administrator-selected network peers with two possible roles. Verifiers store and offer attestations on digests and have minimal storage requirements, while preservers efficiently synchronize a complete copy of the catalog to assist in recovery in case of a detected catalog compromise on the local system. We describe our prototype implementation of IntegrityCatalog, measure its performance empirically, and demonstrate its effectiveness in real-world situations, with worst measured throughput of approximately 1K insertions per second, and 2K verified search operations per second.
|
Our system is complementary to digital preservation and distributed storage systems and may be used as a tool by each storage node in the system to proactively verify the integrity of its contents. As such, it is more suitable to systems where the complete file is stored in different nodes, such as LOCKSS @cite_32, FreeNet @cite_6, FarSite @cite_41, and Publius @cite_28. Systems that break files into pieces, such as Venti @cite_43, OceanStore @cite_3, CFS @cite_5, Pastiche @cite_2, Samsara @cite_19, GridSharing @cite_12, PASIS @cite_10, and Glacier @cite_21, can still use IntegrityCatalog to proactively validate the pieces they own.
|
{
"cite_N": [
"@cite_41",
"@cite_28",
"@cite_21",
"@cite_32",
"@cite_6",
"@cite_3",
"@cite_43",
"@cite_19",
"@cite_2",
"@cite_5",
"@cite_10",
"@cite_12"
],
"mid": [
"2121133177",
"2163598690",
"2116777751",
"2950945875",
"2174507869",
"2104210894",
"85380564",
"2148042433",
"1975868314",
"2150676586",
"2171337572",
"2126827550"
],
"abstract": [
"Farsite is a secure, scalable file system that logically functions as a centralized file server but is physically distributed among a set of untrusted computers. Farsite provides file availability and reliability through randomized replicated storage; it ensures the secrecy of file contents with cryptographic techniques; it maintains the integrity of file and directory data with a Byzantine-fault-tolerant protocol; it is designed to be scalable by using a distributed hint mechanism and delegation certificates for pathname translations; and it achieves good performance by locally caching file data, lazily propagating file updates, and varying the duration and granularity of content leases. We report on the design of Farsite and the lessons we have learned by implementing much of that design.",
"We describe a system that we have designed and implemented for publishing content on the web. Our publishing scheme has the property that it is very difficult for any adversary to censor or modify the content. In addition, the identity of the publisher is protected once the content is posted. Our system differs from others in that we provide tools for updating or deleting the published content, and users can browse the content in the normal point and click manner using a standard web browser and a client-side proxy that we provide. All of our code is freely available.",
"Decentralized storage systems aggregate the available disk space of participating computers to provide a large storage facility. These systems rely on data redundancy to ensure durable storage despite of node failures. However, existing systems either assume independent node failures, or they rely on introspection to carefully place redundant data on nodes with low expected failure correlation. Unfortunately, node failures are not independent in practice and constructing an accurate failure model is difficult in large-scale systems. At the same time, malicious worms that propagate through the Internet pose a real threat of large-scale correlated failures. Such rare but potentially catastrophic failures must be considered when attempting to provide highly durable storage. In this paper, we describe Glacier, a distributed storage system that relies on massive redundancy to mask the effect of large-scale correlated failures. Glacier is designed to aggressively minimize the cost of this redundancy in space and time: Erasure coding and garbage collection reduces the storage cost; aggregation of small objects and a loosely coupled maintenance protocol for redundant fragments minimizes the messaging cost. In one configuration, for instance, our system can provide six-nines durable storage despite correlated failures of up to 60 of the storage nodes, at the cost of an elevenfold storage overhead and an average messaging overhead of only 4 messages per node and minute during normal operation. Glacier is used as the storage layer for an experimental serverless email system.",
"The LOCKSS project has developed and deployed in a world-wide test a peer-to-peer system for preserving access to journals and other archival information published on the Web. It consists of a large number of independent, low-cost, persistent web caches that cooperate to detect and repair damage to their content by voting in opinion polls.'' Based on this experience, we present a design for and simulations of a novel protocol for voting in systems of this kind. It incorporates rate limitation and intrusion detection in new ways that ensure even an adversary capable of unlimited effort over decades has only a small probability of causing irrecoverable damage before being detected.",
"We describe Freenet, an adaptive peer-to-peer network application that permits the publication, replication, and retrieval of data while protecting the anonymity of both authors and readers. Freenet operates as a network of identical nodes that collectively pool their storage space to store data files and cooperate to route requests to the most likely physical location of data. No broadcast search or centralized location index is employed. Files are referred to in a location-independent manner, and are dynamically replicated in locations near requestors and deleted from locations where there is no interest. It is infeasible to discover the true origin or destination of a file passing through the network, and difficult for a node operator to determine or be held responsible for the actual physical contents of her own node.",
"OceanStore is a utility infrastructure designed to span the globe and provide continuous access to persistent information. Since this infrastructure is comprised of untrusted servers, data is protected through redundancy and cryptographic techniques. To improve performance, data is allowed to be cached anywhere, anytime. Additionally, monitoring of usage patterns allows adaptation to regional outages and denial of service attacks; monitoring also enhances performance through pro-active movement of data. A prototype implementation is currently under development.",
"",
"Peer-to-peer storage systems assume that their users consume resources in proportion to their contribution. Unfortunately, users are unlikely to do this without some enforcement mechanism. Prior solutions to this problem require centralized infrastructure, constraints on data placement, or ongoing administrative costs. All of these run counter to the design philosophy of peer-to-peer systems.Samsara enforces fairness in peer-to-peer storage systems without requiring trusted third parties, symmetric storage relationships, monetary payment, or certified identities. Each peer that requests storage of another must agree to hold a claim in return---a placeholder that accounts for available space. After an exchange, each partner checks the other to ensure faithfulness. Samsara punishes unresponsive nodes probabilistically. Because objects are replicated, nodes with transient failures are unlikely to suffer data loss, unlike those that are dishonest or chronically unavailable. Claim storage overhead can be reduced when necessary by forwarding among chains of nodes, and eliminated when cycles are created. Forwarding chains increase the risk of exposure to failure, but such risk is modest under reasonable assumptions of utilization and simultaneous, persistent failure.",
"Backup is cumbersome and expensive. Individual users almost never back up their data, and backup is a significant cost in large organizations. This paper presents Pastiche, a simple and inexpensive backup system. Pastiche exploits excess disk capacity to perform peer-to-peer backup with no administrative costs. Each node minimizes storage overhead by selecting peers that share a significant amount of data. It is easy for common installations to find suitable peers, and peers with high overlap can be identified with only hundreds of bytes. Pastiche provides mechanisms for confidentiality, integrity, and detection of failed or malicious peers. A Pastiche prototype suffers only 7.4 overhead for a modified Andrew Benchmark, and restore performance is comparable to cross-machine copy.",
"The Cooperative File System (CFS) is a new peer-to-peer read-only storage system that provides provable guarantees for the efficiency, robustness, and load-balance of file storage and retrieval. CFS does this with a completely decentralized architecture that can scale to large systems. CFS servers provide a distributed hash table (DHash) for block storage. CFS clients interpret DHash blocks as a file system. DHash distributes and caches blocks at a fine granularity to achieve load balance, uses replication for robustness, and decreases latency with server selection. DHash finds blocks using the Chord location protocol, which operates in time logarithmic in the number of servers.CFS is implemented using the SFS file system toolkit and runs on Linux, OpenBSD, and FreeBSD. Experience on a globally deployed prototype shows that CFS delivers data to clients as fast as FTP. Controlled tests show that CFS is scalable: with 4,096 servers, looking up a block of data involves contacting only seven servers. The tests also demonstrate nearly perfect robustness and unimpaired performance even when as many as half the servers fail.",
"This paper describes a decentralized consistency protocol for survivable storage that exploits local data versioning within each storage-node. Such versioning enables the protocol to efficiently provide linearizability and wait-freedom of read and write operations to erasure-coded data in asynchronous environments with Byzantine failures of clients and servers. By exploiting versioning storage-nodes, the protocol shifts most work to clients and allows highly optimistic operation: reads occur in a single round-trip unless clients observe concurrency or write failures. Measurements of a storage system prototype using this protocol show that it scales well with the number of failures tolerated, and its performance compares favorably with an efficient implementation of Byzantine-tolerant state machine replication.",
"We describe a novel approach for building a secure and fault tolerant data storage service in collaborative work environments, which uses perfect secret sharing schemes to store data. Perfect secret sharing schemes have found little use in managing generic data because of the high computation overheads incurred by such schemes. Our proposed approach uses a novel combination of XOR secret sharing and replication mechanisms, which drastically reduce the computation overheads and achieve speeds comparable to standard encryption schemes. The combination of secret sharing and replication manifests itself as an architectural framework, which has the attractive property that its dimension can be varied to exploit tradeoffs amongst different performance metrics. We evaluate the properties and performance of the proposed framework and show that the combination of perfect secret sharing and replication can be used to build efficient fault-tolerant and secure distributed data storage systems."
]
}
|
1403.1180
|
1764094450
|
Digital repositories, either digital preservation systems or archival systems, periodically check the integrity of stored objects to assure users of their correctness. To do so, prior solutions calculate integrity metadata and require the repository to store it alongside the actual data objects. This integrity metadata is essential for regularly verifying the correctness of the stored data objects. To safeguard and detect damage to this metadata, prior solutions rely on widely visible media, that is unaffiliated third parties, to store and provide back digests of the metadata to verify it is intact. However, they do not address recovery of the integrity metadata in case of damage or attack by an adversary. In essence, they do not preserve this metadata. We introduce IntegrityCatalog, a system that collects all integrity related metadata in a single component, and treats them as first class objects, managing both their integrity and their preservation. We introduce a treap-based persistent authenticated dictionary managing arbitrary length key value pairs, which we use to store all integrity metadata, accessible simply by object name. Additionally, IntegrityCatalog is a distributed system that includes a network protocol that manages both corruption detection and preservation of this metadata, using administrator-selected network peers with two possible roles. Verifiers store and offer attestations on digests and have minimal storage requirements, while preservers efficiently synchronize a complete copy of the catalog to assist in recovery in case of a detected catalog compromise on the local system. We describe our prototype implementation of IntegrityCatalog, measure its performance empirically, and demonstrate its effectiveness in real-world situations, with worst measured throughput of approximately 1K insertions per second, and 2K verified search operations per second.
|
Our solution is orthogonal to external audits by the owner of the objects @cite_27, or by an external auditor @cite_4.
|
{
"cite_N": [
"@cite_27",
"@cite_4"
],
"mid": [
"1536854407",
"2135568490"
],
"abstract": [
"We present a novel peer-to-peer backup technique that allows computers connected to the Internet to back up their data cooperatively: Each computer has a set of partner computers, which collectively hold its backup data. In return, it holds a part of each partner's backup data. By adding redundancy and distributing the backup data across many partners, a highly-reliable backup can be obtained in spite of the low reliability of the average Internet machine. Because our scheme requires cooperation, it is potentially vulnerable to several novel attacks involving free riding (e.g., holding a partner's data is costly, which tempts cheating) or disruption. We defend against these attacks using a number of new methods, including the use of periodic random challenges to ensure partners continue to hold data and the use of disk-space wasting to make cheating unprofitable. Results from an initial prototype show that our technique is feasible and very inexpensive: it appears to be one to two orders of magnitude cheaper than existing Internet backup services.",
"A growing number of online services, such as Google, Yahoo!, and Amazon, are starting to charge users for their storage. Customers often use these services to store valuable data such as email, family photos and videos, and disk backups. Today, a customer must entirely trust such external services to maintain the integrity of hosted data and return it intact. Unfortunately, no service is infallible. To make storage services accountable for data loss, we present protocols that allow a third-party auditor to periodically verify the data stored by a service and assist in returning the data intact to the customer. Most importantly, our protocols are privacy-preserving, in that they never reveal the data contents to the auditor. Our solution removes the burden of verification from the customer, alleviates both the customer’s and storage service’s fear of data leakage, and provides a method for independent arbitration of data retention contracts."
]
}
|
1403.1180
|
1764094450
|
Digital repositories, either digital preservation systems or archival systems, periodically check the integrity of stored objects to assure users of their correctness. To do so, prior solutions calculate integrity metadata and require the repository to store it alongside the actual data objects. This integrity metadata is essential for regularly verifying the correctness of the stored data objects. To safeguard and detect damage to this metadata, prior solutions rely on widely visible media, that is unaffiliated third parties, to store and provide back digests of the metadata to verify it is intact. However, they do not address recovery of the integrity metadata in case of damage or attack by an adversary. In essence, they do not preserve this metadata. We introduce IntegrityCatalog, a system that collects all integrity related metadata in a single component, and treats them as first class objects, managing both their integrity and their preservation. We introduce a treap-based persistent authenticated dictionary managing arbitrary length key value pairs, which we use to store all integrity metadata, accessible simply by object name. Additionally, IntegrityCatalog is a distributed system that includes a network protocol that manages both corruption detection and preservation of this metadata, using administrator-selected network peers with two possible roles. Verifiers store and offer attestations on digests and have minimal storage requirements, while preservers efficiently synchronize a complete copy of the catalog to assist in recovery in case of a detected catalog compromise on the local system. We describe our prototype implementation of IntegrityCatalog, measure its performance empirically, and demonstrate its effectiveness in real-world situations, with worst measured throughput of approximately 1K insertions per second, and 2K verified search operations per second.
|
Finally, Muniswamy- et al. propose that provenance information should be tracked by creating metadata objects and treating them as first-class objects @cite_36 ; our system is complementary to theirs, taking over management of these metadata objects and assuring their full preservation. The same can be said for other provenance tracking systems, such as @cite_29 .
|
{
"cite_N": [
"@cite_36",
"@cite_29"
],
"mid": [
"1883937078",
"1575826986"
],
"abstract": [
"The cloud is poised to become the next computing environment for both data storage and computation due to its pay-as-you-go and provision-as-you-go models. Cloud storage is already being used to back up desktop user data, host shared scientific data, store web application data, and to serve web pages. Today's cloud stores, however, are missing an important ingredient: provenance. Provenance is metadata that describes the history of an object. We make the case that provenance is crucial for data stored on the cloud and identify the properties of provenance that enable its utility. We then examine current cloud offerings and design and implement three protocols for maintaining data provenance in current cloud stores. The protocols represent different points in the design space and satisfy different subsets of the provenance properties. Our evaluation indicates that the overheads of all three protocols are comparable to each other and reasonable in absolute terms. Thus, one can select a protocol based upon the properties it provides without sacrificing performance. While it is feasible to provide provenance as a layer on top of today's cloud offerings, we conclude by presenting the case for incorporating provenance as a core cloud feature, discussing the issues in doing so.",
"As increasing amounts of valuable information are produced and persist digitally, the ability to determine the origin of data becomes important. In science, medicine, commerce, and government, data provenance tracking is essential for rights protection, regulatory compliance, management of intelligence and medical data, and authentication of information as it flows through workplace tasks. In this paper, we show how to provide strong integrity and confidentiality assurances for data provenance information. We describe our provenance-aware system prototype that implements provenance tracking of data writes at the application layer, which makes it extremely easy to deploy. We present empirical results that show that, for typical real-life workloads, the run-time overhead of our approach to recording provenance with confidentiality and integrity guarantees ranges from 1%-13%."
]
}
|
1403.0613
|
1993695231
|
Redundancy checking is an important task in the research of knowledge representation and reasoning. In this paper, we consider redundant qualitative constraints. For a set Γ of qualitative constraints, we say a constraint ( x R y ) in Γ is redundant if it is entailed by the rest of Γ. A prime subnetwork of Γ is a subset of Γ which contains no redundant constraints and has the same solution set as Γ. It is natural to ask how to compute such a prime subnetwork, and when it is unique. We show that this problem is in general intractable, but becomes tractable if Γ is over a tractable subalgebra S of a qualitative calculus. Furthermore, if S is a subalgebra of the Region Connection Calculus RCC8 in which weak composition distributes over nonempty intersections, then Γ has a unique prime subnetwork, which can be obtained in cubic time by removing all redundant constraints simultaneously from Γ. As a by-product, we show that any path-consistent network over such a distributive subalgebra is minimal and globally consistent in a qualitative sense. A thorough empirical analysis of the prime subnetwork upon real geographical data sets demonstrates the approach is able to identify significantly more redundant constraints than previously proposed algorithms, especially in constraint networks with larger proportions of partial overlap relations.
|
Redundancy checking is an important task in AI research, in particular in knowledge representation and reasoning. For example, Ginsberg @cite_30 and Schmolze and Snyder @cite_48 designed algorithms for checking redundancy of knowledge bases; Gottlob and Fermüller @cite_8 and Liberatore @cite_19 analysed the computational properties of removing redundancy from a clause and a CNF formula, respectively; and Grimm and Wissmann @cite_41 considered checking redundancy of ontologies.
|
{
"cite_N": [
"@cite_30",
"@cite_8",
"@cite_41",
"@cite_48",
"@cite_19"
],
"mid": [
"229019754",
"2014117644",
"1483476496",
"2001515283",
"2058793155"
],
"abstract": [
"This paper presents a new approach, called knowledge-base reduction, to the problem of checking knowledge bases for inconsistency and redundancy. The algorithm presented here makes use of concepts and techniques that have recently been advocated by de Kleer [deKleer, 1986] in conjunction with an assumption-based truth maintenance system. Knowledge-base reduction is more comprehensive than previous approaches to this problem in that it can in principle detect all potential contradictions and redundancies that exist in knowledge bases (having expressive power equivalent to propositional logic). While any approach that makes such a guarantee must be computationally intractable in the worst case, experience with KB-Reducer - a system that implements a specialized version of knowledge-base reduction and is described in this paper - has demonstrated that this technique is feasible and effective for fairly complex \"real world\" knowledge bases. Although KB-Reducer is currently intended for use by expert system developers, it is also a first step in the direction of providing safe \"local end-user modifiability\" for distant \"sites\" in a nationwide network of expert systems.",
"Abstract This paper deals with the problem of removing redundant literals from a given clause. We first consider condensing , a weak type of redundancy elimination. A clause is condensed if it does not subsume any proper subset of itself. It is often useful (and sometimes necessary) to replace a non-condensed clause C by a condensation, i.e., by a condensed subset of C which is subsumed by C . After studying the complexity of an existing clause condensing algorithm, we present a more efficient algorithm and provide arguments for the optimality of the new method. We prove that testing whether a given clause is condensed is co-NP-complete and show that several problems related to clause condensing belong to complexity classes that are, probably, slightly harder than NP. We also consider a stronger version of redundancy elimination: a clause C is strongly condensed iff it does not contain any proper subset C ′ such that C logically implies C ′. We show that the problem of testing whether a clause is strongly condensed is undecidable.",
"Ontologies may contain redundancy in terms of axioms that logically follow from other axioms and that could be removed for the sake of consolidation and conciseness without changing the overall meaning. In this paper, we investigate methods for removing such redundancy from ontologies. We define notions around redundancy and discuss typical cases of redundancy and their relation to ontology engineering and evolution. We provide methods to compute irredundant ontologies both indirectly by calculating justifications, and directly by utilising a hitting set tree algorithm and module extraction techniques for optimization. Moreover, we report on experimental results on removing redundancy from existing ontologies available on the Web.",
"We present a general method for detecting wide classes of redundant production rules (PRs) based on the term rewrite semantics. We present the semantic account, define rule execution over both ground memories and memory schemas, and define redundancy for the PRs. From those definitions, an algorithm is developed that detects wide classes of redundant rules, and which improves upon the previously published methods.",
"A knowledge base is redundant if it contains parts that can be inferred from the rest of it. We study some problems related to the redundancy of a CNF formula. In particular, any CNF formula can be made irredundant by deleting some of its clauses: what results is an irredundant equivalent subset. We study the complexity of problems related to irredundant equivalent subsets: verification, checking existence of an irredundant equivalent subset with a given size, checking necessary and possible presence of clauses in irredundant equivalent subsets, and uniqueness. We also consider the problem of redundancy with different definitions of equivalence."
]
}
|
1403.0613
|
1993695231
|
Redundancy checking is an important task in the research of knowledge representation and reasoning. In this paper, we consider redundant qualitative constraints. For a set Γ of qualitative constraints, we say a constraint ( x R y ) in Γ is redundant if it is entailed by the rest of Γ. A prime subnetwork of Γ is a subset of Γ which contains no redundant constraints and has the same solution set as Γ. It is natural to ask how to compute such a prime subnetwork, and when it is unique. We show that this problem is in general intractable, but becomes tractable if Γ is over a tractable subalgebra S of a qualitative calculus. Furthermore, if S is a subalgebra of the Region Connection Calculus RCC8 in which weak composition distributes over nonempty intersections, then Γ has a unique prime subnetwork, which can be obtained in cubic time by removing all redundant constraints simultaneously from Γ. As a by-product, we show that any path-consistent network over such a distributive subalgebra is minimal and globally consistent in a qualitative sense. A thorough empirical analysis of the prime subnetwork upon real geographical data sets demonstrates the approach is able to identify significantly more redundant constraints than previously proposed algorithms, especially in constraint networks with larger proportions of partial overlap relations.
|
In research on constraint satisfaction problems (CSPs), there are also many studies of constraint redundancy. While most of this research concerns redundant modelling (e.g., @cite_15 ), @cite_39 studied redundancy modulo a given local consistency. Their paper is close in spirit to ours. Let @math be a CSP and @math a local consistency. They call a constraint @math in @math redundant iff @math is @math -inconsistent. Because path-consistency implies consistency for RCC5/8 constraint networks over their maximal tractable subclasses, our notion of redundancy (when restricted to networks over these tractable subclasses) is equivalent to redundancy modulo path-consistency in the sense of @cite_39 .
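As a toy illustration of this notion of redundancy (a hypothetical sketch, not code from any cited paper): a constraint is redundant when the rest of the network already entails it, which for a small finite-domain CSP can be checked by brute-force comparison of solution sets before and after removing the constraint.

```python
from itertools import product

def solutions(domains, constraints):
    """Enumerate all assignments satisfying every constraint.

    domains: dict var -> iterable of values
    constraints: list of (vars_tuple, predicate) pairs
    """
    names = sorted(domains)
    sols = []
    for values in product(*(domains[v] for v in names)):
        a = dict(zip(names, values))
        if all(pred(*(a[v] for v in vs)) for vs, pred in constraints):
            sols.append(values)
    return sols

def is_redundant(constraint, others, domains):
    """A constraint is redundant iff the rest of the network entails it,
    i.e. removing it leaves the solution set unchanged."""
    return solutions(domains, others) == solutions(domains, others + [constraint])

# Example: given x < y and y < z, the constraint x < z adds nothing.
domains = {"x": range(3), "y": range(3), "z": range(3)}
lt = lambda a, b: a < b
base = [(("x", "y"), lt), (("y", "z"), lt)]
print(is_redundant((("x", "z"), lt), base, domains))  # True
```

This brute-force check is exponential in the number of variables, which is exactly why the tractability results for distributive subalgebras discussed above matter.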
|
{
"cite_N": [
"@cite_15",
"@cite_39"
],
"mid": [
"2070281006",
"1550964099"
],
"abstract": [
"A widely adopted approach to solving constraint satisfaction problems combines systematic tree search with various degrees of constraint propagation for pruning the search space. One common technique to improve the execution efficiency is to add redundant constraints, which are constraints logically implied by others in the problem model. However, some redundant constraints are propagation redundant and hence do not contribute additional propagation information to the constraint solver. Redundant constraints arise naturally in the process of redundant modeling where two models of the same problem are connected and combined through channeling constraints. In this paper, we give general theorems for proving propagation redundancy of one constraint with respect to channeling constraints and constraints in the other model. We illustrate, on problems from CSPlib (http: www.csplib.org), how detecting and removing propagation redundant constraints in redundant modeling can speed up search by several order of magnitudes.",
"In this paper, we propose a new technique to compute irredundant sub-sets of constraint networks. Since, checking redundancy is Co-NP Complete problem, we use different polynomial local consistency entailments for reducing the computational complexity. The obtained constraint network is irredundant modulo a given local consistency. Redundant constraints are eliminated from the original instance producing an equivalent one with respect to satisfiability. Eliminating redundancy might help the CSP solver to direct the search to the most constrained (irredundant) part of the network."
]
}
|
1403.0613
|
1993695231
|
Redundancy checking is an important task in the research of knowledge representation and reasoning. In this paper, we consider redundant qualitative constraints. For a set Γ of qualitative constraints, we say a constraint ( x R y ) in Γ is redundant if it is entailed by the rest of Γ. A prime subnetwork of Γ is a subset of Γ which contains no redundant constraints and has the same solution set as Γ. It is natural to ask how to compute such a prime subnetwork, and when it is unique. We show that this problem is in general intractable, but becomes tractable if Γ is over a tractable subalgebra S of a qualitative calculus. Furthermore, if S is a subalgebra of the Region Connection Calculus RCC8 in which weak composition distributes over nonempty intersections, then Γ has a unique prime subnetwork, which can be obtained in cubic time by removing all redundant constraints simultaneously from Γ. As a by-product, we show that any path-consistent network over such a distributive subalgebra is minimal and globally consistent in a qualitative sense. A thorough empirical analysis of the prime subnetwork upon real geographical data sets demonstrates the approach is able to identify significantly more redundant constraints than previously proposed algorithms, especially in constraint networks with larger proportions of partial overlap relations.
|
The property of distributivity was first used by van Beek @cite_52 for IA, but the notion of distributive subalgebra is new. It is not difficult to show that PA , IA, RCC5 and RCC8 all have two maximal distributive subalgebras (see for maximal distributive subalgebras of RCC5/8). Very interestingly, the two maximal distributive subalgebras of IA are exactly the subalgebras @math and @math discussed in @cite_16 , where Amaneddine and Condotta proved that @math and @math are the only maximal subalgebras of IA over which path-consistent networks are globally consistent. For RCC8, the maximal distributive subalgebra @math we identify in turns out to be the class of convex RCC8 relations found in @cite_18 , where Chandra and Pujari proved that path-consistent networks over @math are minimal. In we find another maximal distributive subalgebra for RCC8, which contains 64 relations. Furthermore, we also show that every path-consistent constraint network @math over a distributive subalgebra is weakly globally consistent and minimal. This has not been studied for RCC5/8 before.
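The algebraic-closure (path-consistency) operation underlying these results can be sketched on the much simpler Point Algebra rather than RCC8 (an illustrative reconstruction; the composition table and representation below are standard, but this is not the cited authors' code). Each edge carries a set of base relations, and we repeatedly refine R_ij with the composition R_ik ∘ R_kj until a fixpoint:

```python
# Base-relation composition table for the Point Algebra.
COMP = {
    ("<", "<"): {"<"}, ("<", "="): {"<"}, ("<", ">"): {"<", "=", ">"},
    ("=", "<"): {"<"}, ("=", "="): {"="}, ("=", ">"): {">"},
    (">", "<"): {"<", "=", ">"}, (">", "="): {">"}, (">", ">"): {">"},
}
ALL = frozenset("<=>")

def compose(r, s):
    """Compose two (possibly disjunctive) relations base pair by base pair."""
    out = set()
    for a in r:
        for b in s:
            out |= COMP[(a, b)]
    return frozenset(out)

def invert(r):
    return frozenset({"<": ">", "=": "=", ">": "<"}[a] for a in r)

def path_consistency(n, net):
    """net: dict (i, j) -> frozenset of base relations.
    Refine R_ij <- R_ij ∩ (R_ik ∘ R_kj) to a fixpoint; return None if
    some relation becomes empty (network inconsistent)."""
    R = [[ALL] * n for _ in range(n)]
    for (i, j), rel in net.items():
        R[i][j], R[j][i] = rel, invert(rel)
    for i in range(n):
        R[i][i] = frozenset("=")
    changed = True
    while changed:
        changed = False
        for i in range(n):
            for k in range(n):
                for j in range(n):
                    refined = R[i][j] & compose(R[i][k], R[k][j])
                    if refined != R[i][j]:
                        if not refined:
                            return None
                        R[i][j], R[j][i] = refined, invert(refined)
                        changed = True
    return R

# x < y and y < z force x < z.
R = path_consistency(3, {(0, 1): frozenset("<"), (1, 2): frozenset("<")})
print(R[0][2])  # frozenset({'<'})
```

For a distributive subalgebra, the claim above is that the network this procedure returns is already minimal, so no further search is needed to read off feasible base relations.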
|
{
"cite_N": [
"@cite_18",
"@cite_16",
"@cite_52"
],
"mid": [
"2164745460",
"50869828",
"159907047"
],
"abstract": [
"The research in qualitative reasoning and in spatial CSP is always investigated in the backdrop of its temporal counterpart - qualitative temporal reasoning and TCSP. Unlike the case of interval algebra (IA), the composition table of RCC, IA's so-called spatial counterpart, is in general neither complete nor extensional, the compositional consistency can be still a valid reasoning mechanism. Even in such a restricted situation, many of the known properties of IA have not been investigated for validity in the context of RCC. We address, in this paper two such properties-convexity and minimality. The importance of minimality cannot be underestimated as in a minimal network every label is feasible and hence determining all the consistent scenarios can be accomplished very efficiently. It is known that path consistency does not yield a minimal network for tractable classes of RCC-8. We represent RCC-8 relations as a partially ordered set and exploit the properties of partial ordering to derive very interesting theoretical results. We show here that there exists a convex class of relations of RCC-8 for which path consistency yields a minimal network. Our results are very important as it gives a sufficient condition for minimality and useful to generate all consistent scenarios whenever compositional consistency is a valid reasoning mechanism",
"We study in this paper the problem of global consistency for qualitative constraints networks (QCNs) of the Point Algebra (PA) and the Interval Algebra (IA). In particular, we consider the subclass @math corresponding to the set of relations of PA except the relations ,= , and the subclass @math corresponding to pointizable relations of IA one can express by means of relations of @math . We prove that path-consistency implies global consistency for QCNs defined on these subclasses. Moreover, we show that with the subclasses corresponding to convex relations, there are unique greatest subclasses of PA and IA containing singleton relations satisfying this property.",
"We consider a representation for temporal relations between intervals introduced by James Allen, and its associated computational or reasoning problem: given possibly indefinite knowledge of the relations between some intervals, how do we compute the strongest possible assertions about the relations between some or all intervals. Determining exact solutions to this problem has been shown to be (almost assuredly) intractable. Allen gives an approximation algorithm based on constraint propagation. We giv e new approximation algorithms, examine their effectiveness, and determine under what conditions the algorithms are exact."
]
}
|
1403.0736
|
1663415130
|
We present an approximation scheme for support vector machine models that use an RBF kernel. A second-order Maclaurin series approximation is used for exponentials of inner products between support vectors and test instances. The approximation is applicable to all kernel methods featuring sums of kernel evaluations and makes no assumptions regarding data normalization. The prediction speed of approximated models no longer relates to the amount of support vectors but is quadratic in terms of the number of input dimensions. If the number of input dimensions is small compared to the amount of support vectors, the approximated model is significantly faster in prediction and has a smaller memory footprint. An optimized C++ implementation was made to assess the gain in prediction speed in a set of practical tests. We additionally provide a method to verify the approximation accuracy, prior to training models or during run-time, to ensure the loss in accuracy remains acceptable and within known bounds.
|
Pruning support vectors linearly increases prediction speed because the run-time complexity of models with RBF kernels is proportional to the number of support vectors. Pruning methods have been devised for SVM @cite_26 @cite_27 and least squares SVM formulations @cite_23 @cite_4 .
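The Maclaurin-expansion idea from the abstract can be sketched as follows (a hypothetical numpy reconstruction, not the authors' implementation): writing exp(-g·|x-s|²) = exp(-g·|x|²)·exp(-g·|s|²)·exp(2g·x·s) and expanding the last factor to second order lets the sum over support vectors be collapsed into a scalar, a vector, and a matrix, so prediction cost becomes quadratic in the input dimension instead of linear in the number of support vectors.

```python
import numpy as np

def precompute(alphas, svs, gamma):
    """Collapse the RBF-SVM sum over support vectors into Maclaurin
    coefficients so that f(x) ~ exp(-g*x.x) * (c0 + c1.x + x'C2 x)."""
    beta = alphas * np.exp(-gamma * np.sum(svs ** 2, axis=1))
    c0 = beta.sum()                                  # zeroth-order term
    c1 = 2.0 * gamma * (beta[:, None] * svs).sum(axis=0)  # first-order
    C2 = 2.0 * gamma ** 2 * (svs.T * beta) @ svs     # second-order
    return c0, c1, C2

def f_exact(x, alphas, svs, gamma):
    """Exact decision value: sum_i alpha_i exp(-g*|x - s_i|^2)."""
    return np.sum(alphas * np.exp(-gamma * np.sum((svs - x) ** 2, axis=1)))

def f_approx(x, coeffs, gamma):
    c0, c1, C2 = coeffs
    return np.exp(-gamma * x @ x) * (c0 + c1 @ x + x @ C2 @ x)

rng = np.random.default_rng(0)
svs = rng.uniform(0, 1, size=(50, 3))    # 50 support vectors in 3 dimensions
alphas = rng.uniform(-1, 1, size=50)
gamma, x = 0.05, rng.uniform(0, 1, size=3)
coeffs = precompute(alphas, svs, gamma)
print(abs(f_exact(x, alphas, svs, gamma) - f_approx(x, coeffs, gamma)))
```

The truncation error shrinks rapidly as the exponent 2g·x·s gets smaller, which is why the paper pairs the scheme with a run-time accuracy check; the sketch above makes the same trade-off visible by comparing against the exact kernel sum.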
|
{
"cite_N": [
"@cite_27",
"@cite_26",
"@cite_4",
"@cite_23"
],
"mid": [
"2162435043",
"1514876195",
"1511988855",
"1978996791"
],
"abstract": [
"Support vector machine (SVM) classifiers often contain many SVs, which lead to high computational cost at runtime and potential overfitting. In this paper, a practical and effective method of pruning SVM classifiers is systematically developed. The kernel row vectors, with one-to-one correspondence to the SVs, are first organized into clusters. The pruning work is divided into two phases. In the first phase, orthogonal projections (OPs) are performed to find kernel row vectors that can be approximated by the others. In the second phase, the previously found vectors are removed, and crosswise propagations, which simply utilize the coefficients of OPs, are implemented within each cluster. The method circumvents the problem of explicitly discerning SVs in the high-dimensional feature space as the SVM formulation does, and does not involve local minima. With different parameters, 3000 experiments were run on the LibSVM software platform. After pruning 42% of the SVs, the average change in classification accuracy was only -0.7%, and the average computation time for removing one SV was 0.006% of the training time. In some scenarios, over 90% of the SVs were pruned with less than 0.1% reduction in classification accuracy. The experiments demonstrate the existence of large numbers of superabundant SVs in trained SVMs, and suggest a synergistic use of training and pruning in practice. Many SVMs already used in applications could be upgraded by pruning nearly half of their SVs.",
"Kernel-based learning methods provide their solutions as expansions in terms of a kernel. We consider the problem of reducing the computational complexity of evaluating these expansions by approximating them using fewer terms. As a by-product, we point out a connection between clustering and approximation in reproducing kernel Hilbert spaces generated by a particular class of kernels.",
"Least Squares Support Vector Machines (LS-SVM) is aproven method for classification and function approximation. In comparison to the standard Support Vector Machines (SVM) it only requires solving a linear system, but it lacks sparseness in the number of solution terms. Pruning can therefore be applied. Standard ways of pruning the LS-SVM consist of recursively solving the approximation problem and subsequently omitting data that have a small error in the previous pass and are based on support values. We suggest a slightly adapted variant that improves the performance significantly. We assess the relative regression performance of these pruning schemes in a comparison with two (for pruning adapted) subset selection schemes, -one based on the QR decomposition (supervised), one that searches the most representative feature vector span (unsupervised)-, random omission and backward selection on independent test sets in some benchmark experiments.",
"Least squares support vector machines (LS-SVM) is an SVM version which involves equality instead of inequality constraints and works with a least squares cost function. In this way, the solution follows from a linear Karush–Kuhn–Tucker system instead of a quadratic programming problem. However, sparseness is lost in the LS-SVM case and the estimation of the support values is only optimal in the case of a Gaussian distribution of the error variables. In this paper, we discuss a method which can overcome these two drawbacks. We show how to obtain robust estimates for regression by applying a weighted version of LS-SVM. We also discuss a sparse approximation procedure for weighted and unweighted LS-SVM. It is basically a pruning method which is able to do pruning based upon the physical meaning of the sorted support values, while pruning procedures for classical multilayer perceptrons require the computation of a Hessian matrix or its inverse. The methods of this paper are illustrated for RBF kernels and demonstrate how to obtain robust estimates with selection of an appropriate number of hidden units, in the case of outliers or non-Gaussian error distributions with heavy tails. c 2002 Elsevier Science B.V. All rights reserved."
]
}
|
1403.0921
|
1967087957
|
Significant efforts have gone into the development of statistical models for analyzing data in the form of networks, such as social networks. Most existing work has focused on modeling static networks, which represent either a single time snapshot or an aggregate view over time. There has been recent interest in statistical modeling of dynamic networks, which are observed at multiple points in time and offer a richer representation of many complex phenomena. In this paper, we present a state-space model for dynamic networks that extends the well-known stochastic blockmodel for static networks to the dynamic setting. We fit the model in a near-optimal manner using an extended Kalman filter (EKF) augmented with a local search. We demonstrate that the EKF-based algorithm performs competitively with a state-of-the-art algorithm based on Markov chain Monte Carlo sampling but is significantly less computationally demanding.
|
Several statistical models for dynamic networks have previously been proposed for modeling and tracking dynamic networks @cite_8 . @cite_24 proposed a temporal extension of the exponential random graph model (ERGM) called the hidden temporal ERGM. Sarkar and Moore @cite_16 proposed a temporal extension of the latent space network model and developed an algorithm to compute point estimates of node positions over time using conjugate gradient optimization initialized from a multidimensional scaling solution. In @cite_9 , the authors proposed a Gaussian approximation that allowed for approximate inference on the dynamic latent space model using Kalman filtering. The approach of @cite_9 is similar in flavor to the approach we employ in this paper; however, our approach involves a different static model, namely the stochastic blockmodel, for the network snapshots and uses this model to develop an extended Kalman filter (EKF) to track the model parameters.
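The EKF idea can be illustrated on a deliberately tiny version of the problem (a hypothetical sketch, not the paper's model): a single block-pair edge probability drifts as a Gaussian random walk in logit space, the observed edge density is a noisy nonlinear (sigmoid) function of the state, and the EKF linearizes the sigmoid at each step.

```python
import math, random

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def ekf_track(observations, q=0.01, r=0.002):
    """Scalar extended Kalman filter: state = logit of an edge probability,
    random-walk dynamics (variance q), observation y = sigmoid(state) + noise
    (variance r), linearized via H = sigmoid'(state)."""
    theta, P = 0.0, 1.0              # state estimate and its variance
    estimates = []
    for y in observations:
        P += q                       # predict (identity dynamics)
        h = sigmoid(theta)
        H = h * (1.0 - h)            # Jacobian of the observation map
        K = P * H / (H * P * H + r)  # Kalman gain
        theta += K * (y - h)         # update with the innovation
        P *= (1.0 - K * H)
        estimates.append(sigmoid(theta))
    return estimates

# Simulate a slowly drifting edge probability and noisy density observations.
random.seed(0)
true_logit, ys, truth = -1.0, [], []
for t in range(200):
    true_logit += random.gauss(0.0, 0.05)
    truth.append(sigmoid(true_logit))
    ys.append(sigmoid(true_logit) + random.gauss(0.0, 0.02))
est = ekf_track(ys)
print(sum(abs(a - b) for a, b in zip(est, truth)) / len(truth))
```

The paper's filter tracks a whole matrix of block-pair parameters and augments the EKF with a local search; the scalar sketch only shows the predict/linearize/update cycle that makes EKF inference cheap relative to MCMC.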
|
{
"cite_N": [
"@cite_24",
"@cite_9",
"@cite_16",
"@cite_8"
],
"mid": [
"2116208407",
"1536734652",
"2039750798",
"2032005951"
],
"abstract": [
"A plausible representation of relational information among entities in dynamic systems such as a living cell or a social community is a stochastic network which is topologically rewiring and semantically evolving over time. While there is a rich literature on modeling static or temporally invariant networks, much less has been done toward modeling the dynamic processes underlying rewiring networks, and on recovering such networks when they are not observable. We present a class of hidden temporal exponential random graph models (htERGMs) to study the yet unexplored topic of modeling and recovering temporally rewiring networks from time series of node attributes such as activities of social actors or expression levels of genes. We show that one can reliably infer the latent time-specific topologies of the evolving networks from the observation. We report empirical results on both synthetic data and a Drosophila lifecycle gene expression data set, in comparison with a static counterpart of htERGM.",
"We consider dynamic co-occurrence data, such as author-word links in papers published in successive years of the same conference. For static co-occurrence data, researchers often seek an embedding of the entities (authors and words) into a lowdimensional Euclidean space. We generalize a recent static co-occurrence model, the CODE model of (2004), to the dynamic setting: we seek coordinates for each entity at each time step. The coordinates can change with time to explain new observations, but since large changes are improbable, we can exploit data at previous and subsequent steps to find a better explanation for current observations. To make inference tractable, we show how to approximate our observation model with a Gaussian distribution, allowing the use of a Kalman filter for tractable inference. The result is the first algorithm for dynamic embedding of co-occurrence data which provides distributional information for its coordinate estimates. We demonstrate our model both on synthetic data and on author-word data from the NIPS corpus, showing that it produces intuitively reasonable embeddings. We also provide evidence for the usefulness of our model by its performance on an authorprediction task.",
"This paper explores two aspects of social network modeling. First, we generalize a successful static model of relationships into a dynamic model that accounts for friendships drifting over time. Second, we show how to make it tractable to learn such models from data, even as the number of entities n gets large. The generalized model associates each entity with a point in p-dimensional Euclidean latent space. The points can move as time progresses but large moves in latent space are improbable. Observed links between entities are more likely if the entities are close in latent space. We show how to make such a model tractable (sub-quadratic in the number of entities) by the use of appropriate kernel functions for similarity in latent space; the use of low dimensional KD-trees; a new efficient dynamic adaptation of multidimensional scaling for a first pass of approximate projection of entities into latent space; and an efficient conjugate gradient update rule for non-linear local optimization in which amortized time per entity during an update is O(log n). We use both synthetic and real-world data on up to 11,000 entities which indicate near-linear scaling in computation time and improved performance over four alternative approaches. We also illustrate the system operating on twelve years of NIPS co-authorship data.",
"Networks are ubiquitous in science and have become a focal point for discussion in everyday life. Formal statistical models for the analysis of network data have emerged as a major topic of interest in diverse areas of study, and most of these involve a form of graphical representation. Probability models on graphs date back to 1959. Along with empirical studies in social psychology and sociology from the 1960s, these early works generated an active \"network community\" and a substantial literature in the 1970s. This effort moved into the statistical literature in the late 1970s and 1980s, and the past decade has seen a burgeoning network literature in statistical physics and computer science. The growth of the World Wide Web and the emergence of online \"networking communities\" such as Facebook, MySpace, and LinkedIn, and a host of more specialized professional network communities has intensified interest in the study of networks and network data. Our goal in this review is to provide the reader with an entry point to this burgeoning literature. We begin with an overview of the historical development of statistical network modeling and then we introduce a number of examples that have been studied in the network literature. Our subsequent discussion focuses on a number of prominent static and dynamic network models and their interconnections. We emphasize formal model descriptions, and pay special attention to the interpretation of parameters and their estimation. We end with a description of some open problems and challenges for machine learning and statistics."
]
}
|
1403.0921
|
1967087957
|
Significant efforts have gone into the development of statistical models for analyzing data in the form of networks, such as social networks. Most existing work has focused on modeling static networks, which represent either a single time snapshot or an aggregate view over time. There has been recent interest in statistical modeling of dynamic networks, which are observed at multiple points in time and offer a richer representation of many complex phenomena. In this paper, we present a state-space model for dynamic networks that extends the well-known stochastic blockmodel for static networks to the dynamic setting. We fit the model in a near-optimal manner using an extended Kalman filter (EKF) augmented with a local search. We demonstrate that the EKF-based algorithm performs competitively with a state-of-the-art algorithm based on Markov chain Monte Carlo sampling but is significantly less computationally demanding.
|
Hoff @cite_14 proposed a dynamic latent factor model analogous to an eigenvalue decomposition with time-invariant eigenvectors and time-varying eigenvalues. The model is applicable to many types of data in the form of multi-way arrays, including dynamic social networks, and is fit using MCMC sampling. In @cite_38 , Lee and Priebe proposed a latent process model for attributed (multi-relational) dynamic networks using random dot product spaces. The authors fit mathematically tractable first- and second-order approximations of the random dot product model, for which individual network snapshots are drawn from attributed versions of the Erdős–Rényi and latent space models, respectively. Perry and Wolfe @cite_20 proposed a point process model for dynamic networks of directed interactions and a partial likelihood inference procedure to fit their model. The authors model interactions using a multivariate counting process that accounts for effects including homophily. Their model operates in continuous time, unlike the proposed model in this paper, which operates on discrete-time snapshots.
|
{
"cite_N": [
"@cite_38",
"@cite_14",
"@cite_20"
],
"mid": [
"2081324234",
"2075517844",
"2180313959"
],
"abstract": [
"We introduce a latent process model for time series of attributed random graphs for characterizing multiple modes of association among a collection of actors over time. Two mathematically tractable approximations are derived, and we examine the performance of a class of test statistics for an illustrative change-point detection problem and demonstrate that the analysis through approximation can provide valuable information regarding inference properties.",
"Reduced-rank decompositions provide descriptions of the variation among the elements of a matrix or array. In such decompositions, the elements of an array are expressed as products of low-dimensional latent factors. This article presents a model-based version of such a decomposition, extending the scope of reduced-rank methods to accommodate a variety of data types such as longitudinal social networks and continuous multivariate data that are cross-classified by categorical variables. The proposed model-based approach is hierarchical, in that the latent factors corresponding to a given dimension of the array are not a priori independent, but exchangeable. Such a hierarchical approach allows more flexibility in the types of patterns that can be represented.",
"Abstract : Network data often take the form of repeated interactions between senders and receivers tabulated over time. A primary question to ask of such data is which traits and behaviors are predictive of interaction. To answer this question, a model is introduced for treating directed interactions as a multivariate point process: a Cox multiplicative intensity model using covariates that depend on the history of the process. Consistency and asymptotic normality are proved for the resulting partial-likelihood-based estimators under suitable regularity conditions, and an efficient fitting procedure is described. Multicast interactions--those involving a single sender but multiple receivers--are treated explicitly. The resulting inferential framework is then employed to model message sending behavior in a corporate e-mail network. The analysis gives a precise quantification of which static shared traits and dynamic network effects are predictive of message recipient selection."
]
}
|
1403.0921
|
1967087957
|
Significant efforts have gone into the development of statistical models for analyzing data in the form of networks, such as social networks. Most existing work has focused on modeling static networks, which represent either a single time snapshot or an aggregate view over time. There has been recent interest in statistical modeling of dynamic networks, which are observed at multiple points in time and offer a richer representation of many complex phenomena. In this paper, we present a state-space model for dynamic networks that extends the well-known stochastic blockmodel for static networks to the dynamic setting. We fit the model in a near-optimal manner using an extended Kalman filter (EKF) augmented with a local search. We demonstrate that the EKF-based algorithm performs competitively with a state-of-the-art algorithm based on Markov chain Monte Carlo sampling but is significantly less computationally demanding.
|
More closely related to the state-space dynamic network model we consider in this paper are several temporal extensions of stochastic blockmodels (SBMs). @cite_29 and @cite_2 proposed temporal extensions of a mixed-membership version of the SBM using linear state-space models for the real-valued class memberships. In @cite_18 , the authors proposed a temporal extension of the SBM that is similar to our proposed model. The main difference is that the authors explicitly modeled nodes changing between classes over time by using a transition matrix that specifies the probability that a node in class @math at time @math switches to class @math at time @math for all @math . The authors fit the model using a combination of Gibbs sampling and simulated annealing, which they refer to as (PSA). We use the performance of the PSA algorithm as a baseline for comparison with the less computationally demanding EKF-based approximate inference procedure we utilize in this paper.
|
{
"cite_N": [
"@cite_29",
"@cite_18",
"@cite_2"
],
"mid": [
"2047309056",
"2168878667",
"316899516"
],
"abstract": [
"In a dynamic social or biological environment, the interactions between the actors can undergo large and systematic changes. In this paper we propose a model-based approach to analyze what we will refer to as the dynamic tomography of such time-evolving networks. Our approach offers an intuitive but powerful tool to infer the semantic underpinnings of each actor, such as its social roles or biological functions, underlying the observed network topologies. Our model builds on earlier work on a mixed membership stochastic blockmodel for static networks, and the state-space model for tracking object trajectory. It overcomes a major limitation of many current network inference techniques, which assume that each actor plays a unique and invariant role that accounts for all its interactions with other actors; instead, our method models the role of each actor as a time-evolving mixed membership vector that allows actors to behave differently over time and carry out different roles/functions when interacting with different peers, which is closer to reality. We present an efficient algorithm for approximate inference and learning using our model; and we applied our model to analyze a social network between monks (i.e., Sampson's network), a dynamic email communication network between the Enron employees, and a rewiring gene interaction network of fruit fly collected during its full life cycle. In all cases, our model reveals interesting patterns of the dynamic roles of the actors.",
"Although a large body of work is devoted to finding communities in static social networks, only a few studies examined the dynamics of communities in evolving social networks. In this paper, we propose a dynamic stochastic block model for finding communities and their evolution in a dynamic social network. The proposed model captures the evolution of communities by explicitly modeling the transition of community memberships for individual nodes in the network. Unlike many existing approaches for modeling social networks that estimate parameters by their most likely values (i.e., point estimation), in this study, we employ a Bayesian treatment for parameter estimation that computes the posterior distributions for all the unknown parameters. This Bayesian treatment allows us to capture the uncertainty in parameter values and therefore is more robust to data noise than point estimation. In addition, an efficient algorithm is developed for Bayesian inference to handle large sparse social networks. Extensive experimental studies based on both synthetic data and real-life data demonstrate that our model achieves higher accuracy and reveals more insights in the data than several state-of-the-art algorithms.",
""
]
}
|
1403.0461
|
2125087485
|
We propose a timed and soft extension of Concurrent Constraint Programming. The time extension is based on the hypothesis of bounded asynchrony: the computation takes a bounded period of time and is measured by a discrete global clock. Action prefixing is then considered as the syntactic marker which distinguishes a time instant from the next one. Supported by soft constraints instead of crisp ones, tell and ask agents are now equipped with a preference (or consistency) threshold which is used to determine their success or suspension. In the paper we provide a language to describe the agents behavior, together with its operational and denotational semantics, for which we also prove the compositionality and correctness properties. After presenting a semantics using maximal parallelism of actions, we also describe a version for their interleaving on a single processor (with maximal parallelism for time elapsing). Coordinating agents that need to take decisions both on preference values and time events may benefit from this language. To appear in Theory and Practice of Logic Programming (TPLP).
|
Comparing this work with other timed languages that use crisp constraints (instead of soft ones as in this paper), such as @cite_27 @cite_12 , we can identify three main differences.
|
{
"cite_N": [
"@cite_27",
"@cite_12"
],
"mid": [
"1994157580",
"2143082793"
],
"abstract": [
"Abstract Synchronous programming (Berry, 1989) is a powerful approach to programming reactive systems. Following the idea that “processes are relations extended over time” (Abramsky, 1993), we propose a simple but powerful model for timed, determinate computation, extending the closure-operator model for untimed concurrent constraint programming (CCP). In . (1994a) we had proposed a model for this called tcc—here we extend the model of tcc to express strong time-outs: if an event A does not happen through time t , cause event B to happen at time t . Such constructs arise naturally in practice (e.g. in modeling transistors) and are supported in synchronous programming languages. The fundamental conceptual difficulty posed by these operations is that they are non-monotonic. We provide compositional semantics to the non-monotonic version of concurrent constraint programming (Default cc) obtained by changing the underlying logic from intuitionistic logic to Reiter's default logic. This allows us to use the same construction (uniform extension through time) to develop Default cc as we had used to develop tcc from cc. Indeed the smooth embedding of cc processes into Default cc processes lifts to a smooth embedding of tcc processes into Default cc processes. We identify a basic set of combinators (that constitute the Default cc programming framework), and provide constructive operational semantics (implemented by us as an interpreter) for which the model is fully abstract. We show that the model is expressive by defining combinators from the synchronous languages. We show that Default cc is compositional and supports the properties of multiform time, orthogonal pre-emption and executable specifications. In addition, Default cc programs can be read as logical formulae (in an intuitionistic temporal logic)—we show that this logic is sound and complete for reasoning about (in)equivalence of Default cc programs. 
Like the synchronous languages, Default cc programs can be compiled into finite state automata. In addition, the translation can be specified compositionally. This enables separate compilation of Default cc programs and run-time tradeoffs between partial compilation and interpretation. A preliminary version of this paper was published as . (1995). Here we present a complete treatment of hiding, along with a detailed treatment of the model.",
"We develop a model for timed, reactive computation by extending the asynchronous, untimed concurrent constraint programming model in a simple and uniform way. In the spirit of process algebras, we develop some combinators expressible in this model, and reconcile their operational, logical and denotational character. We show how programs may be compiled into finite-state machines with loop-free computations at each state, thus guaranteeing bounded response time. >"
]
}
|
1403.0461
|
2125087485
|
We propose a timed and soft extension of Concurrent Constraint Programming. The time extension is based on the hypothesis of bounded asynchrony: the computation takes a bounded period of time and is measured by a discrete global clock. Action prefixing is then considered as the syntactic marker which distinguishes a time instant from the next one. Supported by soft constraints instead of crisp ones, tell and ask agents are now equipped with a preference (or consistency) threshold which is used to determine their success or suspension. In the paper we provide a language to describe the agents behavior, together with its operational and denotational semantics, for which we also prove the compositionality and correctness properties. After presenting a semantics using maximal parallelism of actions, we also describe a version for their interleaving on a single processor (with maximal parallelism for time elapsing). Coordinating agents that need to take decisions both on preference values and time events may benefit from this language. To appear in Theory and Practice of Logic Programming (TPLP).
|
A second difference lies in the transfer of information across time boundaries. In @cite_12 and @cite_27 , the programmer has to explicitly transfer the (positive) information from a time instant to the next one, by using special primitives that allow one to control the temporal evolution of the system. In fact, at the end of a time interval all the constraints accumulated and all the processes suspended are discarded, unless they are arguments to a specific primitive. On the contrary, no explicit transfer is needed in tsccp , since the computational model is based on the monotonic evolution of the store, which is usual in ccp .
|
{
"cite_N": [
"@cite_27",
"@cite_12"
],
"mid": [
"1994157580",
"2143082793"
],
"abstract": [
"Abstract Synchronous programming (Berry, 1989) is a powerful approach to programming reactive systems. Following the idea that “processes are relations extended over time” (Abramsky, 1993), we propose a simple but powerful model for timed, determinate computation, extending the closure-operator model for untimed concurrent constraint programming (CCP). In . (1994a) we had proposed a model for this called tcc—here we extend the model of tcc to express strong time-outs: if an event A does not happen through time t , cause event B to happen at time t . Such constructs arise naturally in practice (e.g. in modeling transistors) and are supported in synchronous programming languages. The fundamental conceptual difficulty posed by these operations is that they are non-monotonic. We provide compositional semantics to the non-monotonic version of concurrent constraint programming (Default cc) obtained by changing the underlying logic from intuitionistic logic to Reiter's default logic. This allows us to use the same construction (uniform extension through time) to develop Default cc as we had used to develop tcc from cc. Indeed the smooth embedding of cc processes into Default cc processes lifts to a smooth embedding of tcc processes into Default cc processes. We identify a basic set of combinators (that constitute the Default cc programming framework), and provide constructive operational semantics (implemented by us as an interpreter) for which the model is fully abstract. We show that the model is expressive by defining combinators from the synchronous languages. We show that Default cc is compositional and supports the properties of multiform time, orthogonal pre-emption and executable specifications. In addition, Default cc programs can be read as logical formulae (in an intuitionistic temporal logic)—we show that this logic is sound and complete for reasoning about (in)equivalence of Default cc programs. 
Like the synchronous languages, Default cc programs can be compiled into finite state automata. In addition, the translation can be specified compositionally. This enables separate compilation of Default cc programs and run-time tradeoffs between partial compilation and interpretation. A preliminary version of this paper was published as . (1995). Here we present a complete treatment of hiding, along with a detailed treatment of the model.",
"We develop a model for timed, reactive computation by extending the asynchronous, untimed concurrent constraint programming model in a simple and uniform way. In the spirit of process algebras, we develop some combinators expressible in this model, and reconcile their operational, logical and denotational character. We show how programs may be compiled into finite-state machines with loop-free computations at each state, thus guaranteeing bounded response time. >"
]
}
|
1403.0461
|
2125087485
|
We propose a timed and soft extension of Concurrent Constraint Programming. The time extension is based on the hypothesis of bounded asynchrony: the computation takes a bounded period of time and is measured by a discrete global clock. Action prefixing is then considered as the syntactic marker which distinguishes a time instant from the next one. Supported by soft constraints instead of crisp ones, tell and ask agents are now equipped with a preference (or consistency) threshold which is used to determine their success or suspension. In the paper we provide a language to describe the agents behavior, together with its operational and denotational semantics, for which we also prove the compositionality and correctness properties. After presenting a semantics using maximal parallelism of actions, we also describe a version for their interleaving on a single processor (with maximal parallelism for time elapsing). Coordinating agents that need to take decisions both on preference values and time events may benefit from this language. To appear in Theory and Practice of Logic Programming (TPLP).
|
A third relevant difference is that in @cite_12 and @cite_27 the authors present deterministic languages, while our language allows for nondeterminism. These three differences also hold between @cite_12 or @cite_27 and the original crisp version of the language, i.e., tccp @cite_8 .
|
{
"cite_N": [
"@cite_27",
"@cite_12",
"@cite_8"
],
"mid": [
"1994157580",
"2143082793",
"2749047051"
],
"abstract": [
"Abstract Synchronous programming (Berry, 1989) is a powerful approach to programming reactive systems. Following the idea that “processes are relations extended over time” (Abramsky, 1993), we propose a simple but powerful model for timed, determinate computation, extending the closure-operator model for untimed concurrent constraint programming (CCP). In . (1994a) we had proposed a model for this called tcc—here we extend the model of tcc to express strong time-outs: if an event A does not happen through time t , cause event B to happen at time t . Such constructs arise naturally in practice (e.g. in modeling transistors) and are supported in synchronous programming languages. The fundamental conceptual difficulty posed by these operations is that they are non-monotonic. We provide compositional semantics to the non-monotonic version of concurrent constraint programming (Default cc) obtained by changing the underlying logic from intuitionistic logic to Reiter's default logic. This allows us to use the same construction (uniform extension through time) to develop Default cc as we had used to develop tcc from cc. Indeed the smooth embedding of cc processes into Default cc processes lifts to a smooth embedding of tcc processes into Default cc processes. We identify a basic set of combinators (that constitute the Default cc programming framework), and provide constructive operational semantics (implemented by us as an interpreter) for which the model is fully abstract. We show that the model is expressive by defining combinators from the synchronous languages. We show that Default cc is compositional and supports the properties of multiform time, orthogonal pre-emption and executable specifications. In addition, Default cc programs can be read as logical formulae (in an intuitionistic temporal logic)—we show that this logic is sound and complete for reasoning about (in)equivalence of Default cc programs. 
Like the synchronous languages, Default cc programs can be compiled into finite state automata. In addition, the translation can be specified compositionally. This enables separate compilation of Default cc programs and run-time tradeoffs between partial compilation and interpretation. A preliminary version of this paper was published as . (1995). Here we present a complete treatment of hiding, along with a detailed treatment of the model.",
"We develop a model for timed, reactive computation by extending the asynchronous, untimed concurrent constraint programming model in a simple and uniform way. In the spirit of process algebras, we develop some combinators expressible in this model, and reconcile their operational, logical and denotational character. We show how programs may be compiled into finite-state machines with loop-free computations at each state, thus guaranteeing bounded response time. >",
""
]
}
|
1403.0461
|
2125087485
|
We propose a timed and soft extension of Concurrent Constraint Programming. The time extension is based on the hypothesis of bounded asynchrony: the computation takes a bounded period of time and is measured by a discrete global clock. Action prefixing is then considered as the syntactic marker which distinguishes a time instant from the next one. Supported by soft constraints instead of crisp ones, tell and ask agents are now equipped with a preference (or consistency) threshold which is used to determine their success or suspension. In the paper we provide a language to describe the agents behavior, together with its operational and denotational semantics, for which we also prove the compositionality and correctness properties. After presenting a semantics using maximal parallelism of actions, we also describe a version for their interleaving on a single processor (with maximal parallelism for time elapsing). Coordinating agents that need to take decisions both on preference values and time events may benefit from this language. To appear in Theory and Practice of Logic Programming (TPLP).
|
In @cite_1 , the authors generalize the model in @cite_12 in order to extend it with temporary parametric ask operations. Intuitively, these operations behave as persistent parametric asks during a time interval, but may disappear afterwards. The presented extension goes in the direction of better modeling through the use of private channels between the agents. However, the agents in @cite_1 also show deterministic behavior, in contrast to our nondeterministic choice.
|
{
"cite_N": [
"@cite_1",
"@cite_12"
],
"mid": [
"2168318109",
"2143082793"
],
"abstract": [
"In this doctoral work we aim at developing a rich timed concurrent constraint (tcc) based language with strong ties to logic. The new calculus called Universal Timed Concurrent Constraint (utcc) increases the expressiveness of tcc languages allowing infinite behaviour and mobility. We introduce a constructor of the form (abs x, c)P (Abstraction in P) that can be viewed as a dual operator of the hiding operator local x in P. i.e. the latter can be viewed as an existential quantification on the variable x and the former as a universal quantification of x, executing P[t/x] for all t s.t. the current store entails c[t/x]. As a compelling application, we applied this calculus to verify security protocols.",
"We develop a model for timed, reactive computation by extending the asynchronous, untimed concurrent constraint programming model in a simple and uniform way. In the spirit of process algebras, we develop some combinators expressible in this model, and reconcile their operational, logical and denotational character. We show how programs may be compiled into finite-state machines with loop-free computations at each state, thus guaranteeing bounded response time. >"
]
}
|
1403.0461
|
2125087485
|
We propose a timed and soft extension of Concurrent Constraint Programming. The time extension is based on the hypothesis of bounded asynchrony: the computation takes a bounded period of time and is measured by a discrete global clock. Action prefixing is then considered as the syntactic marker which distinguishes a time instant from the next one. Supported by soft constraints instead of crisp ones, tell and ask agents are now equipped with a preference (or consistency) threshold which is used to determine their success or suspension. In the paper we provide a language to describe the agents behavior, together with its operational and denotational semantics, for which we also prove the compositionality and correctness properties. After presenting a semantics using maximal parallelism of actions, we also describe a version for their interleaving on a single processor (with maximal parallelism for time elapsing). Coordinating agents that need to take decisions both on preference values and time events may benefit from this language. To appear in Theory and Practice of Logic Programming (TPLP).
|
In the literature we can find other proposals that are related to tuple-based kernel languages instead of a constraint store, such as @cite_24 () or @cite_18 () for instance. These languages are designed to study different properties of systems, such as mobility and autonomicity of the modeled agents. Their basic specifications do not encompass time-based primitives, while mobility features are not present in any of the constraint-based languages reported in this section. The purpose of our language is to model systems where a level of preference and time-sensitive primitives (such as a timeout) are required: a good example is represented by agents participating in an auction, as in the example given in .
|
{
"cite_N": [
"@cite_24",
"@cite_18"
],
"mid": [
"2161353020",
"194258645"
],
"abstract": [
"We investigate the issue of designing a kernel programming language for mobile computing and describe KLAIM, a language that supports a programming paradigm where processes, like data, can be moved from one computing environment to another. The language consists of a core Linda with multiple tuple spaces and of a set of operators for building processes. KLAIM naturally supports programming with explicit localities. Localities are first-class data (they can be manipulated like any other data), but the language provides coordination mechanisms to control the interaction protocols among located processes. The formal operational semantics is useful for discussing the design of the language and provides guidelines for implementations. KLAIM is equipped with a type system that statically checks access right violations of mobile agents. Types are used to describe the intentions (read, write, execute, etc.) of processes in relation to the various localities. The type system is used to determine the operations that processes want to perform at each locality, and to check whether they comply with the declared intentions and whether they have the necessary rights to perform the intended operations at the specific localities. Via a series of examples, we show that many mobile code programming paradigms can be naturally implemented in our kernel language. We also present a prototype implementation of KLAIM in Java.",
"SCEL is a new language specifically designed to model autonomic components and their interaction. It brings together various programming abstractions that permit to directly represent knowledge, behaviors and aggregations according to specific policies. It also supports naturally programming self-awareness, context-awareness, and adaptation. In this paper, we first present design principles, syntax and operational semantics of SCEL. Then, we show how a dialect can be defined by appropriately instantiating the features of the language we left open to deal with different application domains and use this dialect to model a simple, yet illustrative, example application. Finally, we demonstrate that adaptation can be naturally expressed in SCEL."
]
}
|
1402.7063
|
1770242755
|
A @math -nearest neighbor ( @math NN) query determines the @math nearest points, using distance metrics, from a specific location. An all @math -nearest neighbor (A @math NN) query constitutes a variation of a @math NN query and retrieves the @math nearest points for each point inside a database. Their main usage lies in spatial databases, and they constitute the backbone of many location-based applications and beyond (e.g., @math NN joins in databases, classification in data mining). So, it is crucial to develop methods that answer them efficiently. In this work, we propose a novel method for classifying multidimensional data using an A @math NN algorithm in the MapReduce framework. Our approach exploits space decomposition techniques for processing the classification procedure in a parallel and distributed manner. To our knowledge, we are the first to study the classification of multidimensional objects under this perspective. Through an extensive experimental evaluation we prove that our solution is efficient and scalable in processing the given queries. We investigate many different perspectives that can affect the total computational cost, such as different dataset distributions, number of dimensions, growth of @math value and granularity of space decomposition, and prove that our system is efficient, robust and scalable.
|
In @cite_3 , locality sensitive hashing (LSH) is used together with a MapReduce implementation for processing @math NN queries over large multidimensional datasets. This solution suggests an approximate algorithm, like the work in @cite_18 (H-zkNNJ), whereas we focus on exact processing of A @math NN queries. Furthermore, A @math NN queries are utilized along with MapReduce to speed up and optimize the join process over different datasets @cite_21 @cite_5 or to support non-equi joins @cite_11 . Moreover, @cite_17 makes use of an R-tree-based method to process @math NN joins efficiently.
|
{
"cite_N": [
"@cite_18",
"@cite_21",
"@cite_17",
"@cite_3",
"@cite_5",
"@cite_11"
],
"mid": [
"2154879298",
"2075620950",
"2058458206",
"596485744",
"2049003051",
"2151930506"
],
"abstract": [
"In data mining applications and spatial and multimedia databases, a useful tool is the kNN join, which is to produce the k nearest neighbors (NN), from a dataset S, of every point in a dataset R. Since it involves both the join and the NN search, performing kNN joins efficiently is a challenging task. Meanwhile, applications continue to witness a quick (exponential in some cases) increase in the amount of data to be processed. A popular model nowadays for large-scale data processing is the shared-nothing cluster on a number of commodity machines using MapReduce [6]. Hence, how to execute kNN joins efficiently on large data that are stored in a MapReduce cluster is an intriguing problem that meets many practical needs. This work proposes novel (exact and approximate) algorithms in MapReduce to perform efficient parallel kNN joins on large data. We demonstrate our ideas using Hadoop. Extensive experiments in large real and synthetic datasets, with tens or hundreds of millions of records in both R and S and up to 30 dimensions, have demonstrated the efficiency, effectiveness, and scalability of our methods.",
"Implementations of map-reduce are being used to perform many operations on very large data. We examine strategies for joining several relations in the map-reduce environment. Our new approach begins by identifying the \"map-key,\" the set of attributes that identify the Reduce process to which a Map process must send a particular tuple. Each attribute of the map-key gets a \"share,\" which is the number of buckets into which its values are hashed, to form a component of the identifier of a Reduce process. Relations have their tuples replicated in limited fashion, the degree of replication depending on the shares for those map-key attributes that are missing from their schema. We study the problem of optimizing the shares, given a fixed number of Reduce processes. An algorithm for detecting and fixing problems where an attribute is \"mistakenly\" included in the map-key is given. Then, we consider two important special cases: chain joins and star joins. In each case we are able to determine the map-key and determine the shares that yield the least replication. While the method we propose is not always superior to the conventional way of using map-reduce to implement joins, there are some important cases involving large-scale data where our method wins, including: (1) analytic queries in which a very large fact table is joined with smaller dimension tables, and (2) queries involving paths through graphs with high out-degree, such as the Web or a social network.",
"The similarity join has become an important database primitive for supporting similarity searches and data mining. A similarity join combines two sets of complex objects such that the result contains all pairs of similar objects. Two types of the similarity join are well-known, the distance range join, in which the user defines a distance threshold for the join, and the closest pair query or k-distance join, which retrieves the k most similar pairs. In this paper, we propose an important, third similarity join operation called the k-nearest neighbour join, which combines each point of one point set with its k nearest neighbours in the other set. We discover that many standard algorithms of Knowledge Discovery in Databases (KDD) such as k-means and k-medoid clustering, nearest neighbour classification, data cleansing, postprocessing of sampling-based data mining, etc. can be implemented on top of the k-nn join operation to achieve performance improvements without affecting the quality of the result of these algorithms. We propose a new algorithm to compute the k-nearest neighbour join using the multipage index (MuX), a specialised index structure for the similarity join. To reduce both CPU and I O costs, we develop optimal loading and processing strategies.",
"We consider the problem of processing K-Nearest Neighbor (KNN) queries over large datasets where the index is jointly maintained by a set of machines in a computing cluster. The proposed RankReduce approach uses locality sensitive hashing (LSH) together with a MapReduce implementation, which by design is a perfect match as the hashing principle of LSH can be smoothly integrated in the mapping phase of MapReduce. The LSH algorithm assigns similar objects to the same fragments in the distributed file system which enables a effective selection of potential candidate neighbors which get then reduced to the set of K-Nearest Neighbors. We address problems arising due to the different characteristics of MapReduce and LSH to achieve an efficient search process on the one hand and high LSH accuracy on the other hand. We discuss several pitfalls and detailed descriptions on how to circumvent these. We evaluate RankReduce using both synthetic data and a dataset obtained from Flickr.com demonstrating the suitability of the approach.",
"k nearest neighbor join (kNN join), designed to find k nearest neighbors from a dataset S for every object in another dataset R, is a primitive operation widely adopted by many data mining applications. As a combination of the k nearest neighbor query and the join operation, kNN join is an expensive operation. Given the increasing volume of data, it is difficult to perform a kNN join on a centralized machine efficiently. In this paper, we investigate how to perform kNN join using MapReduce which is a well-accepted framework for data-intensive applications over clusters of computers. In brief, the mappers cluster objects into groups; the reducers perform the kNN join on each group of objects separately. We design an effective mapping mechanism that exploits pruning rules for distance filtering, and hence reduces both the shuffling and computational costs. To reduce the shuffling cost, we propose two approximate algorithms to minimize the number of replicas. Extensive experiments on our in-house cluster demonstrate that our proposed methods are efficient, robust and scalable.",
"In this paper we study how to efficiently perform set-similarity joins in parallel using the popular MapReduce framework. We propose a 3-stage approach for end-to-end set-similarity joins. We take as input a set of records and output a set of joined records based on a set-similarity condition. We efficiently partition the data across nodes in order to balance the workload and minimize the need for replication. We study both self-join and R-S join cases, and show how to carefully control the amount of data kept in main memory on each node. We also propose solutions for the case where, even if we use the most fine-grained partitioning, the data still does not fit in the main memory of a node. We report results from extensive experiments on real datasets, synthetically increased in size, to evaluate the speedup and scaleup properties of the proposed algorithms using Hadoop."
]
}
|
1402.7063
|
1770242755
|
A @math -nearest neighbor ( @math NN) query determines the @math nearest points, under a distance metric, from a specific location. An all @math -nearest neighbor (A @math NN) query is a variation of a @math NN query that retrieves the @math nearest points for each point inside a database. Such queries are mainly used in spatial databases and form the backbone of many location-based applications, among others (e.g. @math NN joins in databases, classification in data mining). It is therefore crucial to develop methods that answer them efficiently. In this work, we propose a novel method for classifying multidimensional data using an A @math NN algorithm in the MapReduce framework. Our approach exploits space decomposition techniques to process the classification procedure in a parallel and distributed manner. To our knowledge, we are the first to study the classification of multidimensional objects under this perspective. Through an extensive experimental evaluation we show that our solution is efficient and scalable in processing the given queries. We investigate many factors that can affect the total computational cost, such as different dataset distributions, the number of dimensions, the value of @math and the granularity of the space decomposition, and demonstrate that our system is efficient, robust and scalable.
|
In @cite_9 , a minimum-spanning-tree-based classification model is introduced; it can be viewed as an intermediate model between the traditional @math -nearest neighbor method and cluster-based classification. Another approach, presented in @cite_2 , proposes parallel implementations of several classification algorithms, including @math -nearest neighbor, naive Bayes and decision trees, but considers neither the dimensionality of the data nor the choice of the parameter @math .
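As a reference point for the parallel classifiers discussed above, the serial building block that @cite_2 parallelizes — a plain @math NN majority-vote classifier — can be sketched as follows; the helper names are illustrative.

```python
from collections import Counter

def knn_classify(train, query, k):
    """Classify `query` by majority vote among its k nearest training points.
    `train` is a list of (point, label) pairs; distance is squared Euclidean."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda pl: dist2(pl[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Two well-separated clusters as toy training data.
train = [((0.0, 0.0), "A"), ((0.1, 0.2), "A"), ((0.2, 0.1), "A"),
         ((1.0, 1.0), "B"), ((0.9, 1.1), "B"), ((1.1, 0.9), "B")]
```

A parallel version distributes `train` across workers, has each worker return its local top-k candidates for the query, and merges them before the final vote.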
|
{
"cite_N": [
"@cite_9",
"@cite_2"
],
"mid": [
"2097568515",
"1834667845"
],
"abstract": [
"Rapid growth of data has provided us with more information, yet challenges the tradition techniques to extract the useful knowledge. In this paper, we propose MCMM, a Minimum spanning tree (MST) based Classification model for Massive data with MapReduce implementation. It can be viewed as an intermediate model between the traditional K nearest neighbor method and cluster based classification method, aiming to overcome their disadvantages and cope with large amount of data. Our model is implemented on Hadoop platform, using its MapReduce programming framework, which is particular suitable for cloud computing. We have done experiments on several data sets including real world data from UCI repository and synthetic data, using Downing 4000 clusters, installed with Hadoop. The results show that our model outperforms KNN and some other classification methods on a general basis with respect to accuracy and scalability.",
"Data mining has attracted extensive research for several decades. As an important task of data mining, classification plays an important role in information retrieval, web searching, CRM, etc. Most of the present classification techniques are serial, which become impractical for large dataset. The computing resource is under-utilized and the executing time is not waitable. Provided the program mode of MapReduce, we propose the parallel implementation methods of several classification algorithms, such as k-nearest neighbors, naive bayesian model and decision tree, etc. Preparatory experiments show that the proposed parallel methods can not only process large dataset, but also can be extended to execute on a cluster, which can significantly improve the efficiency."
]
}
|
1402.6932
|
2951935280
|
A simple and inexpensive (low-power and low-bandwidth) modification is made to a conventional off-the-shelf color video camera, from which we recover multiple color frames for each of the original measured frames, and each of the recovered frames can be focused at a different depth. The recovery of multiple frames for each measured frame is made possible via high-speed coding, manifested via translation of a single coded aperture; the inexpensive translation is constituted by mounting the binary code on a piezoelectric device. To simultaneously recover depth information, a liquid lens is modulated at high speed, via a variable voltage. Consequently, during the aforementioned coding process, the liquid lens allows the camera to sweep the focus through multiple depths. In addition to designing and implementing the camera, fast recovery is achieved by an anytime algorithm exploiting the group-sparsity of wavelet DCT coefficients.
|
Video compressive sensing has been investigated in @cite_2 @cite_20 @cite_5 @cite_19 @cite_14 , where low frame-rate measurements are captured to reconstruct high frame-rate video. The LCoS used in @cite_2 @cite_19 can modulate as fast as @math fps by pre-storing the exposure codes but, because the coding pattern is continuously changed at each pixel throughout the exposure, it requires considerable energy consumption ( @math ) and bandwidth compared with the proposed modulation, in which a single mask is translated using a piezoelectric translator (requiring @math ). Similar coding was used in @cite_5 . However, we investigate color video here, and thus demosaicing is needed; because of the R, G and B channels, the mask must be aligned (in hardware, of course) more accurately than for the monochromatic video in @cite_5 . Therefore, @cite_5 can be seen as a special case of the proposed camera. Furthermore, we also extract depth information from the defocus of the reconstructed frames, which has not been considered in the above papers.
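The measurement model behind the translated-coded-aperture camera can be sketched as follows: during one exposure, each high-speed sub-frame is modulated by a shifted copy of a single binary mask, and the modulated sub-frames are integrated into one snapshot. This is a simplified simulation, assuming a one-pixel horizontal shift per sub-frame; the actual piezo trajectory and optics are not modeled.

```python
import numpy as np

rng = np.random.default_rng(0)
T, H, W = 8, 32, 32               # sub-frames per snapshot, frame size

video = rng.random((T, H, W))     # ground-truth high-speed frames x_t
mask = (rng.random((H, W)) > 0.5).astype(float)  # one binary coded aperture

# Translate the same mask by one pixel per sub-frame (toy piezo motion),
# then integrate: a single measured snapshot encodes all T modulated frames.
codes = np.stack([np.roll(mask, shift=t, axis=1) for t in range(T)])
snapshot = (codes * video).sum(axis=0)
```

Reconstruction then inverts this forward model, exploiting sparsity priors to recover the T frames from the single `snapshot`.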
|
{
"cite_N": [
"@cite_14",
"@cite_19",
"@cite_2",
"@cite_5",
"@cite_20"
],
"mid": [
"2135155393",
"2092680585",
"2151364185",
"1986701690",
"2096994349"
],
"abstract": [
"We show that, via temporal modulation, one can observe and capture a high-speed periodic video well beyond the abilities of a low-frame-rate camera. By strobing the exposure with unique sequences within the integration time of each frame, we take coded projections of dynamic events. From a sequence of such frames, we reconstruct a high-speed video of the high-frequency periodic process. Strobing is used in entertainment, medical imaging, and industrial inspection to generate lower beat frequencies. But this is limited to scenes with a detectable single dominant frequency and requires high-intensity lighting. In this paper, we address the problem of sub-Nyquist sampling of periodic signals and show designs to capture and reconstruct such signals. The key result is that for such signals, the Nyquist rate constraint can be imposed on the strobe rate rather than the sensor rate. The technique is based on intentional aliasing of the frequency components of the periodic signal while the reconstruction algorithm exploits recent advances in sparse representations and compressive sensing. We exploit the sparsity of periodic signals in the Fourier domain to develop reconstruction algorithms that are inspired by compressive sensing.",
"We describe an imaging architecture for compressive video sensing termed programmable pixel compressive camera (P2C2). P2C2 allows us to capture fast phenomena at frame rates higher than the camera sensor. In P2C2, each pixel has an independent shutter that is modulated at a rate higher than the camera frame-rate. The observed intensity at a pixel is an integration of the incoming light modulated by its specific shutter. We propose a reconstruction algorithm that uses the data from P2C2 along with additional priors about videos to perform temporal super-resolution. We model the spatial redundancy of videos using sparse representations and the temporal redundancy using brightness constancy constraints inferred via optical flow. We show that by modeling such spatio-temporal redundancies in a video volume, one can faithfully recover the underlying high-speed video frames from the observed low speed coded video. The imaging architecture and the reconstruction algorithm allows us to achieve temporal super-resolution without loss in spatial resolution. We implement a prototype of P2C2 using an LCOS modulator and recover several videos at 200 fps using a 25 fps camera.",
"Cameras face a fundamental tradeoff between the spatial and temporal resolution - digital still cameras can capture images with high spatial resolution, but most high-speed video cameras suffer from low spatial resolution. It is hard to overcome this tradeoff without incurring a significant increase in hardware costs. In this paper, we propose techniques for sampling, representing and reconstructing the space-time volume in order to overcome this tradeoff. Our approach has two important distinctions compared to previous works: (1) we achieve sparse representation of videos by learning an over-complete dictionary on video patches, and (2) we adhere to practical constraints on sampling scheme which is imposed by architectures of present image sensor devices. Consequently, our sampling scheme can be implemented on image sensors by making a straightforward modification to the control unit. To demonstrate the power of our approach, we have implemented a prototype imaging system with per-pixel coded exposure control using a liquid crystal on silicon (LCoS) device. Using both simulations and experiments on a wide range of scenes, we show that our method can effectively reconstruct a video from a single image maintaining high spatial resolution.",
"We use mechanical translation of a coded aperture for code division multiple access compression of video. We discuss the compressed video’s temporal resolution and present experimental results for reconstructions of > 10 frames of temporal data per coded snapshot.",
"Video cameras are invariably bandwidth limited and this results in a trade-off between spatial and temporal resolution. Advances in sensor manufacturing technology have tremendously increased the available spatial resolution of modern cameras while simultaneously lowering the costs of these sensors. In stark contrast, hardware improvements in temporal resolution have been modest. One solution to enhance temporal resolution is to use high bandwidth imaging devices such as high speed sensors and camera arrays. Unfortunately, these solutions are expensive. An alternate solution is motivated by recent advances in computational imaging and compressive sensing. Camera designs based on these principles, typically, modulate the incoming video using spatio-temporal light modulators and capture the modulated video at a lower bandwidth. Reconstruction algorithms, motivated by compressive sensing, are subsequently used to recover the high bandwidth video at high fidelity. Though promising, these methods have been limited since they require complex and expensive light modulators that make the techniques difficult to realize in practice. In this paper, we show that a simple coded exposure modulation is sufficient to reconstruct high speed videos. We propose the Flutter Shutter Video Camera (FSVC) in which each exposure of the sensor is temporally coded using an independent pseudo-random sequence. Such exposure coding is easily achieved in modern sensors and is already a feature of several machine vision cameras. We also develop two algorithms for reconstructing the high speed video; the first based on minimizing the total variation of the spatio-temporal slices of the video and the second based on a data driven dictionary based approximation. We perform evaluation on simulated videos and real data to illustrate the robustness of our system."
]
}
|
1402.6932
|
2951935280
|
A simple and inexpensive (low-power and low-bandwidth) modification is made to a conventional off-the-shelf color video camera, from which we recover multiple color frames for each of the original measured frames, and each of the recovered frames can be focused at a different depth. The recovery of multiple frames for each measured frame is made possible via high-speed coding, manifested via translation of a single coded aperture; the inexpensive translation is constituted by mounting the binary code on a piezoelectric device. To simultaneously recover depth information, a liquid lens is modulated at high speed, via a variable voltage. Consequently, during the aforementioned coding process, the liquid lens allows the camera to sweep the focus through multiple depths. In addition to designing and implementing the camera, fast recovery is achieved by an anytime algorithm exploiting the group-sparsity of wavelet DCT coefficients.
|
Coded apertures have often been used in computational imaging for depth estimation @cite_7 @cite_23 @cite_0 . However, these works consider only still images: the algorithms investigated therein recover a depth map from a single still image. In @cite_1 , an imaging system was presented that controls the depth of field by varying the position and/or orientation of the image detector during the integration time of a single photograph. However, moving the detector costs more energy than controlling the liquid lens in the proposed design (almost no power consumption), and the camera developed in @cite_1 can only provide a single all-in-focus image without depth information. Furthermore, no motion information is considered in the above coded-aperture cameras, whereas here we consider video (allowing depth estimation on moving scenes).
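The focal sweep described above — each recovered frame focused at a different depth — suggests a simple depth-from-focus rule: assign each pixel the sweep index where its local contrast peaks. Below is a minimal sketch using a Laplacian sharpness proxy on a synthetic two-depth stack; it illustrates the principle only and is not the paper's reconstruction algorithm.

```python
import numpy as np

def depth_from_focal_stack(stack):
    """Per-pixel depth index: where local contrast peaks across the sweep.
    `stack` has shape (D, H, W): D frames, each focused at a different depth."""
    # Sharpness proxy: squared Laplacian response at every pixel
    # (np.roll wraps at the borders, which is fine for this toy example).
    lap = (-4 * stack
           + np.roll(stack, 1, axis=1) + np.roll(stack, -1, axis=1)
           + np.roll(stack, 1, axis=2) + np.roll(stack, -1, axis=2))
    return np.argmax(lap ** 2, axis=0)

# Synthetic focal stack: left half textured (sharp) in frame 0,
# right half textured (sharp) in frame 1, flat elsewhere.
checker = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)
stack = np.zeros((2, 8, 8))
stack[0, :, :4] = checker[:, :4]
stack[1, :, 4:] = checker[:, 4:]
depth = depth_from_focal_stack(stack)
```

Real focal stacks would replace the raw Laplacian with a smoothed focus measure to suppress noise before taking the per-pixel argmax.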
|
{
"cite_N": [
"@cite_0",
"@cite_1",
"@cite_23",
"@cite_7"
],
"mid": [
"1964313029",
"",
"2154571593",
"1535110911"
],
"abstract": [
"The classical approach to depth from defocus (DFD) uses lenses with circular apertures for image capturing. We show in this paper that the use of a circular aperture severely restricts the accuracy of DFD. We derive a criterion for evaluating a pair of apertures with respect to the precision of depth recovery. This criterion is optimized using a genetic algorithm and gradient descent search to arrive at a pair of high resolution apertures. These two coded apertures are found to complement each other in the scene frequencies they preserve. This property enables them to not only recover depth with greater fidelity but also obtain a high quality all-focused image from the two captured images. Extensive simulations as well as experiments on a variety of real scenes demonstrate the benefits of using the coded apertures over conventional circular apertures.",
"",
"A conventional camera captures blurred versions of scene information away from the plane of focus. Camera systems have been proposed that allow for recording all-focus images, or for extracting depth, but to record both simultaneously has required more extensive hardware and reduced spatial resolution. We propose a simple modification to a conventional camera that allows for the simultaneous recovery of both (a) high resolution image information and (b) depth information adequate for semi-automatic extraction of a layered depth representation of the image. Our modification is to insert a patterned occluder within the aperture of the camera lens, creating a coded aperture. We introduce a criterion for depth discriminability which we use to design the preferred aperture pattern. Using a statistical model of images, we can recover both depth information and an all-focus image from single photographs taken with the modified camera. A layered depth map is then extracted, requiring user-drawn strokes to clarify layer assignments in some cases. The resulting sharp image and layered depth map can be combined for various photographic applications, including automatic scene segmentation, post-exposure refocusing, or re-rendering of the scene from an alternate viewpoint.",
"Computational depth estimation is a central task in computer vision and graphics. A large variety of strategies have been introduced in the past relying on viewpoint variations, defocus changes and general aperture codes. However, the tradeoffs between such designs are not well understood. Depth estimation from computational camera measurements is a highly non-linear process and therefore most research attempts to evaluate depth estimation strategies rely on numerical simulations. Previous attempts to design computational cameras with good depth discrimination optimized highly non-linear and nonconvex scores, and hence it is not clear if the constructed designs are optimal. In this paper we address the problem of depth discrimination from J images captured using J arbitrary codes placed within one fixed lens aperture. We analyze the desired properties of discriminative codes under a geometric optics model and propose an upper bound on the best possible discrimination. We show that under a multiplicative noise model, the half ring codes discovered by [1] are near-optimal. When a large number of images are allowed, a multiaperture camera [2] dividing the aperture into multiple annular rings provides near-optimal discrimination. In contrast, the plenoptic camera of [5] which divides the aperture into compact support circles can achieve at most 50 of the optimal discrimination bound."
]
}
|
1402.6461
|
2095785650
|
Fault attacks against embedded circuits have opened many new attack paths against secure circuits. Every attack path relies on a specific fault model, which defines the type of faults the attacker can perform. On embedded processors, a fault model consisting of an assembly instruction skip can be very useful for an attacker and has been obtained with several fault injection means. To counter this threat, countermeasure schemes relying on temporal redundancy have been proposed. Nevertheless, double fault injection within a long enough time interval is practical and can bypass those countermeasure schemes. Some fine-grained countermeasure schemes have also been proposed for specific instructions. However, to the best of our knowledge, no approach that secures a generic assembly program by making it fault-tolerant to instruction-skip attacks has been formally proven yet. In this paper, we provide a fault-tolerant replacement sequence for almost all the instructions of the Thumb-2 instruction set and formally verify this fault tolerance. This simple transformation adds a reasonably good security level to an embedded program and makes practical fault injection attacks much harder to achieve.
|
On embedded processors, a fault model in which an attacker can skip an assembly instruction, or equivalently replace it with a nop, has been observed on several architectures and for several fault injection means @cite_12 . On an 8-bit AVR microcontroller, Schmidt et al. @cite_10 and Balasch et al. @cite_17 obtained instruction-skip effects by using clock glitches. Dehbaoui et al. obtained the same kind of effects on another 8-bit AVR microcontroller by using electromagnetic glitches @cite_8 . On a 32-bit ARM9 processor, Barenghi et al. obtained instruction-skip effects by using voltage glitches. On a more recent 32-bit ARM Cortex-M3 processor, Trichina et al. were able to perform instruction skips by using laser shots @cite_2 . Moreover, this fault model has been used as a basis for several cryptanalytic attacks @cite_18 . As a consequence, it is considered a common fault model that an attacker may be able to apply @cite_12 .
|
{
"cite_N": [
"@cite_18",
"@cite_8",
"@cite_2",
"@cite_10",
"@cite_12",
"@cite_17"
],
"mid": [
"2085992264",
"2039845393",
"2030986741",
"2000325148",
"2072402822",
"2000470949"
],
"abstract": [
"Implementations of cryptographic algorithms continue to proliferate in consumer products due to the increasing demand for secure transmission of confidential information. Although the current standard cryptographic algorithms proved to withstand exhaustive attacks, their hardware and software implementations have exhibited vulnerabilities to side channel attacks, e.g., power analysis and fault injection attacks. This paper focuses on fault injection attacks that have been shown to require inexpensive equipment and a short amount of time. The paper provides a comprehensive description of these attacks on cryptographic devices and the countermeasures that have been developed against them. After a brief review of the widely used cryptographic algorithms, we classify the currently known fault injection attacks into low-cost ones (which a single attacker with a modest budget can mount) and high-cost ones (requiring highly skilled attackers with a large budget). We then list the attacks that have been developed for the important and commonly used ciphers and indicate which ones have been successfully used in practice. The known countermeasures against the previously described fault injection attacks are then presented, including intrusion detection and fault detection. We conclude the survey with a discussion on the interaction between fault injection attacks (and the corresponding countermeasures) and power analysis attacks.",
"This paper considers the use of electromagnetic pulses (EMP) to inject transient faults into the calculations of a hardware and a software AES. A pulse generator and a 500 um-diameter magnetic coil were used to inject the localized EMP disturbances without any physical contact with the target. EMP injections were performed against a software AES running on a CPU, and a hardware AES (with and without countermeasure) embedded in a FPGA. The purpose of this work was twofold: (a) reporting actual faults injection induced by EMPs in our targets and describing their main properties, (b) explaining the coupling mechanism between the antenna used to produce the EMP and the targeted circuit, which causes the faults. The obtained results revealed a localized effect of the EMP since the injected faults were found dependent on the spatial position of the antenna on top of the circuit's surface. The assumption that EMP faults are related to the violation of the target's timing constraints was also studied and ascertained thanks to the use of a countermeasure based on monitoring such timing violations.",
"Since the first publication of a successful practical two-fault attack on protected CRT-RSA surprisingly little attention was given by the research community to an ensuing new challenge. The reason for it seems to be two-fold. One is that generic higher order fault attacks are very difficult to model and thus finding robust countermeasures is also difficult. Another reason may be that the published experiment was carried out on an outdated 8 bit microcontroller and thus was not perceived as a serious threat to create a sense of urgency in addressing this new menace. In this paper we describe two-fault attacks on protected CRT-RSA implementations running on an advanced 32 bit ARM Cortex M3 core. To our knowledge, this is the first practical result of two fault laser attacks on a protected cryptographic application. Considering that laser attacks are much more accurate in targeting a particular variable, the significance of our result cannot be overlooked.",
"In order to provide security for a device, cryptographic algorithms are implemented on them. Even devices using a cryptographically secure algorithm may be vulnerable to implementation attacks like side channel analysis or fault attacks. Most fault attacks on RSA concentrate on the vulnerability of the Chinese Remainder Theorem to fault injections. A few other attacks on RSA which do not use this speed-up technique have been published. Nevertheless, these attacks require a quite precise fault injection like a bit flip or target a special operation without any possibility to check if the fault was injected in the intended way, like in safe-error attacks.In this paper we propose a new attack on square and multiply, based on a manipulation of the control flow. Furthermore, we show how to realize this attack in practice using non-invasive spike attacks and discuss impacts of different side channel analysis countermeasures on our attack. The attack was performed using low cost equipment.",
"Hardware designers invest a significant design effort when implementing computationally intensive cryptographic algorithms onto constrained embedded devices to match the computational demands of the algorithms with the stringent area, power, and energy budgets of the platforms. When it comes to designs that are employed in potential hostile environments, another challenge arises-the design has to be resistant against attacks based on the physical properties of the implementation, the so-called implementation attacks. This creates an extra design concern for a hardware designer. This paper gives an insight into the field of fault attacks and countermeasures to help the designer to protect the design against this type of implementation attacks. We analyze fault attacks from different aspects and expose the mechanisms they employ to reveal a secret parameter of a device. In addition, we classify the existing countermeasures and discuss their effectiveness and efficiency. The result of this paper is a guide for selecting a set of countermeasures, which provides a sufficient security level to meet the constraints of the embedded devices.",
"The literature about fault analysis typically describes fault injection mechanisms, e.g. glitches and lasers, and cryptanalytic techniques to exploit faults based on some assumed fault model. Our work narrows the gap between both topics. We thoroughly analyse how clock glitches affect a commercial low-cost processor by performing a large number of experiments on five devices. We observe that the effects of fault injection on two-stage pipeline devices are more complex than commonly reported in the literature. While injecting a fault is relatively easy, injecting an exploitable fault is hard. We further observe that the easiest to inject and reliable fault is to replace instructions, and that random faults do not occur. Finally we explain how typical fault attacks can be mounted on this device, and describe a new attack for which the fault injection is easy and the cryptanalysis trivial."
]
}
|
1402.6461
|
2095785650
|
Fault attacks against embedded circuits have opened many new attack paths against secure circuits. Every attack path relies on a specific fault model, which defines the type of faults the attacker can perform. On embedded processors, a fault model consisting of an assembly instruction skip can be very useful to an attacker and has been obtained using several fault injection means. To counter this threat, countermeasure schemes relying on temporal redundancy have been proposed. Nevertheless, double fault injection within a long enough time interval is practical and can bypass those countermeasure schemes. Some fine-grained countermeasure schemes have also been proposed for specific instructions. However, to the best of our knowledge, no approach that secures a generic assembly program by making it fault-tolerant to instruction skip attacks has been formally proven yet. In this paper, we provide a fault-tolerant replacement sequence for almost all the instructions of the Thumb-2 instruction set, along with a formal verification of this fault tolerance. This simple transformation adds a reasonably good security level to an embedded program and makes practical fault injection attacks much harder to achieve.
|
A more generic fault model is the instruction replacement model, of which nop replacement is one particular case. In previous experiments on an ARM Cortex-M3 processor using electromagnetic glitches, we observed corruption of the instructions' binary encodings during bus transfers @cite_4 , leading to such instruction replacements. Actually, instruction skips correspond to specific cases of instruction replacements: replacing an instruction with another one that does not affect any useful register has the same effect as a nop replacement and is thus equivalent to an instruction skip. Many injection means enable instruction replacement attacks @cite_4 @cite_17 @cite_1 . Nevertheless, even with very accurate fault injection means, precisely controlling an instruction replacement is a very tough task and, to the best of our knowledge, no practical attack based on such a fault model has been published yet.
|
{
"cite_N": [
"@cite_1",
"@cite_4",
"@cite_17"
],
"mid": [
"281052386",
"2078668570",
"2000470949"
],
"abstract": [
"The dependability of computing systems running cryptographic primitives is a critical factor for evaluating the practical security of any cryptographic scheme. Indeed, the observation of erroneous results produced by a computing device after the artificial injection of transient faults is one of the most effective side-channel attacks. This chapter reviews the (semi-)invasive fault injection techniques that have been successfully used to recover the secret parameters of a cryptographic component. Subsequently, a complete characterization of the fault model derived from the constant underfeeding of a general-purpose microprocessor is described, in order to infer how the faulty behavior causes exploitable software errors.",
"Injection of transient faults as a way to attack cryptographic implementations has been largely studied in the last decade. Several attacks that use electromagnetic fault injection against hardware or software architectures have already been presented. On micro controllers, electromagnetic fault injection has mostly been seen as a way to skip assembly instructions or subroutine calls. However, to the best of our knowledge, no precise study about the impact of an electromagnetic glitch fault injection on a micro controller has been proposed yet. The aim of this paper is twofold: providing a more in-depth study of the effects of electromagnetic glitch fault injection on a state-of-the-art micro controller and building an associated register-transfer level fault model.",
"The literature about fault analysis typically describes fault injection mechanisms, e.g. glitches and lasers, and cryptanalytic techniques to exploit faults based on some assumed fault model. Our work narrows the gap between both topics. We thoroughly analyse how clock glitches affect a commercial low-cost processor by performing a large number of experiments on five devices. We observe that the effects of fault injection on two-stage pipeline devices are more complex than commonly reported in the literature. While injecting a fault is relatively easy, injecting an exploitable fault is hard. We further observe that the easiest to inject and reliable fault is to replace instructions, and that random faults do not occur. Finally we explain how typical fault attacks can be mounted on this device, and describe a new attack for which the fault injection is easy and the cryptanalysis trivial."
]
}
|
1402.6461
|
2095785650
|
Fault attacks against embedded circuits have opened many new attack paths against secure circuits. Every attack path relies on a specific fault model, which defines the type of faults the attacker can perform. On embedded processors, a fault model consisting of an assembly instruction skip can be very useful to an attacker and has been obtained using several fault injection means. To counter this threat, countermeasure schemes relying on temporal redundancy have been proposed. Nevertheless, double fault injection within a long enough time interval is practical and can bypass those countermeasure schemes. Some fine-grained countermeasure schemes have also been proposed for specific instructions. However, to the best of our knowledge, no approach that secures a generic assembly program by making it fault-tolerant to instruction skip attacks has been formally proven yet. In this paper, we provide a fault-tolerant replacement sequence for almost all the instructions of the Thumb-2 instruction set, along with a formal verification of this fault tolerance. This simple transformation adds a reasonably good security level to an embedded program and makes practical fault injection attacks much harder to achieve.
|
Several countermeasure schemes have been defined to protect embedded processor architectures against specific fault models. At the hardware level, many countermeasures have been proposed. As an example, Nguyen @cite_5 propose the use of integrity checks to ensure that no instruction replacement has taken place.
|
{
"cite_N": [
"@cite_5"
],
"mid": [
"2104971439"
],
"abstract": [
"To ensure the code integrity in secure embedded processors, most previous works focus on detecting attacks without paying their attention to recovery. This paper proposes a novel hardware recovery approach allowing the processor to resume the execution after detecting an attack. The experimental results demonstrate that our scheme introduces a very low impact on the performance while requiring a reasonable hardware overhead."
]
}
|
1402.6461
|
2095785650
|
Fault attacks against embedded circuits have opened many new attack paths against secure circuits. Every attack path relies on a specific fault model, which defines the type of faults the attacker can perform. On embedded processors, a fault model consisting of an assembly instruction skip can be very useful to an attacker and has been obtained using several fault injection means. To counter this threat, countermeasure schemes relying on temporal redundancy have been proposed. Nevertheless, double fault injection within a long enough time interval is practical and can bypass those countermeasure schemes. Some fine-grained countermeasure schemes have also been proposed for specific instructions. However, to the best of our knowledge, no approach that secures a generic assembly program by making it fault-tolerant to instruction skip attacks has been formally proven yet. In this paper, we provide a fault-tolerant replacement sequence for almost all the instructions of the Thumb-2 instruction set, along with a formal verification of this fault tolerance. This simple transformation adds a reasonably good security level to an embedded program and makes practical fault injection attacks much harder to achieve.
|
Software-only countermeasure schemes, which aim at protecting the assembly code, are more flexible and avoid any modification of the hardware. Against fault attacks, the most common software fault detection approach relies on function-level temporal redundancy @cite_6 . Applied to a cryptographic implementation, this principle can be realized by calling the same encryption algorithm twice on the same input and then comparing the outputs. For encryption algorithms, an alternative is to apply the deciphering algorithm to the output of an encryption and to compare its output with the initial input. These approaches enable fault detection but involve doubling the execution time of the algorithm. Triplication approaches with voting, which enable fault tolerance at the price of tripling the execution time of the whole algorithm, have also been proposed @cite_6 .
|
{
"cite_N": [
"@cite_6"
],
"mid": [
"2111725598"
],
"abstract": [
"The effect of faults on electronic systems has been studied since the 1970s when it was noticed that radioactive particles caused errors in chips. This led to further research on the effect of charged particles on silicon, motivated by the aerospace industry, which was becoming concerned about the effect of faults in airborne electronic systems. Since then various mechanisms for fault creation and propagation have been discovered and researched. This paper covers the various methods that can be used to induce faults in semiconductors and exploit such errors maliciously. Several examples of attacks stemming from the exploiting of faults are explained. Finally a series of countermeasures to thwart these attacks are described."
]
}
|
1402.6461
|
2095785650
|
Fault attacks against embedded circuits have opened many new attack paths against secure circuits. Every attack path relies on a specific fault model, which defines the type of faults the attacker can perform. On embedded processors, a fault model consisting of an assembly instruction skip can be very useful to an attacker and has been obtained using several fault injection means. To counter this threat, countermeasure schemes relying on temporal redundancy have been proposed. Nevertheless, double fault injection within a long enough time interval is practical and can bypass those countermeasure schemes. Some fine-grained countermeasure schemes have also been proposed for specific instructions. However, to the best of our knowledge, no approach that secures a generic assembly program by making it fault-tolerant to instruction skip attacks has been formally proven yet. In this paper, we provide a fault-tolerant replacement sequence for almost all the instructions of the Thumb-2 instruction set, along with a formal verification of this fault tolerance. This simple transformation adds a reasonably good security level to an embedded program and makes practical fault injection attacks much harder to achieve.
|
At the algorithm level, Medwed @cite_13 propose a generic approach based on the use of specific algebraic structures named codes. Their approach enables protecting both the control flow and the data flow. At the assembly level, Barenghi @cite_19 propose three countermeasure schemes based on instruction duplication, instruction triplication and parity checking. Their approach ensures fault detection for a small number of instructions against instruction skip or transient data corruption fault models. Our scheme provides fault tolerance only against the instruction skip fault model, but for almost all the instructions of the considered instruction set. Moreover, our countermeasure scheme has been formally proven fault-tolerant.
|
{
"cite_N": [
"@cite_19",
"@cite_13"
],
"mid": [
"2114235603",
"2146070389"
],
"abstract": [
"In this paper we present software countermeasures specifically designed to counteract fault injection attacks during the execution of a software implementation of a cryptographic algorithm and analyze the efficiency of these countermeasures. We propose two approaches based on the insertion of redundant computations and checks, which in their general form are suitable for any cryptographic algorithm. In particular, we focus on selective instruction duplication to detect single errors, instruction triplication to support error correction, and parity checking to detect corruption of a stored value. We developed a framework to automatically add the desired countermeasure, and we support the possibility to apply the selected redundancy to either all the instructions of the cryptographic routine or restrict it to the most sensitive ones, such as table lookups and key fetching. Considering an ARM processor as a target platform and AES as a target algorithm, we evaluate the overhead of the proposed countermeasures while keeping the robustness of the implementation high enough to thwart most or all of the known fault attacks. Experimental results show that in the considered architecture, the solution with the smallest overhead is per-instruction selective doubling and checking, and that the instruction triplication scheme is a viable alternative if very high levels of injected fault resistance are required.",
"So far many software countermeasures against fault attacks have been proposed. However, most of them are tailored to a specific cryptographic algorithm or focus on securing the processed data only. In this work we present a generic and elegant approach by using a highly fault secure algebraic structure. This structure is compatible to finite fields and rings and preserves its error detection property throughout addition and multiplication. Additionally, we introduce a method to generate a fingerprint of the instruction sequence. Thus, it is possible to check the result for data corruption as well as for modifications in the program flow. This is even possible if the order of the instructions is randomized. Furthermore, the properties of the countermeasure allow the deployment of error detection as well as error diffusion. We point out that the overhead for the calculations and for the error checking within this structure is reasonable and that the transformations are efficient. In addition we discuss how our approach increases the security in various kinds of fault scenarios."
]
}
|
1402.6461
|
2095785650
|
Fault attacks against embedded circuits have opened many new attack paths against secure circuits. Every attack path relies on a specific fault model, which defines the type of faults the attacker can perform. On embedded processors, a fault model consisting of an assembly instruction skip can be very useful to an attacker and has been obtained using several fault injection means. To counter this threat, countermeasure schemes relying on temporal redundancy have been proposed. Nevertheless, double fault injection within a long enough time interval is practical and can bypass those countermeasure schemes. Some fine-grained countermeasure schemes have also been proposed for specific instructions. However, to the best of our knowledge, no approach that secures a generic assembly program by making it fault-tolerant to instruction skip attacks has been formally proven yet. In this paper, we provide a fault-tolerant replacement sequence for almost all the instructions of the Thumb-2 instruction set, along with a formal verification of this fault tolerance. This simple transformation adds a reasonably good security level to an embedded program and makes practical fault injection attacks much harder to achieve.
|
Formal methods and formal verification tools have been used for cryptographic protocol verification and to check that an implementation meets the Common Criteria security specifications @cite_9 . However, to the best of our knowledge, very few formal verification approaches for checking the correctness of software countermeasure schemes against fault attacks have been proposed yet. One of the most significant contributions is due to Christofi @cite_16 . Their approach performs a source-code-level verification of the effectiveness of a countermeasure scheme on a CRT-RSA implementation using the Frama-C program analyzer. In this paper, we formally prove all our proposed countermeasures against an instruction skip fault model at the assembly level. Another, more recent contribution, a formal methodology at the algorithm level, has been proposed by Rauzy @cite_21 . In their scheme, an attacker can induce faults in the data flow of a target implementation described in a high-level language. This scheme enabled them to detect unnecessary countermeasures or possible flaws in several CRT-RSA implementations.
|
{
"cite_N": [
"@cite_9",
"@cite_16",
"@cite_21"
],
"mid": [
"1889965583",
"2092627953",
"2106187462"
],
"abstract": [
"This paper presents an effective use of formal methods for the development and for the security certification of smart card software. The approach is based on the Common Criteria's methodology that requires the use of formal methods to prove that a product implements the claimed security level. This work led to the world-first certification of a commercial Java CardTMproduct involving all formal assurances needed to reach the highest security level. For this certification, formal methods have been used for the design and the implementation of the security functions of the Java Card system embedded in the product. We describe the refinement scheme used to meet the Common Criteria's requirements on formal models and proofs. In particular, we show how to build the proof that the implementation ensures the security objectives claimed in the security specification. We also provide some lessons learned from this important application of formal methods to the smart cards industry.",
"Cryptosystems are highly sensitive to physical attacks, which lead security developers to design more and more complex countermeasures. Nonetheless, no proof of flaw absence has been given for any implementation of these countermeasures. This paper aims to formally verify an implementation of one published countermeasure against fault injection attacks. More precisely, the formal verification concerns Vigilant’s CRT-RSA countermeasure which is designed to sufficiently protect CRT-RSA implementations against fault attacks. The goal is to formally verify whether any possible fault injection threatening the pseudo-code is detected by the countermeasure according to a predefined attack model.",
"In this article, we describe a methodology that aims at either breaking or proving the security of CRT-RSA implementations against fault injection attacks. In the specific case-study of the BellCoRe attack, our work bridges a gap between formal proofs and implementation-level attacks. We apply our results to three implementations of CRT-RSA, namely the unprotected one, that of Shamir, and that of Our findings are that many attacks are possible on both the unprotected and the Shamir implementations, while the implementation of is resistant to all single-fault attacks. It is also resistant to double-fault attacks if we consider the less powerful threat model of its authors."
]
}
|
1402.6077
|
1540130502
|
Recent years have seen a surge of interest in Probabilistic Logic Programming (PLP) and Statistical Relational Learning (SRL) models that combine logic with probabilities. Structure learning of these systems lies at the intersection of Inductive Logic Programming (ILP) and statistical learning (SL). However, ILP cannot deal with probabilities, and SL cannot model relational hypotheses. The biggest challenge in integrating these two machine learning frameworks is how to estimate the probability of a logic clause from observations of grounded logic atoms alone. Many current methods model a joint probability by representing a clause as a graphical model with literals as its vertices. This model is still too complicated and can only be approximated by pseudo-likelihood. We propose the Inductive Logic Boosting framework, which transforms the relational dataset into a feature-based dataset, induces logic rules by boosting ProbLog Rule Trees, and relaxes the independence constraint of pseudo-likelihood. Experimental evaluation on benchmark datasets demonstrates that the AUC-PR and AUC-ROC values of the ILP-learned rules are higher than those of current state-of-the-art SRL methods.
|
Most current systems integrate ILP and statistical learning by expressing first-order logic as probabilistic graphical models and then learning the parameters on these models. They first search for structures (candidate clauses), then learn the parameters (weights) and modify the structures (clauses) accordingly. Such approaches perform either top-down @cite_17 or bottom-up @cite_18 searches. Other works learn PLP by beam search or approximate search in the space of probabilistic clauses @cite_19 @cite_6 .
|
{
"cite_N": [
"@cite_19",
"@cite_18",
"@cite_6",
"@cite_17"
],
"mid": [
"1200217156",
"2144429462",
"",
"2121075864"
],
"abstract": [
"Probabilistic logic programming can be used to model domains with complex and uncertain relationships among entities. While the problem of learning the parameters of such programs has been considered by various authors, the problem of learning the structure is yet to be explored in depth. In this work we present an approximate search method based on a one-player game approach, called LEMUR. It sees the problem of learning the structure of a probabilistic logic program as a multi-armed bandit problem, relying on the Monte-Carlo tree search UCT algorithm that combines the precision of tree search with the generality of random sampling. LEMUR works by modifying the UCT algorithm in a fashion similar to FUSE, that considers a finite unknown horizon and deals with the problem of having a huge branching factor. The proposed system has been tested on various real-world datasets and has shown good performance with respect to other state of the art statistical relational learning approaches in terms of classification abilities.",
"Markov logic networks (MLNs) are a statistical relational model that consists of weighted firstorder clauses and generalizes first-order logic and Markov networks. The current state-of-the-art algorithm for learning MLN structure follows a top-down paradigm where many potential candidate structures are systematically generated without considering the data and then evaluated using a statistical measure of their fit to the data. Even though this existing algorithm outperforms an impressive array of benchmarks, its greedy search is susceptible to local maxima or plateaus. We present a novel algorithm for learning MLN structure that follows a more bottom-up approach to address this problem. Our algorithm uses a \"propositional\" Markov network learning method to construct \"template\" networks that guide the construction of candidate clauses. Our algorithm significantly improves accuracy and learning time over the existing topdown approach in three real-world domains.",
"",
"Markov logic networks (MLNs) combine logic and probability by attaching weights to first-order clauses, and viewing these as templates for features of Markov networks. In this paper we develop an algorithm for learning the structure of MLNs from relational databases, combining ideas from inductive logic programming (ILP) and feature induction in Markov networks. The algorithm performs a beam or shortest-first search of the space of clauses, guided by a weighted pseudo-likelihood measure. This requires computing the optimal weights for each candidate structure, but we show how this can be done efficiently. The algorithm can be used to learn an MLN from scratch, or to refine an existing knowledge base. We have applied it in two real-world domains, and found that it outperforms using off-the-shelf ILP systems to learn the MLN structure, as well as pure ILP, purely probabilistic and purely knowledge-based approaches."
]
}
|
1402.6077
|
1540130502
|
Recent years have seen a surge of interest in Probabilistic Logic Programming (PLP) and Statistical Relational Learning (SRL) models that combine logic with probabilities. Structure learning of these systems lies at the intersection of Inductive Logic Programming (ILP) and statistical learning (SL). However, ILP cannot deal with probabilities, and SL cannot model relational hypotheses. The biggest challenge in integrating these two machine learning frameworks is how to estimate the probability of a logic clause from observations of grounded logic atoms alone. Many current methods model a joint probability by representing a clause as a graphical model with literals as its vertices. This model is still too complicated and can only be approximated by pseudo-likelihood. We propose the Inductive Logic Boosting framework, which transforms the relational dataset into a feature-based dataset, induces logic rules by boosting ProbLog Rule Trees, and relaxes the independence constraint of pseudo-likelihood. Experimental evaluation on benchmark datasets demonstrates that the AUC-PR and AUC-ROC values of the ILP-learned rules are higher than those of current state-of-the-art SRL methods.
|
Some methods also combine ILP with SL via boosting. For example, Boosting FFOIL @cite_10 directly adopts the boosting framework with a classical ILP system, FFOIL, as the weak learner, showing that boosting is beneficial for first-order induction. More recently, RDN-Boost @cite_15 and MLN-Boost @cite_16 turn the problem into a series of relational regression problems and learn both the structure and the weights of the graphical model simultaneously.
|
{
"cite_N": [
"@cite_15",
"@cite_16",
"@cite_10"
],
"mid": [
"2150475393",
"2021602734",
""
],
"abstract": [
"Dependency networks approximate a joint probability distribution over multiple random variables as a product of conditional distributions. Relational Dependency Networks (RDNs) are graphical models that extend dependency networks to relational domains. This higher expressivity, however, comes at the expense of a more complex model-selection problem: an unbounded number of relational abstraction levels might need to be explored. Whereas current learning approaches for RDNs learn a single probability tree per random variable, we propose to turn the problem into a series of relational function-approximation problems using gradient-based boosting. In doing so, one can easily induce highly complex features over several iterations and in turn estimate quickly a very expressive model. Our experimental results in several different data sets show that this boosting method results in efficient learning of RDNs when compared to state-of-the-art statistical relational learning approaches.",
"Recent years have seen a surge of interest in Statistical Relational Learning (SRL) models that combine logic with probabilities. One prominent example is Markov Logic Networks (MLNs). While MLNs are indeed highly expressive, this expressiveness comes at a cost. Learning MLNs is a hard problem and therefore has attracted much interest in the SRL community. Current methods for learning MLNs follow a two-step approach: first, perform a search through the space of possible clauses and then learn appropriate weights for these clauses. We propose to take a different approach, namely to learn both the weights and the structure of the MLN simultaneously. Our approach is based on functional gradient boosting where the problem of learning MLNs is turned into a series of relational functional approximation problems. We use two kinds of representations for the gradients: clause-based and tree-based. Our experimental evaluation on several benchmark data sets demonstrates that our new approach can learn MLNs as good or better than those found with state-of-the-art methods, but often in a fraction of the time.",
""
]
}
|
1402.5138
|
2033969525
|
Map construction methods automatically produce and/or update street map datasets using vehicle tracking data. Enabled by the ubiquitous generation of geo-referenced tracking data, there has been a recent surge in map construction algorithms coming from different computer science domains. A cross-comparison of the various algorithms is still very rare, since (i) algorithms and constructed maps are generally not publicly available and (ii) there is no standard approach to assess the result quality, given the lack of benchmark data and quantitative evaluation methods. This work represents a first comprehensive attempt to benchmark such map construction algorithms. We provide an evaluation and comparison of seven algorithms using four datasets and four different evaluation measures. In addition to this comprehensive comparison, we make our datasets, source code of map construction algorithms and evaluation measures publicly available at http://mapconstruction.org. This site has been established as a repository for map construction data and algorithms and we invite other researchers to contribute by uploading code and benchmark data supporting their contributions to map construction algorithms.
|
Several different approaches exist in the literature for constructing street maps from tracking data. These can be organized into the following categories: point clustering (which includes @math -means algorithms and Kernel Density Estimation (KDE), as described in Biagioni and Eriksson @cite_45 ), incremental track insertion, and intersection linking.
|
{
"cite_N": [
"@cite_45"
],
"mid": [
"2071091794"
],
"abstract": [
"This paper describes a process for automatically inferring maps from large collections of opportunistically collected GPS traces. In this type of dataset, there is often a great disparity in terms of coverage. For example, a freeway may be represented by thousands of trips, whereas a residential road may only have a handful of observations. Additionally, while modern GPS receivers typically produce high-quality location estimates, errors over 100 meters are not uncommon, especially near tall buildings or under dense tree coverage. Combined, GPS trace disparity and error present a formidable challenge for the current state of the art in map inference. By tuning the parameters of existing algorithms, a user may choose to remove spurious roads created by GPS noise, or admit less-frequently traveled roads, but not both. In this paper, we present an extensible map inference pipeline, designed to mitigate GPS error, admit less-frequently traveled roads, and scale to large datasets. We demonstrate and compare the performance of our proposed pipeline against existing methods, both qualitatively and quantitatively, using a real-world dataset that includes both high disparity and noise. Our results show significant improvements over the current state of the art."
]
}
|
1402.5138
|
2033969525
|
Map construction methods automatically produce and/or update street map datasets using vehicle tracking data. Enabled by the ubiquitous generation of geo-referenced tracking data, there has been a recent surge in map construction algorithms coming from different computer science domains. A cross-comparison of the various algorithms is still very rare, since (i) algorithms and constructed maps are generally not publicly available and (ii) there is no standard approach to assess the result quality, given the lack of benchmark data and quantitative evaluation methods. This work represents a first comprehensive attempt to benchmark such map construction algorithms. We provide an evaluation and comparison of seven algorithms using four datasets and four different evaluation measures. In addition to this comprehensive comparison, we make our datasets, source code of map construction algorithms and evaluation measures publicly available at http://mapconstruction.org. This site has been established as a repository for map construction data and algorithms and we invite other researchers to contribute by uploading code and benchmark data supporting their contributions to map construction algorithms.
|
In the graph theory literature, there are various distance measures for comparing two abstract graphs, that do not necessarily have a geometric embedding @cite_11 @cite_48 @cite_37 . Most closely related to street map comparison are the subgraph isomorphism problem and the maximum common isomorphic subgraph problem, both of which are NP-complete. These, however, rely on one-to-one mappings of graphs or subgraphs, and they do not take any geometric embedding into account. Graph edit distance @cite_28 @cite_15 is a way to allow noise by seeking a sequence of edit operations to transform one graph into the other, however it is NP-hard as well. @cite_52 consider a graph edit distance for geometric graphs (embedded in two different coordinate systems, however), and also show that it is NP-hard to compute.
|
{
"cite_N": [
"@cite_37",
"@cite_28",
"@cite_48",
"@cite_52",
"@cite_15",
"@cite_11"
],
"mid": [
"2073067110",
"1983681808",
"2051650468",
"1512703349",
"2032338144",
"2109294083"
],
"abstract": [
"The graph isomorphism problem—to devise a good algorithm for determining if two graphs are isomorphic—is of considerable practical importance, and is also of theoretical interest due to its relationship to the concept of NP-completeness. No efficient (i.e., polynomial-bound) algorithm for graph isomorphism is known, and it has been conjectured that no such algorithm can exist. Many papers on the subject have appeared, but progress has been slight; in fact, the intractable nature of the problem and the way that many graph theorists have been led to devote much time to it, recall those aspects of the four-color conjecture which prompted Harary to rechristen it the “four-color disease.” This paper surveys the present state of the art of isomorphism testing, discusses its relationship to NP-completeness, and indicates some of the difficulties inherent in this particularly elusive and challenging problem. A comprehensive bibliography of papers relating to the graph isomorphism problem is given.",
"Inexact graph matching has been one of the significant research foci in the area of pattern analysis. As an important way to measure the similarity between pairwise graphs error-tolerantly, graph edit distance (GED) is the base of inexact graph matching. The research advance of GED is surveyed in order to provide a review of the existing literatures and offer some insights into the studies of GED. Since graphs may be attributed or non-attributed and the definition of costs for edit operations is various, the existing GED algorithms are categorized according to these two factors and described in detail. After these algorithms are analyzed and their limitations are identified, several promising directions for further research are proposed.",
"This annotated bibliography contains 32 papers on graph isomorphism not included in the bibliography of Read and Corneil, most of which discuss algorithms or their theoretical foundations. We include articles by Miller, Weisfeiler, and Whitney as well as algorithms by Johnson and Leighton, Schmidt and Druffel, Tinhofer, and Erdos and Babai.",
"What does it mean for two geometric graphs to be similar? We propose a distance for geometric graphs that we show to be a metric, and that can be computed by solving an integer linear program. We also present experiments using a heuristic distance function.",
"Graph data have become ubiquitous and manipulating them based on similarity is essential for many applications. Graph edit distance is one of the most widely accepted measures to determine similarities between graphs and has extensive applications in the fields of pattern recognition, computer vision etc. Unfortunately, the problem of graph edit distance computation is NP-Hard in general. Accordingly, in this paper we introduce three novel methods to compute the upper and lower bounds for the edit distance between two graphs in polynomial time. Applying these methods, two algorithms AppFull and AppSub are introduced to perform different kinds of graph search on graph databases. Comprehensive experimental studies are conducted on both real and synthetic datasets to examine various aspects of the methods for bounding graph edit distance. Result shows that these methods achieve good scalability in terms of both the number of graphs and the size of graphs. The effectiveness of these algorithms also confirms the usefulness of using our bounds in filtering and searching of graphs.",
"A recent paper posed the question: \"Graph Matching: What are we really talking about?\". Far from providing a definite answer to that question, in this paper we will try to characterize the role that graphs play within the Pattern Recognition field. To this aim two taxonomies are presented and discussed. The first includes almost all the graph matching algorithms proposed from the late seventies, and describes the different classes of algorithms. The second taxonomy considers the types of common applications of graph-based techniques in the Pattern Recognition and Machine Vision field."
]
}
|
1402.5138
|
2033969525
|
Map construction methods automatically produce and/or update street map datasets using vehicle tracking data. Enabled by the ubiquitous generation of geo-referenced tracking data, there has been a recent surge in map construction algorithms coming from different computer science domains. A cross-comparison of the various algorithms is still very rare, since (i) algorithms and constructed maps are generally not publicly available and (ii) there is no standard approach to assess the result quality, given the lack of benchmark data and quantitative evaluation methods. This work represents a first comprehensive attempt to benchmark such map construction algorithms. We provide an evaluation and comparison of seven algorithms using four datasets and four different evaluation measures. In addition to this comprehensive comparison, we make our datasets, source code of map construction algorithms and evaluation measures publicly available on http://mapconstruction.org. This site has been established as a repository for map construction data and algorithms and we invite other researchers to contribute by uploading code and benchmark data supporting their contributions to map construction algorithms.
|
For comparing street maps, distance measures based on geometry and distance measures based on paths have been proposed. Geometry-based measures treat each graph as the set of points in the plane that is covered by all its vertices and edges. The idea is then to compute a distance between the two point sets. Straightforward distance measures for point sets are the directed and undirected Hausdorff distances @cite_42 . The main drawback of such an approach is that it does not use the topological structure of the graph. Biagioni and Eriksson @cite_20 @cite_8 use two distance measures that essentially both use a variant of a partial one-to-one bottleneck matching that is based on sampling both graphs densely. The two distance measures compare the total number of matched sample points to the total number of sample points in the graph, thus providing a measure of how much of the graph has been matched. They do require, though, a ground-truth graph as input that closely resembles the underlying map and is not a superset.
|
{
"cite_N": [
"@cite_42",
"@cite_20",
"@cite_8"
],
"mid": [
"2768974198",
"2033815587",
"2122967165"
],
"abstract": [
"In this chapter we survey geometric techniques which have been used to measure the similarity or distance between shapes, as well as to approximate shapes, or interpolate between shapes. Shape is a modality which plays a key role in many disciplines, ranging from computer vision to molecular biology. We focus on algorithmic techniques based on computational geometry that have been developed for shape matching, simplification, and morphing.",
"As a result of the availability of Global Positioning System (GPS) sensors in a variety of everyday devices, GPS trace data are becoming increasingly abundant. One potential use of this wealth of data is to infer and update the geometry and connectivity of road maps through the use of what are known as map generation or map inference algorithms. These algorithms offer a tremendous advantage when no existing road map data are present. Instead of the expense of a complete road survey, GPS trace data can be used to generate entirely new sections of the road map at a fraction of the cost. In cases of existing maps, road map inference may not only help to increase the accuracy of available road maps but may also help to detect new road construction and to make dynamic adaptions to road closures—useful features for in-car navigation with digital road maps. In past research, proposed algorithms had been evaluated qualitatively with little or no comparison with prior work. This lack of quantitative and comparativ...",
"We address the problem of inferring road maps from large-scale GPS traces that have relatively low resolution and sampling frequency. Unlike past published work that requires high-resolution traces with dense sampling, we focus on situations with coarse granularity data, such as that obtained from thousands of taxis in Shanghai, which transmit their location as seldom as once per minute. Such data sources can be made available inexpensively as byproducts of existing processes, rather than having to drive every road with high-quality GPS instrumentation just for map building - and having to re-drive roads for periodic updates. Although the challenges in using opportunistic probe data are significant, successful mining algorithms could potentially enable the creation of continuously updated maps at very low cost. In this paper, we compare representative algorithms from two approaches: working with individual reported locations vs. segments between consecutive locations. We assess their trade-offs and effectiveness in both qualitative and quantitative comparisons for regions of Shanghai and Chicago."
]
}
|
1402.5138
|
2033969525
|
Map construction methods automatically produce and/or update street map datasets using vehicle tracking data. Enabled by the ubiquitous generation of geo-referenced tracking data, there has been a recent surge in map construction algorithms coming from different computer science domains. A cross-comparison of the various algorithms is still very rare, since (i) algorithms and constructed maps are generally not publicly available and (ii) there is no standard approach to assess the result quality, given the lack of benchmark data and quantitative evaluation methods. This work represents a first comprehensive attempt to benchmark such map construction algorithms. We provide an evaluation and comparison of seven algorithms using four datasets and four different evaluation measures. In addition to this comprehensive comparison, we make our datasets, source code of map construction algorithms and evaluation measures publicly available on http://mapconstruction.org. This site has been established as a repository for map construction data and algorithms and we invite other researchers to contribute by uploading code and benchmark data supporting their contributions to map construction algorithms.
|
For path-based distance measures, on the other hand, the underlying idea is to represent the graphs by sets of paths, and then define a distance measure based on distances between the paths. This captures some of the topological information in the graphs, and paths are of particular importance for street maps since the latter are often used for routing applications, for which similar connectivity is desirable. Mondzech and Sester @cite_21 use shortest paths to compare the suitability of two road networks for pedestrian navigation by considering basic properties such as respective path length. Karagiorgou and Pfoser @cite_24 also use shortest paths, but to actually assess the similarity of road network graphs. Computing random sets of start and end nodes, the computed paths are compared using the Discrete Fréchet distance and the Average Vertical distance. Using those sets of distances, a global network similarity measure is derived. In another effort, Ahmed and Wenk @cite_7 cover the networks to be compared with paths of @math link-length and map-match the paths to the other graph using the Fréchet distance. They are the first to introduce the concept of a local signature to identify how and where two graphs differ.
|
{
"cite_N": [
"@cite_24",
"@cite_21",
"@cite_7"
],
"mid": [
"2008725231",
"2099631107",
"1527105738"
],
"abstract": [
"Road networks are important datasets for an increasing number of applications. However, the creation and maintenance of such datasets pose interesting research challenges. This work proposes an automatic road network generation algorithm that takes vehicle tracking data in the form of trajectories as input and produces a road network graph. This effort addresses the challenges of evolving map data sets, specifically by focusing on (i) automatic map-attribute generation (weights), (ii) automatic road network generation, and (iii) by providing a quality assessment. An experimental study assesses the quality of the algorithms by generating a part of the road network of Athens, Greece, using trajectories derived from GPS tracking a school bus fleet.",
"Abstract More volunteered geographic information is becoming available; if this information is to be exploited, its quality must be known. This study evaluates the quality of OpenStreetMap data with respect to its “fitness for use” (i.e., its suitability for a certain application), specifically pedestrian navigation. The quality of the data is determined by comparing simulated routes on two networks; one data set is from OSM, and the other is the German topographic data set, ATKIS. Both accessibility and length of routes are used as quality criteria. The two data sets are tested using three different test scenarios in Germany.",
"Comparing two geometric graphs embedded in space is important in the field of transportation network analysis. Given street maps of the same city collected from different sources, researchers often need to know how and where they differ. However, the majority of current graph comparison algorithms are based on structural properties of graphs, such as their degree distribution or their local connectivity properties, and do not consider their spatial embedding. This ignores a key property of road networks since the similarity of travel over two road networks is intimately tied to the specific spatial embedding. Likewise, many current algorithms specific to street map comparison either do not provide quality guarantees or focus on spatial embeddings only. Motivated by road network comparison, we propose a new path-based distance measure between two planar geometric graphs that is based on comparing sets of travel paths generated over the graphs. Surprisingly, we are able to show that using paths of bounded link-length, we can capture global structural and spatial differences between the graphs. We show how to utilize our distance measure as a local signature in order to identify and visualize portions of high similarity in the maps. Finally, we present an experimental evaluation of our distance measure and its local signature on street map data from Berlin, Germany and Athens, Greece."
]
}
|
1402.5593
|
1536828078
|
This paper presents an analysis of data from a gift-exchange-game experiment. The experiment was described in 'The Impact of Social Comparisons on Reciprocity' by G (2012). Since this paper uses state-of-the-art data science techniques, the results provide a different point of view on the problem. As already shown in the relevant literature from experimental economics, human decisions deviate from rational payoff maximization. The average gift rate was @math . Under no condition was the gift rate zero. Further, we derive some special findings and calculate their significance.
|
A similar approach has already been explored on three datasets -- a zero-sum game of mixed strategies, an ultimatum game and a repeated social guessing game @cite_28 @cite_11 . For these datasets, extracted deterministic regularities outperformed state-of-the-art models. It was shown that some regularities can be easily verbalized, which underlines their plausibility. A very comprehensive gathering of works in experimental psychology and economics on human behavior in general games can be found in @cite_34 . Quantal response equilibrium became popular as a model for deviations from equilibria @cite_18 . It is a parametrized shift between the mixed-strategies equilibrium and an equal distribution. The basic idea behind quantal response equilibrium is the concept of the trembling hand -- people make mistakes with a certain probability. Unfortunately, the Akaike information criterion @cite_8 is rarely calculated to judge the trade-off between fit quality and model complexity @cite_20 . Another popular model is linear regression. It is used in the original paper to model the dataset @cite_29 . For linear regression, data is translated into real numbers.
|
{
"cite_N": [
"@cite_18",
"@cite_8",
"@cite_28",
"@cite_29",
"@cite_34",
"@cite_20",
"@cite_11"
],
"mid": [
"2264897026",
"2058815839",
"",
"2105906421",
"1580723584",
"",
"1967445478"
],
"abstract": [
"We investigate the use of standard statistical models for quantal choice in a game theoretic setting. Players choose strategies based on relative expected utility and assume other players do so as well. We define a quantal response equilibrium (ORE) as a fixed point of this process and establish existence. For a logit specification of the error structure, we show that as the error goes to zero, QRE approaches a subset of Nash equilibria and also implies a unique selection from the set of Nash equilibria in generic games. We fit the model to a variety of experimental data sets by using maximum likelihood estimation. Journal of Economic Literature Classification Numbers: C19, C44, C72, C92.",
"In this paper it is shown that the classical maximum likelihood principle can be considered to be a method of asymptotic realization of an optimum estimate with respect to a very general information theoretic criterion. This observation shows an extension of the principle to provide answers to many practical problems of statistical model fitting.",
"",
"We investigate the effects of pay comparison information (i.e. information about what coworkers earn) and effort comparison information (information about how co-workers perform) in experimental firms composed of one employer and two employees. Exposure to pay comparison information in isolation from effort comparison information does not appear to affect reciprocity toward employers: in this case own wage is a powerful determinant of own effort, but co-worker wages have no effect. By contrast, we find that exposure to both pieces of social information systematically influences employees’ reciprocity. A generous wage offer is virtually ineffective if an employee is matched with a lazy co-worker who is also paid generously: in such circumstances the employee tends to expend low effort irrespective of her own wage. Reciprocity is more pronounced when the co-worker is hard-working, as effort is strongly and positively related to own wage in this case. Reciprocity is also pronounced when the employer pays unequal wages to the employees: in this case the co-worker’s effort decision is disregarded and effort decisions are again strongly and positively related to own wage. On average exposure to social information weakens reciprocity, though we find substantial heterogeneity in responses across individuals, and find that sometimes social information has beneficial effects. We suggest that group composition may be an important tool for harnessing the positive effects of social comparison processes.",
"Markets market economics of uncertainty general equilibrium and the economics games mechanisms design and policy applications non-market and organizational research institutional choice and the evolution individual choice, beliefs and behaviour methods, classroom applications.",
"",
"In this paper, we want to introduce experimental economics to the field of data mining and vice versa. It continues related work on mining deterministic behavior rules of human subjects in data gathered from experiments. Game-theoretic predictions partially fail to work with this data. Equilibria also known as game-theoretic predictions solely succeed with experienced subjects in specific games - conditions, which are rarely given. Contemporary experimental economics offers a number of alternative models apart from game theory. In relevant literature, these models are always biased by philosophical plausibility considerations and are claimed to fit the data. An agnostic data mining approach to the problem is introduced in this paper - the philosophical plausibility considerations follow after the correlations are found. No other biases are regarded apart from determinism. The dataset of the paper Social Learning in Networks\" by 2012 is taken for evaluation. As a result, we come up with new findings. As future work, the design of a new infrastructure is discussed."
]
}
|
1402.5045
|
1766320474
|
The use of virtual agents in social coaching has increased rapidly in the last decade. In order to train the user in different situations than can occur in real life, the virtual agent should be able to express different social attitudes. In this paper, we propose a model of social attitudes that enables a virtual agent to reason on the appropriate social attitude to express during the interaction with a user given the course of the interaction, but also the emotions, mood and personality of the agent. Moreover, the model enables the virtual agent to display its social attitude through its non-verbal behaviour. The proposed model has been developed in the context of job interview simulation. The methodology used to develop such a model combined a theoretical and an empirical approach. Indeed, the model is based both on the literature in Human and Social Sciences on social attitudes but also on the analysis of an audiovisual corpus of job interviews and on post-hoc interviews with the recruiters on their expressed attitudes during the job interview.
|
Research in the Human and Social Sciences has shown that most modalities of the body are involved in conveying attitudes: smiles can be signs of friendliness @cite_30 , performing large gestures may be a sign of dominance, and a head directed upwards can be interpreted as a dominant attitude @cite_23 . However, an attitude is not displayed by a single sign alone. It is important to consider the succession of signs displayed by the agent as well as the signs displayed by the interlocutors. It is also crucial to consider how the signs of both interlocutors relate to each other. For example, it is only by looking at the sequencing of smile, gaze and head aversion that we can differentiate between amusement, shame and embarrassment, affects expressing different values of dominance @cite_3 .
|
{
"cite_N": [
"@cite_30",
"@cite_3",
"@cite_23"
],
"mid": [
"1978650987",
"2132947339",
"2141608182"
],
"abstract": [
"Based on the assumptions that relational messages are multidimensional and that they are frequently communicated by nonverbal cues, this experiment manipulated five nonverbal cues -eye contact, proximity, body lean, smiling, and touch - to determine what meanings they convey along four relational message dimensions. Subjects (N= 150) observed 2 out of 40 videotaped conversational segments in which a male-female dyad presented various combinations of the nonverbal cues. High eye contact, close proximity, forward body lean, and smiling all conveyed greater intimacy, attraction, and trust. Low eye contact, a distal position, backward body lean, and the absence of smiling and touch communicated greater detachment. High eye contact, close proximity, and smiling also communicated less emotional arousal and greater composure, while high eye contact and close proximity alone conveyed greater dominance and control. Effects of combinations of cues and sex-differences are also reported.",
"According to appeasement hypotheses, embarrassment should have a distinct nonverbal display that is more readily perceived when displayed by individuals from lower status groups. The evidence from 5 studies supported these two claims. The nonverbal behavior of embarrassment was distinct from a related emotion (amusement), resembled the temporal pattern of facial expressions of emotion, was uniquely related to self-reports of embarrassment, and was accurately identified by observers who judged the spontaneous displays of various emotions. Across the judgment studies, observers were more accurate and attributed more emotion to the embarrassment displays of female and AfricanAmerican targets than those of male and Caucasian targets. Discussion focused on the universality and appeasement function of the embarrassment display. Since universal facial expressions of a limited set of emotions were first documented (Ekman & Friesen, 1971; Ekman, Sorenson, & Friesen, 1969; Izard, 1971), sparse attention has been given to facial expressions of other emotions. The resulting lacuna in the field—that the emotions with identified displays are fewer (7 to 10) than the states that lay people (Fehr & Russell, 1984) and emotion theorists (Ekman, 1992; Izard, 1977; Tomkins, 1963, 1984) label as emotions—presents intriguing possibilities. Displays of other emotions may be blends of other emotional displays, unidentifiable, or may await discovery.",
"In two vignette studies we examined beliefs about the nonverbal behavior and communication skills associated with high and low social power. Power was defined as both a trait (personality dominance) and a role (rank within an organization). Seventy nonverbal behaviors and skills were examined. Both Study 1 (a within-participants design) and Study 2 (a between-participants design) yielded highly similar results. Significant differences emerged for 35 of the 70 behaviors. The gender of the target individuals did not moderate beliefs about the relation of nonverbal behavior and power."
]
}
|
1402.5045
|
1766320474
|
The use of virtual agents in social coaching has increased rapidly in the last decade. In order to train the user in different situations than can occur in real life, the virtual agent should be able to express different social attitudes. In this paper, we propose a model of social attitudes that enables a virtual agent to reason on the appropriate social attitude to express during the interaction with a user given the course of the interaction, but also the emotions, mood and personality of the agent. Moreover, the model enables the virtual agent to display its social attitude through its non-verbal behaviour. The proposed model has been developed in the context of job interview simulation. The methodology used to develop such a model combined a theoretical and an empirical approach. Indeed, the model is based both on the literature in Human and Social Sciences on social attitudes but also on the analysis of an audiovisual corpus of job interviews and on post-hoc interviews with the recruiters on their expressed attitudes during the job interview.
|
Models of social attitude expression for virtual agents have already been proposed. For instance, in @cite_17 , postures corresponding to a given attitude were automatically generated for a dyad of agents. Ravenet @cite_8 proposed a user-created corpus-based methodology for choosing the behaviours of an agent conveying an attitude along with a communicative intention. The SEMAINE project used ECAs capable of mimicking and reacting to the system user's behaviour to convey a personality. Each of these works used either limited modalities or a non-interactive (agent-agent) context. Moreover, none of these works looked at the sequencing of the agent's signals. In this article, we present a model of social attitude expression that considers the sequencing of non-verbal behaviour (Section 6).
|
{
"cite_N": [
"@cite_8",
"@cite_17"
],
"mid": [
"121725841",
"1826625042"
],
"abstract": [
"Human’s non-verbal behavior may convey different meanings. They can reflect one’s emotional states, communicative intentions but also his social relations with someone else, i.e. his interpersonal attitude. In order to determine the non-verbal behavior that a virtual agent should display to convey particular interpersonal attitudes, we have collected a corpus of virtual agent’s non-verbal behavior directly created by users. Based on the analysis of the corpus, we propose a Bayesian model to automatically compute the virtual agent’s non-verbal behavior conveying interpersonal attitudes.",
"Computer generated characters are now commonplace in television and film. In some media productions like the Matrix™ they feature as frequently as the real cast. A visual media that is being explored by the research community is that of real-time improvisational theatre using virtual characters. This is a non-trivial problem with many research challenges; this paper starts to address one, which is the automatic generation of appropriate non-verbal communication between characters based on their personality and relationship to one another. We focus on our of model interpersonal attitude used for generating expressive postures and eye gaze in computer animated characters. Our model consists of two principle dimensions, affiliation and status. It takes into account the relationships between the attitudes of two characters and allows for a large degree of variation between characters, both in how they react to other characters’ behaviour and in the ways in which they express attitude."
]
}
|
1402.4437
|
2949514005
|
We present a new probabilistic model of compact commutative Lie groups that produces invariant-equivariant and disentangled representations of data. To define the notion of disentangling, we borrow a fundamental principle from physics that is used to derive the elementary particles of a system from its symmetries. Our model employs a newfound Bayesian conjugacy relation that enables fully tractable probabilistic inference over compact commutative Lie groups -- a class that includes the groups that describe the rotation and cyclic translation of images. We train the model on pairs of transformed image patches, and show that the learned invariant representation is highly effective for classification.
|
Other, non-group-theoretical approaches to learning transformations and invariant representations exist @cite_8 . These were found to perform a kind of joint eigenspace analysis @cite_0 , which is somewhat similar to the irreducible reduction of a toroidal group.
|
{
"cite_N": [
"@cite_0",
"@cite_8"
],
"mid": [
"2151024047",
"2136163184"
],
"abstract": [
"Sparse coding is a common approach to learning local features for object recognition. Recently, there has been an increasing interest in learning features from spatio-temporal, binocular, or other multi-observation data, where the goal is to encode the relationship between images rather than the content of a single image. We provide an analysis of multi-view feature learning, which shows that hidden variables encode transformations by detecting rotation angles in the eigenspaces shared among multiple image warps. Our analysis helps explain recent experimental results showing that transformation-specific features emerge when training complex cell models on videos. Our analysis also shows that transformation-invariant features can emerge as a by-product of learning representations of transformations.",
"To allow the hidden units of a restricted Boltzmann machine to model the transformation between two successive images, Memisevic and Hinton (2007) introduced three-way multiplicative interactions that use the intensity of a pixel in the first image as a multiplicative gain on a learned, symmetric weight between a pixel in the second image and a hidden unit. This creates cubically many parameters, which form a three-dimensional interaction tensor. We describe a low-rank approximation to this interaction tensor that uses a sum of factors, each of which is a three-way outer product. This approximation allows efficient learning of transformations between larger image patches. Since each factor can be viewed as an image filter, the model as a whole learns optimal filter pairs for efficiently representing transformations. We demonstrate the learning of optimal filter pairs from various synthetic and real image sequences. We also show how learning about image transformations allows the model to perform a simple visual analogy task, and we show how a completely unsupervised network trained on transformations perceives multiple motions of transparent dot patterns in the same way as humans."
]
}
|
1402.4376
|
1509093258
|
We introduce a new notion of resilience for constraint satisfaction problems, with the goal of more precisely determining the boundary between NP-hardness and the existence of efficient algorithms for resilient instances. In particular, we study @math -resiliently @math -colorable graphs, which are those @math -colorable graphs that remain @math -colorable even after the addition of any @math new edges. We prove lower bounds on the NP-hardness of coloring resiliently colorable graphs, and provide an algorithm that colors sufficiently resilient graphs. We also analyze the corresponding notion of resilience for @math -SAT. This notion of resilience suggests an array of open questions for graph coloring and other combinatorial problems.
|
Most NP-hard problems have natural definitions of resiliency. For instance, resilient positive instances for optimization problems over graphs can be defined as those that remain positive instances even up to the addition or removal of any edge. For satisfiability, we say a resilient instance is one where variables can be "fixed" and the formula remains satisfiable. In problems like set-cover, we could allow for the removal of a given number of sets. Indeed, this can be seen as a general notion of resilience for adding constraints in constraint satisfaction problems (CSPs), which have an extensive literature @cite_1 . However, a resilience definition for general CSPs is not immediate, because the ability to add any constraint (e.g., the negation of an existing constraint) is too strong.
|
{
"cite_N": [
"@cite_1"
],
"mid": [
"2148546044"
],
"abstract": [
"A large number of problems in AI and other areas of computer science can be viewed as special cases of the constraint-satisfaction problem. Some examples are machine vision, belief maintenance, scheduling, temporal reasoning, graph problems, floor plan design, the planning of genetic experiments, and the satisfiability problem. A number of different approaches have been developed for solving these problems. Some of them use constraint propagation to simplify the original problem. Others use backtracking to directly search for possible solutions. Some are a combination of these two techniques. This article overviews many of these approaches in a tutorial fashion."
]
}
|
1402.4376
|
1509093258
|
We introduce a new notion of resilience for constraint satisfaction problems, with the goal of more precisely determining the boundary between NP-hardness and the existence of efficient algorithms for resilient instances. In particular, we study @math -resiliently @math -colorable graphs, which are those @math -colorable graphs that remain @math -colorable even after the addition of any @math new edges. We prove lower bounds on the NP-hardness of coloring resiliently colorable graphs, and provide an algorithm that colors sufficiently resilient graphs. We also analyze the corresponding notion of resilience for @math -SAT. This notion of resilience suggests an array of open questions for graph coloring and other combinatorial problems.
|
There are related concepts of resilience in the literature. Perhaps the closest in spirit is Bilu and Linial's notion of stability @cite_11 . Their notion is restricted to problems over metric spaces; they argue that practical instances often exhibit some degree of stability, which can make the problem easier. Their results on clustering stable instances have seen considerable interest and have been substantially extended and improved @cite_5 @cite_11 @cite_15 . Moreover, one can study TSP and other optimization problems over metrics under the Bilu-Linial assumption @cite_8 . A related notion of stability by Ackerman and Ben-David @cite_13 for clustering yields efficient algorithms when the data lies in Euclidean space.
|
{
"cite_N": [
"@cite_8",
"@cite_5",
"@cite_15",
"@cite_13",
"@cite_11"
],
"mid": [
"1503339058",
"1558625102",
"2950844185",
"2162148987",
""
],
"abstract": [
"We consider the metric Traveling Salesman Problem (Δ-TSP for short) and study how stability (as defined by Bilu and Linial [3]) influences the complexity of the problem. On an intuitive level, an instance of Δ-TSP is γ-stable (γ > 1), if there is a unique optimum Hamiltonian tour and any perturbation of arbitrary edge weights by at most γ does not change the edge set of the optimal solution (i.e., there is a significant gap between the optimum tour and all other tours). We show that for γ ≥ 1.8 a simple greedy algorithm (resembling Prim's algorithm for constructing a minimum spanning tree) computes the optimum Hamiltonian tour for every γ-stable instance of the Δ-TSP, whereas a simple local search algorithm can fail to find the optimum even if γ is arbitrary. We further show that there are γ-stable instances of Δ-TSP for every 1 < γ < 2. These results provide a different view on the hardness of the Δ-TSP and give rise to a new class of problem instances which are substantially easier to solve than instances of the general Δ-TSP.",
"Clustering under most popular objective functions is NP-hard, even to approximate well, and so unlikely to be efficiently solvable in the worst case. Recently, Bilu and Linial (2010) [11] suggested an approach aimed at bypassing this computational barrier by using properties of instances one might hope to hold in practice. In particular, they argue that instances in practice should be stable to small perturbations in the metric space and give an efficient algorithm for clustering instances of the Max-Cut problem that are stable to perturbations of size O(n^{1/2}). In addition, they conjecture that instances stable to as little as O(1) perturbations should be solvable in polynomial time. In this paper we prove that this conjecture is true for any center-based clustering objective (such as k-median, k-means, and k-center). Specifically, we show we can efficiently find the optimal clustering assuming only stability to factor-3 perturbations of the underlying metric in spaces without Steiner points, and stability to factor 2+√3 perturbations for general metrics. In particular, we show for such instances that the popular Single-Linkage algorithm combined with dynamic programming will find the optimal clustering. We also present NP-hardness results under a weaker but related condition.",
"We consider the model introduced by Bilu and Linial (2010), who study problems for which the optimal clustering does not change when distances are perturbed. They show that even when a problem is NP-hard, it is sometimes possible to obtain efficient algorithms for instances resilient to certain multiplicative perturbations, e.g. on the order of @math for max-cut clustering. (2010) consider center-based objectives, and Balcan and Liang (2011) analyze the @math -median and min-sum objectives, giving efficient algorithms for instances resilient to certain constant multiplicative perturbations. Here, we are motivated by the question of to what extent these assumptions can be relaxed while allowing for efficient algorithms. We show there is little room to improve these results by giving NP-hardness lower bounds for both the @math -median and min-sum objectives. On the other hand, we show that constant multiplicative resilience parameters can be so strong as to make the clustering problem trivial, leaving only a narrow range of resilience parameters for which clustering is interesting. We also consider a model of additive perturbations and give a correspondence between additive and multiplicative notions of stability. Our results provide a close examination of the consequences of assuming stability in data.",
"We investigate measures of the clusterability of data sets. Namely, ways to define how ‘strong’ or ‘conclusive’ is the clustering structure of a given data set. We address this issue with generality, aiming for conclusions that apply regardless of any particular clustering algorithm or any specific data generation model. We survey several notions of clusterability that have been discussed in the literature, as well as propose a new notion of data clusterability. Our comparison of these notions reveals that, although they all attempt to evaluate the same intuitive property, they are pairwise inconsistent. Our analysis discovers an interesting phenomenon: although most of the common clustering tasks are NP-hard, finding a close-to-optimal clustering for well clusterable data sets is easy (computationally). We prove instances of this general claim with respect to the various clusterability notions that we discuss. Finally, we investigate how hard it is to determine the clusterability value of a given data set. In most cases, it turns out that this is an NP-hard problem.",
""
]
}
|
1402.4376
|
1509093258
|
We introduce a new notion of resilience for constraint satisfaction problems, with the goal of more precisely determining the boundary between NP-hardness and the existence of efficient algorithms for resilient instances. In particular, we study @math -resiliently @math -colorable graphs, which are those @math -colorable graphs that remain @math -colorable even after the addition of any @math new edges. We prove lower bounds on the NP-hardness of coloring resiliently colorable graphs, and provide an algorithm that colors sufficiently resilient graphs. We also analyze the corresponding notion of resilience for @math -SAT. This notion of resilience suggests an array of open questions for graph coloring and other combinatorial problems.
|
As our main results are on graph colorability, we review the relevant past work. A graph @math is @math -colorable if there is an assignment of @math distinct colors to the vertices of @math so that no edge is monochromatic. Determining whether @math is @math -colorable is a classic NP-hard problem @cite_14 . Many attempts to simplify the problem, such as assuming planarity or bounded degree, still result in NP-hardness @cite_26 . A large body of work surrounds positive and negative results for explicit families of graphs. The list of families that are polynomial-time colorable includes triangle-free planar graphs, perfect graphs and almost-perfect graphs, bounded tree- and clique-width graphs, quadtrees, and various families of graphs defined by the lack of an induced subgraph @cite_21 @cite_6 @cite_19 @cite_27 @cite_10 .
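The colorability and resilient-colorability definitions lend themselves to a direct brute-force check on small instances. The sketch below is our own illustrative code (exponential-time, only meant to make the definitions concrete); edges are encoded as a set of sorted vertex pairs.

```python
from itertools import combinations

def is_k_colorable(n, edges, k):
    """Backtracking search for a proper k-coloring of vertices 0..n-1.
    `edges` is a set of tuples (u, v) with u < v."""
    color = [-1] * n
    def place(v):
        if v == n:
            return True
        for c in range(k):
            if all(color[u] != c for u in range(n)
                   if (min(u, v), max(u, v)) in edges):
                color[v] = c
                if place(v + 1):
                    return True
        color[v] = -1
        return False
    return place(0)

def is_resiliently_colorable(n, edges, k, r):
    """k-colorable even after adding any r new edges (the notion above)."""
    non_edges = [e for e in combinations(range(n), 2) if e not in edges]
    return all(is_k_colorable(n, edges | set(extra), k)
               for extra in combinations(non_edges, r))
```

For instance, the path 0-1-2 is 2-colorable, but adding the single edge (0, 2) creates a triangle, so the path is not 1-resiliently 2-colorable; it is, however, 1-resiliently 3-colorable.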
|
{
"cite_N": [
"@cite_14",
"@cite_26",
"@cite_21",
"@cite_6",
"@cite_19",
"@cite_27",
"@cite_10"
],
"mid": [
"1975442866",
"1993454521",
"2130673833",
"2167757994",
"",
"",
"1882281809"
],
"abstract": [
"Throughout the 1960s I worked on combinatorial optimization problems including logic circuit design with Paul Roth and assembly line balancing and the traveling salesman problem with Mike Held. These experiences made me aware that seemingly simple discrete optimization problems could hold the seeds of combinatorial explosions. The work of Dantzig, Fulkerson, Hoffman, Edmonds, Lawler and other pioneers on network flows, matching and matroids acquainted me with the elegant and efficient algorithms that were sometimes possible. Jack Edmonds’ papers and a few key discussions with him drew my attention to the crucial distinction between polynomial-time and superpolynomial-time solvability. I was also influenced by Jack’s emphasis on min-max theorems as a tool for fast verification of optimal solutions, which foreshadowed Steve Cook’s definition of the complexity class NP. Another influence was George Dantzig’s suggestion that integer programming could serve as a universal format for combinatorial optimization problems.",
"It is shown that two sorts of problems belong to the NP-complete class. First, it is proven that for a given k-colorable graph and a given k-coloring of that graph, determining whether the graph is or is not uniquely k-colorable is NP-complete. Second, a result by Garey, Johnson, and Stockmeyer is extended with a proof that the coloring of four-regular planar graphs is NP-complete.",
"For a family F of graphs and a nonnegative integer k, F + ke and F - ke, respectively, denote the families of graphs that can be obtained from F graphs by adding and deleting at most k edges, and F + kv denotes the family of graphs that can be made into F graphs by deleting at most k vertices.This paper is mainly concerned with the parameterized complexity of the vertex colouring problem on F + ke, F - ke and F - kv for various families F of graphs. In particular, it is shown that the vertex colouring problem is fixed-parameter tractable (linear time for each fixed k) for split + ke graphs and split - ke graphs, solvable in polynomial time for each fixed k but W[1]-hard for split + kv graphs. Furthermore, the problem is solvable in linear time for bipartite + 1v graphs and bipartite + 2e graphs but, surprisingly, NP-complete for bipartite + 2v graphs and bipartite + 3e graphs.",
"We describe simple linear time algorithms for coloring the squares of balanced and unbalanced quadtrees so that no two adjacent squares are given the same color. If squares sharing sides are defined as adjacent, we color balanced quadtrees with three colors, and unbalanced quadtrees with four colors; these results are both tight, as some quadtrees require this many colors. If squares sharing corners are defined as adjacent, we color balanced or unbalanced quadtrees with six colors; for some quadtrees, at least five colors are required.",
"",
"",
"We give a complete characterization of parameter graphs H for which the problem of coloring H-free graphs is polynomial and for which it is NP-complete. We further initiate a study of this problem for two forbidden subgraphs."
]
}
|
1402.4376
|
1509093258
|
We introduce a new notion of resilience for constraint satisfaction problems, with the goal of more precisely determining the boundary between NP-hardness and the existence of efficient algorithms for resilient instances. In particular, we study @math -resiliently @math -colorable graphs, which are those @math -colorable graphs that remain @math -colorable even after the addition of any @math new edges. We prove lower bounds on the NP-hardness of coloring resiliently colorable graphs, and provide an algorithm that colors sufficiently resilient graphs. We also analyze the corresponding notion of resilience for @math -SAT. This notion of resilience suggests an array of open questions for graph coloring and other combinatorial problems.
|
With little progress on coloring general graphs, research naturally turned to approximation. In approximating the chromatic number of a general graph, the first results were due to Garey and Johnson, who gave a performance guarantee of @math colors @cite_17 and proved that it is NP-hard to approximate the chromatic number to within a constant factor less than two @cite_0 . Further work improved this bound by logarithmic factors @cite_18 @cite_16 . In terms of lower bounds, Zuckerman @cite_3 derandomized the PCP-based results of Håstad @cite_7 to prove the best known approximability lower bound to date, @math .
|
{
"cite_N": [
"@cite_18",
"@cite_7",
"@cite_3",
"@cite_0",
"@cite_16",
"@cite_17"
],
"mid": [
"2092913540",
"2081254453",
"2126186592",
"2025904765",
"2153663865",
"39763826"
],
"abstract": [
"Approximate graph coloring takes as input a graph and returns a legal coloring which is not necessarily optimal. We improve the performance guarantee, or worst-case ratio between the number of colors used and the minimum number of colors possible, to O(n (log log n)^3 / (log n)^3), an O(log n / log log n) factor better than the previous best-known result.",
"",
"A randomness extractor is an algorithm which extracts randomness from a low-quality random source, using some additional truly random bits. We construct new extractors which require only log n + O(1) additional random bits for sources with constant entropy rate. We further construct dispersers, which are similar to one-sided extractors, which use an arbitrarily small constant times log n additional random bits for sources with constant entropy rate. Our extractors and dispersers output a 1−α fraction of the randomness, for any α > 0. We use our dispersers to derandomize results of Hastad [23] and Feige-Kilian [19] and show that for all ε > 0, approximating MAX CLIQUE and CHROMATIC NUMBER to within n^{1−ε} is NP-hard. We also derandomize the results of Khot [29] and show that for some γ > 0, no quasi-polynomial time algorithm approximates MAX CLIQUE or CHROMATIC NUMBER to within n / 2^{(log n)^{1−γ}}, unless NP = P. Our constructions rely on recent results in additive number theory and extractors by Bourgain-Katz-Tao [11], Barak-Impagliazzo-Wigderson [5], Barak-Kindler-Shaltiel-Sudakov-Wigderson [6], and Raz [36]. We also simplify and slightly strengthen key theorems in the second and third of these papers, and strengthen a related theorem by Bourgain [10].",
"Graph coloring problems, in which one would like to color the vertices of a given graph with a small number of colors so that no two adjacent vertices receive the same color, arise in many applications, including various scheduling and partitioning problems. In this paper the complexity and performance of algorithms which construct such colorings are investigated. For a graph G, let χ(G) denote the minimum possible number of colors required to color G and, for any graph coloring algorithm A, let A(G) denote the number of colors used by A when applied to G. Since the graph coloring problem is known to be “NP-complete,” it is considered unlikely that any efficient algorithm can guarantee A(G) = χ(G) for all input graphs. In this paper it is proved that even coming close to χ(G) with a fast algorithm is hard. Specifically, it is shown that if for some constants r < 2 and d there exists a polynomial-time algorithm A which guarantees A(G) ≤ r·χ(G) + d, then there also exists a polynomial-time algorithm A′ which guarantees A′(G) = χ(G).",
"We present an approximation algorithm for graph coloring which achieves a performance guarantee of O(n (log log n)^2 / (log n)^3), a factor of log log n improvement.",
""
]
}
|
1402.4376
|
1509093258
|
We introduce a new notion of resilience for constraint satisfaction problems, with the goal of more precisely determining the boundary between NP-hardness and the existence of efficient algorithms for resilient instances. In particular, we study @math -resiliently @math -colorable graphs, which are those @math -colorable graphs that remain @math -colorable even after the addition of any @math new edges. We prove lower bounds on the NP-hardness of coloring resiliently colorable graphs, and provide an algorithm that colors sufficiently resilient graphs. We also analyze the corresponding notion of resilience for @math -SAT. This notion of resilience suggests an array of open questions for graph coloring and other combinatorial problems.
|
There has been much recent interest in coloring graphs which are already known to be colorable while minimizing the number of colors used. For a 3-colorable graph, Wigderson gave an algorithm using at most @math colors @cite_24 , which Blum improved to @math @cite_23 . A line of research improved this bound still further to @math @cite_28 . Despite the difficulty of improving the constant in the exponent, there is, as Arora and Ge suggest @cite_22 , no evidence that coloring a 3-colorable graph with as few as @math colors is hard.
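Wigderson's bound has a short constructive proof, which the sketch below mimics (our own illustrative implementation, not code from the cited papers; it assumes the input graph really is 3-colorable): while some vertex has degree at least about √n, its neighborhood must be bipartite, so two fresh colors suffice for it; the low-degree remainder is colored greedily.

```python
import math
from collections import deque

def wigderson_color(adj):
    """Sketch of Wigderson-style O(sqrt(n)) coloring of a 3-colorable graph.
    adj: dict vertex -> set of neighbours (symmetric). Returns vertex -> colour."""
    n = len(adj)
    live = {v: set(ns) for v, ns in adj.items()}   # mutable working copy
    colour, next_c = {}, 0
    thresh = math.isqrt(n) + 1
    while live:
        v = max(live, key=lambda u: len(live[u]))
        if len(live[v]) < thresh:
            break                                  # remainder has low degree
        # N(v) is bipartite (its vertices share the neighbour v in a
        # 3-colorable graph), so BFS 2-colours it with two fresh colours.
        nbrs = set(live[v])
        side = {}
        for s in nbrs:
            if s in side:
                continue
            side[s] = 0
            q = deque([s])
            while q:
                u = q.popleft()
                for w in live[u] & nbrs:
                    if w not in side:
                        side[w] = 1 - side[u]
                        q.append(w)
        for u in nbrs:
            colour[u] = next_c + side[u]
        next_c += 2
        for u in nbrs:                             # delete coloured vertices
            for w in live[u]:
                live[w].discard(u)
            del live[u]
    for v in list(live):                           # greedy on the remainder
        used = {colour[u] for u in adj[v] if u in colour}
        c = next_c
        while c in used:
            c += 1
        colour[v] = c
    return colour
```

Each high-degree phase spends two colors to remove at least √n vertices, and the greedy phase needs at most about √n more, giving O(√n) colors overall.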
|
{
"cite_N": [
"@cite_24",
"@cite_28",
"@cite_22",
"@cite_23"
],
"mid": [
"1982506822",
"1529433769",
"1947552815",
"2036660814"
],
"abstract": [
"The performance guarantee of a graph coloring algorithm is the worst case ratio between the number of colors it uses on the input graph and the chromatic number of this graph. The previous best known polynomial-time algorithm had a performance guarantee of O(n / log n) for graphs on n vertices. This result stood unchallenged for eight years. This paper presents an efficient algorithm with performance guarantee of O(n (log log n)^2 / (log n)^2).",
"We show how the results of Karger, Motwani, and Sudan (1994) and Blum (1994) can be combined in a natural manner to yield a polynomial-time algorithm for O(n^{3/14})-coloring any n-node 3-colorable graph. This improves on the previous best bound of O(n^{1/4}) colors (Karger, Motwani, and Sudan, 1994).",
"How to color 3-colorable graphs with few colors is a problem of longstanding interest. The best polynomial-time algorithm uses n^{0.2072} colors. There are no indications that coloring using say O(log n) colors is hard. It has been suggested that SDP hierarchies could be used to design algorithms that use n^ε colors for arbitrarily small ε > 0. We explore this possibility in this paper and find some cause for optimism. While the case of general graphs is still open, we can analyse the Lasserre relaxation for two interesting families of graphs. For graphs with low threshold rank (a class of graphs identified in the recent paper of Arora, Barak and Steurer on the unique games problem), Lasserre relaxations can be used to find an independent set of size Ω(n) (i.e., progress towards a coloring with O(log n) colors) in n^{O(D)} time, where D is the threshold rank of the graph. This algorithm is inspired by recent work of Barak, Raghavendra, and Steurer on using the Lasserre hierarchy for unique games. The algorithm can also be used to show that known integrality gap instances for SDP relaxations like strict vector chromatic number cannot survive a few rounds of Lasserre lifting, which also seems reason for optimism. For distance-transitive graphs of diameter Δ, we can show how to color them using O(log n) colors in n·2^{O(Δ)} time. This family is interesting because the family of graphs of diameter O(1/ε) is easily seen to be complete for coloring with n^ε colors. The distance-transitive property implies that the graph \"looks\" the same in all neighborhoods. The full version of this paper can be found at: http://www.cs.princeton.edu/~rongge/LasserreColoring.pdf",
"The problem of coloring a graph with the minimum number of colors is well known to be NP-hard, even restricted to k-colorable graphs for constant k ≥ 3. This paper explores the approximation problem of coloring k-colorable graphs with as few additional colors as possible in polynomial time, with special focus on the case of k = 3. The previous best upper bound on the number of colors needed for coloring 3-colorable n-vertex graphs in polynomial time was O((n / log n)^{1/2}) colors by Berger and Rompel, improving a bound of O(n^{1/2}) colors by Wigderson. This paper presents an algorithm to color any 3-colorable graph with O(n^{3/8} polylog n) colors, thus breaking an “O(n^{1/2−o(1)}) barrier”. The algorithm given here is based on examining second-order neighborhoods of vertices, rather than just immediate neighborhoods of vertices as in previous approaches. We extend our results to improve the worst-case bounds for coloring k-colorable graphs for constant k > 3 as well."
]
}
|
1402.4376
|
1509093258
|
We introduce a new notion of resilience for constraint satisfaction problems, with the goal of more precisely determining the boundary between NP-hardness and the existence of efficient algorithms for resilient instances. In particular, we study @math -resiliently @math -colorable graphs, which are those @math -colorable graphs that remain @math -colorable even after the addition of any @math new edges. We prove lower bounds on the NP-hardness of coloring resiliently colorable graphs, and provide an algorithm that colors sufficiently resilient graphs. We also analyze the corresponding notion of resilience for @math -SAT. This notion of resilience suggests an array of open questions for graph coloring and other combinatorial problems.
|
On the other hand, there are asymptotic and concrete lower bounds. Khot @cite_9 proved that for sufficiently large @math it is NP-hard to color a @math -colorable graph with fewer than @math colors; this was improved by Huang to @math @cite_25 . It is also known that for every constant @math there exists a sufficiently large @math such that coloring a @math -colorable graph with @math colors is NP-hard @cite_20 . In the non-asymptotic case, Khanna, Linial, and Safra @cite_2 used the PCP theorem to prove that it is NP-hard to 4-color a 3-colorable graph, and more generally to color a @math -colorable graph with at most @math colors. Guruswami and Khanna give an explicit reduction for @math @cite_4 . Assuming a variant of Khot's 2-to-1 conjecture, the authors of @cite_20 prove that distinguishing between chromatic number @math and @math is hard for constants @math . This is the best conditional lower bound we give in , but to our knowledge it does not imply Theorem .
|
{
"cite_N": [
"@cite_4",
"@cite_9",
"@cite_2",
"@cite_25",
"@cite_20"
],
"mid": [
"1969999211",
"2132991500",
"2104254886",
"1492972816",
""
],
"abstract": [
"We give a new proof showing that it is NP-hard to color a 3-colorable graph using just 4 colors. This result is already known , [S. Khanna, N. Linial, and S. Safra, Combinatorica, 20 (2000), pp. 393--415], but our proof is novel because it does not rely on the PCP theorem, while the known one does. This highlights a qualitative difference between the known hardness result for coloring 3-colorable graphs and the factor @math hardness for approximating the chromatic number of general graphs, as the latter result is known to imply (some form of) PCP theorem [M. Bellare, O. Goldreich, and M. Sudan, SIAM J. Comput., 27 (1998), pp. 805--915]. Another aspect in which our proof is novel is in its use of the PCP theorem to show that 4-coloring of 3-colorable graphs remains NP-hard even on bounded-degree graphs (this hardness result does not seem to follow from the earlier reduction of Khanna, Linial, and Safra). We point out that such graphs can always be colored using O(1) colors by a simple greedy algorithm, while the best known algorithm for coloring (general) 3-colorable graphs requires @math colors. Our proof technique also shows that there is an @math such that it is NP-hard to legally 4-color even a @math fraction of the edges of a 3-colorable graph.",
"The author presents improved inapproximability results for three problems: the problem of finding the maximum clique size in a graph, the problem of finding the chromatic number of a graph, and the problem of coloring a graph with a small chromatic number with a small number of colors. J. Håstad's (1996) result shows that the maximum clique size in a graph with n vertices is inapproximable in polynomial time within a factor n^{1−ε} for arbitrarily small constant ε > 0 unless NP = ZPP. We aim at getting the best subconstant value of ε in Håstad's result. We prove that clique size is inapproximable within a factor n / 2^{(log n)^{1−γ}}, corresponding to ε = 1/(log n)^γ for some constant γ > 0, unless NP ⊆ ZPTIME(2^{(log n)^{O(1)}}). This improves the previous best inapproximability factor of n / 2^{O(log n / √(log log n))} (corresponding to ε = O(1/√(log log n))) due to L. Engebretsen and J. Holmerin (2000). A similar result is obtained for the problem of approximating the chromatic number of a graph. We also present a new hardness result for approximate graph coloring. We show that for all sufficiently large constants k, it is NP-hard to color a k-colorable graph with k^{(log k)/25} colors. This improves a result of M. Fürer (1995) that for arbitrarily small constant ε > 0, for sufficiently large constants k, it is hard to color a k-colorable graph with k^{3/2−ε} colors.",
"The paper considers the computational hardness of approximating the chromatic number of a graph. The authors first give a simple proof that approximating the chromatic number of a graph to within a constant power (of the value itself) is NP-hard. They then consider the hardness of coloring a 3-colorable graph with as few as possible colors. They show that determining whether a graph is 3-colorable or any legal coloring of it requires at least 5 colors is NP-hard. Therefore, coloring a 3-colorable graph with 4 colors is NP-hard.",
"We prove that for sufficiently large K, it is NP-hard to color K-colorable graphs with fewer than 2^{K^{1/3}} colors. This improves the previous result of K versus K^{(log K)/25} in Khot [1].",
""
]
}
|
1402.4258
|
2953283380
|
The focus of this article is to develop computationally efficient mathematical morphology operators on hypergraphs. To this aim we consider lattice structures on hypergraphs on which we build morphological operators. We develop a pair of dual adjunctions between the vertex set and the hyper edge set of a hypergraph H, by defining a vertex-hyperedge correspondence. This allows us to recover the classical notion of a dilation erosion of a subset of vertices and to extend it to subhypergraphs of H. Afterward, we propose several new openings, closings, granulometries and alternate sequential filters acting (i) on the subsets of the vertex and hyperedge set of H and (ii) on the subhypergraphs of a hypergraph.
|
The theory of hypergraphs originated as a natural generalisation of graphs in the 1960s. In a hypergraph, edges can connect any number of vertices and are called hyperedges. Considering the topological and geometrical aspects of an image, Bretto @cite_14 proposed a hypergraph model to represent an image, and hypergraph theory has since become an active area of research in image analysis @cite_4 , @cite_17 . The study of mathematical morphology operators on hypergraphs started only recently, and little work has been reported in this regard. Properties of morphological operators on hypergraphs are studied in @cite_10 , in which subhypergraphs are considered as relations on hypergraphs. Recently, Bloch and Bretto @cite_5 introduced mathematical morphology on hypergraphs by forming various lattices on hypergraphs. Similarity measures and pseudo-metrics based on mathematical morphology are defined and illustrated in @cite_1 . Based on these morphological operators, similarity measures are used for the classification of data represented as hypergraphs @cite_4 .
|
{
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_1",
"@cite_5",
"@cite_10",
"@cite_17"
],
"mid": [
"",
"",
"2264525079",
"32519779",
"1821479108",
"2082622657"
],
"abstract": [
"",
"",
"In the framework of structural representations for applications in image understanding, we establish links between similarities, hypergraph theory and mathematical morphology. We propose new similarity measures and pseudo-metrics on lattices of hypergraphs based on morphological operators. New forms of these operators on hypergraphs are introduced as well. Some examples based on various dilations and openings on hypergraphs illustrate the relevance of our approach.",
"In this article we introduce mathematical morphology on hypergraphs. We first define lattice structures and then mathematical morphology operators on hypergraphs. We show some relations between these operators and the hypergraph structure, considering in particular duality and similarity aspects.",
"A relation on a hypergraph is a binary relation on the set consisting of the nodes and hyperedges together, and which satisfies a constraint involving the incidence structure of the hypergraph. These relations correspond to join-preserving mappings on the lattice of sub-hypergraphs. This paper studies the algebra of these relations, in particular the analogues of the familiar operations of complement and converse of relations. When generalizing from relations on a set to relations on a hypergraph we find that the Boolean algebra of relations is replaced by a weaker structure: a pair of isomorphic bi-Heyting algebras, one of which arises from the relations on the dual hypergraph. The paper also considers the representation of sub-hypergraphs as relations and applies the results obtained to mathematical morphology for hypergraphs.",
"Abstract An algorithm is designed for the hypergraph (HG) representation of an image, subsequent detection of Salt and Pepper (SP) noise in the image and finally the restoration of the image from this noise. The image is first represented as the set union of hyperedges. As for the hyperedges themselves, these are determined by two Image Neighborhood Hypergraph (INHG) parameters, with the concepts of 8-bit neighborhood and INHG of a graph being central. The images taken up for experimental analyses are subjected to the Contra Harmonic Mean (CHM) filter for SP noise removal. The proposed algorithm exhibits superiority over traditional algorithms and recently proposed ones in terms of visual quality, Peak Signal to Noise Ratio (PSNR) and Mean Absolute Error (MAE). This superior performance of the CHM Filter is solely due to the HG representation of the test images."
]
}
|
1402.4304
|
2950326163
|
This paper presents the beginnings of an automatic statistician, focusing on regression problems. Our system explores an open-ended space of statistical models to discover a good explanation of a data set, and then produces a detailed report with figures and natural-language text. Our approach treats unknown regression functions nonparametrically using Gaussian processes, which has two important consequences. First, Gaussian processes can model functions in terms of high-level properties (e.g. smoothness, trends, periodicity, changepoints). Taken together with the compositional structure of our language of models this allows us to automatically describe functions in simple terms. Second, the use of flexible nonparametric models and a rich language for composing them in an open-ended manner also results in state-of-the-art extrapolation performance evaluated over 13 real time series data sets from various domains.
|
@cite_2 devote 4 pages to manually constructing a composite kernel to model a time series of carbon dioxide concentrations. In the supplementary material, we include a report automatically generated by for this dataset; our procedure chose a model similar to the one they constructed by hand. Other examples of papers whose main contribution is to manually construct and fit a composite kernel are @cite_1 and @cite_3 .
|
{
"cite_N": [
"@cite_1",
"@cite_3",
"@cite_2"
],
"mid": [
"2114001875",
"",
"2031823405"
],
"abstract": [
"From the Publisher: This is a complete revision of a classic, seminal, and authoritative book that has been the model for most books on the topic written since 1970. It focuses on practical techniques throughout, rather than a rigorous mathematical treatment of the subject. It explores the building of stochastic (statistical) models for time series and their use in important areas of application forecasting, model specification, estimation, and checking, transfer function modeling of dynamic relationships, modeling the effects of intervention events, and process control. Features sections on: recently developed methods for model specification, such as canonical correlation analysis and the use of model selection criteria; results on testing for unit root nonstationarity in ARIMA processes; the state space representation of ARMA models and its use for likelihood estimation and forecasting; score test for model checking; and deterministic components and structural components in time series models and their estimation based on regression-time series model methods.",
"",
"While classical kernel-based classifiers are based on a single kernel, in practice it is often desirable to base classifiers on combinations of multiple kernels. (2004) considered conic combinations of kernel matrices for the support vector machine (SVM), and showed that the optimization of the coefficients of such a combination reduces to a convex optimization problem known as a quadratically-constrained quadratic program (QCQP). Unfortunately, current convex optimization toolboxes can solve this problem only for a small number of kernels and a small number of data points; moreover, the sequential minimal optimization (SMO) techniques that are essential in large-scale implementations of the SVM cannot be applied because the cost function is non-differentiable. We propose a novel dual formulation of the QCQP as a second-order cone programming problem, and show how to exploit the technique of Moreau-Yosida regularization to yield a formulation to which SMO techniques can be applied. We present experimental results that show that our SMO-based algorithm is significantly more efficient than the general-purpose interior point methods available in current optimization toolboxes."
]
}
|
1402.4246
|
1585534068
|
Raptor codes are the first class of fountain codes with linear time encoding and decoding. These codes are recommended in standards such as Third Generation Partnership Project (3GPP) and digital video broadcasting. RaptorQ codes are an extension to Raptor codes, having better coding efficiency and flexibility. Standard Raptor and RaptorQ codes are systematic with equal error protection of the data. However, in many applications such as MPEG transmission, there is a need for Unequal Error Protection (UEP): namely, some data symbols require higher error correction capabilities compared to others. We propose an approach that we call Priority Based Precode Ratio (PBPR) to achieve UEP for systematic RaptorQ and Raptor codes. Our UEP assumes that all symbols in a source block belong to the same importance class. The UEP is achieved by changing the number of precode symbols depending on the priority of the information symbols in the source block. PBPR provides UEP with the same number of decoding overhead symbols for source blocks with different importance classes. We demonstrate consistent improvements in the error correction capability of higher importance class compared to the lower importance class across the entire range of channel erasure probabilities. We also show that PBPR does not result in a significant increase in decoding and encoding times compared to the standard implementation.
|
A novel scheme of UEP for the Raptor family of codes was first proposed by Rahnavard et al. @cite_3 . It is a generic scheme that can be used for any fountain code. This method was subsequently improved to produce different variations, such as the one proposed in @cite_4 . In these schemes, UEP is achieved by modifying the selection probability of the source symbols during check symbol generation. The precode (LDPC encoding/decoding) and LT coding are treated as two separate, independent phases, and UEP is applied either at LT coding (UEP-LT codes), at the precode (UEP-LDPC codes), or both. The underlying idea is to select source symbols from higher importance classes with higher probability than those from lower importance classes when generating check symbols. Although such schemes are very effective, they cannot be used with standard Raptor codes, which are systematic.
|
{
"cite_N": [
"@cite_4",
"@cite_3"
],
"mid": [
"2061883459",
"2134499105"
],
"abstract": [
"In this paper, we propose a new scheme to construct Raptor codes that can provide unequal error protection (UEP) and unequal recovery time (URT) properties. We use joint optimisation of UEP-LDPC codes and UEP-LT codes to improve the performance of more important bits (MIB), and derive unequal density evolution (UDE) formulas over binary erasure channels (BEC). Using the UDE formulas, we optimise the UEP-Raptor codes. For the finite length case, we compare our UEP-Raptor codes with the previous UEP-Raptor code. Simulation results show that, compared with the previous UEP-Raptor code, the proposed UEP-Raptor codes can get better performances of both MIB and less important bits (LIB). Copyright © 2009 John Wiley & Sons, Ltd.",
"In this correspondence, a generalization of rateless codes is proposed. The proposed codes provide unequal error protection (UEP). The asymptotic properties of these codes under the iterative decoding are investigated. Moreover, upper and lower bounds on maximum-likelihood (ML) decoding error probabilities of finite-length LT and Raptor codes for both equal and unequal error protection schemes are derived. Further, our work is verified with simulations. Simulation results indicate that the proposed codes provide desirable UEP. We also note that the UEP property does not impose a considerable drawback on the overall performance of the codes. Moreover, we discuss that the proposed codes can provide unequal recovery time (URT). This means that given a target bit error rate, different parts of information bits can be decoded after receiving different amounts of encoded bits. This implies that the information bits can be recovered in a progressive manner. This URT property may be used for sequential data recovery in video audio streaming"
]
}
|
1402.4246
|
1585534068
|
Raptor codes are the first class of fountain codes with linear time encoding and decoding. These codes are recommended in standards such as Third Generation Partnership Project (3GPP) and digital video broadcasting. RaptorQ codes are an extension to Raptor codes, having better coding efficiency and flexibility. Standard Raptor and RaptorQ codes are systematic with equal error protection of the data. However, in many applications such as MPEG transmission, there is a need for Unequal Error Protection (UEP): namely, some data symbols require higher error correction capabilities compared to others. We propose an approach that we call Priority Based Precode Ratio (PBPR) to achieve UEP for systematic RaptorQ and Raptor codes. Our UEP assumes that all symbols in a source block belong to the same importance class. The UEP is achieved by changing the number of precode symbols depending on the priority of the information symbols in the source block. PBPR provides UEP with the same number of decoding overhead symbols for source blocks with different importance classes. We demonstrate consistent improvements in the error correction capability of higher importance class compared to the lower importance class across the entire range of channel erasure probabilities. We also show that PBPR does not result in a significant increase in decoding and encoding times compared to the standard implementation.
|
Another method for achieving UEP with fountain codes is to divide the information symbols into multiple overlapping and expanding windows. Symbols with higher priority are included in more of these windows than symbols with lower priority, which increases their recovery probability. This method is applied to Raptor codes by partitioning the source symbols into multiple overlapping windows to achieve better error correction @cite_11 .
|
{
"cite_N": [
"@cite_11"
],
"mid": [
"2109361284"
],
"abstract": [
"A novel approach to provide unequal error protection (UEP) using rateless codes over erasure channels, named Expanding Window Fountain (EWF) codes, is developed and discussed. EWF codes use a windowing technique rather than a weighted (non-uniform) selection of input symbols to achieve UEP property. The windowing approach introduces additional parameters in the UEP rateless code design, making it more general and flexible than the weighted approach. Furthermore, the windowing approach provides better performance of UEP scheme, which is confirmed both theoretically and experimentally."
]
}
|
1402.4246
|
1585534068
|
Raptor codes are the first class of fountain codes with linear time encoding and decoding. These codes are recommended in standards such as Third Generation Partnership Project (3GPP) and digital video broadcasting. RaptorQ codes are an extension to Raptor codes, having better coding efficiency and flexibility. Standard Raptor and RaptorQ codes are systematic with equal error protection of the data. However, in many applications such as MPEG transmission, there is a need for Unequal Error Protection (UEP): namely, some data symbols require higher error correction capabilities compared to others. We propose an approach that we call Priority Based Precode Ratio (PBPR) to achieve UEP for systematic RaptorQ and Raptor codes. Our UEP assumes that all symbols in a source block belong to the same importance class. The UEP is achieved by changing the number of precode symbols depending on the priority of the information symbols in the source block. PBPR provides UEP with the same number of decoding overhead symbols for source blocks with different importance classes. We demonstrate consistent improvements in the error correction capability of higher importance class compared to the lower importance class across the entire range of channel erasure probabilities. We also show that PBPR does not result in a significant increase in decoding and encoding times compared to the standard implementation.
|
UEP for systematic Raptor codes has been proposed recently with a new design for Raptor codes @cite_0 , which differs from the standard design. The intermediate symbols are generated with a generator matrix in a manner similar to the standard scheme, and these symbols are encoded using separate LDPC and LT encoding steps to generate the encoding symbols. UEP is achieved by partitioning the encoding matrices into different sub-matrices and changing the properties of these sub-matrices to group the symbols into different importance classes.
|
{
"cite_N": [
"@cite_0"
],
"mid": [
"2094482410"
],
"abstract": [
"The third generation partnership project (3GPP) and digital video broadcasting-handheld standards recommend systematic Raptor codes as application-layer forward error correction for reliable transmission of multimedia data. In all previous studies on systematic Raptor codes, equal error protection for all data was considered. However, in many applications, multimedia data requires unequal error protection (UEP) that provides different levels of protection to different parts of multimedia data. In this paper, we propose a new design method for Raptor codes that provide both UEP and systematic properties over binary erasure channels. Numerical results show that the proposed UEP design is effective for reliable multi-level protection."
]
}
|
1402.4010
|
2130531711
|
Geographical information systems are ideal candidates for the application of parallel programming techniques, mainly because they usually handle large data sets. To help us deal with complex calculations over such data sets, we investigated the performance constraints of a classic master–worker parallel paradigm over a message-passing communication model. To this end, we present a new approach that employs an external database in order to improve the calculation–communication overlap, thus reducing the idle times for the worker processes. The presented approach is implemented as part of a parallel radio-coverage prediction tool for the Geographic Resources Analysis Support System (GRASS) environment. The prediction calculation employs digital elevation models and land-usage data in order to analyze the radio coverage of a geographical area. We provide an extended analysis of the experimental results, which are based on real data from a Long Term Evolution (LTE) network currently deployed in Slovenia. Based on the results of the experiments, which were performed on a computer cluster, the new approach exhibits better scalability than the traditional master–worker approach. We successfully tackled real-world-sized data sets, while greatly reducing the processing time and saturating the hardware utilization.
|
The task-parallelization problem within the GRASS environment has been addressed by several authors in a variety of studies. For example, in @cite_7 , the authors present a collection of GRASS modules for a watershed analysis. Their work concentrates on different ways of slicing raster maps to take advantage of a Message Passing Interface (MPI) implementation.
|
{
"cite_N": [
"@cite_7"
],
"mid": [
"129124863"
],
"abstract": [
"This work proposes a joint implementation of spatially distributed runoff and soil erosion analysis in watersheds allowing subsequent modelization of nutrients transport processes originating from distributed sources. Implemented relying on the open source GRASS (Geographic Resources Analysis Support System) GIS (Geographical Information System), a new design for the raster operation routines is specially created to take advantage of the MPI possibilities and available GRID resources."
]
}
|
1402.4010
|
2130531711
|
Geographical information systems are ideal candidates for the application of parallel programming techniques, mainly because they usually handle large data sets. To help us deal with complex calculations over such data sets, we investigated the performance constraints of a classic master–worker parallel paradigm over a message-passing communication model. To this end, we present a new approach that employs an external database in order to improve the calculation–communication overlap, thus reducing the idle times for the worker processes. The presented approach is implemented as part of a parallel radio-coverage prediction tool for the Geographic Resources Analysis Support System (GRASS) environment. The prediction calculation employs digital elevation models and land-usage data in order to analyze the radio coverage of a geographical area. We provide an extended analysis of the experimental results, which are based on real data from a Long Term Evolution (LTE) network currently deployed in Slovenia. Based on the results of the experiments, which were performed on a computer cluster, the new approach exhibits better scalability than the traditional master–worker approach. We successfully tackled real-world-sized data sets, while greatly reducing the processing time and saturating the hardware utilization.
|
In the field of high-performance computing, the authors of @cite_22 presented implementation examples of a GRASS raster module, used to process vegetation indexes for satellite images, for MPI and Ninf-G environments. The authors acknowledge a performance limitation of their MPI implementation for large processing jobs: because the input data are distributed equally among the worker processes, each computing node is fixed to a specific spatial range, which hinders load balancing in heterogeneous environments.
|
{
"cite_N": [
"@cite_22"
],
"mid": [
"2660313958"
],
"abstract": [
"Satellite imagery provides a large amount of useful information. To extract this information and understand them may require huge computing power and processing time. Distributed computing can reduce the processing time by providing more computational power. GRASS, an open source software, has been used for processing the satellite images. To let the GRASS modules benefit from distributed computing, an example module r.vi is ported in both MPI (r.vi.mpi) and Ninf-G (r.vi.grid) programming modules. Their implementation methodologies are the main discussion issue of this paper which will guide the basic way of representing any GRASS raster module in distributed platform. Additionally, a comparative study on modified r.vi, r.vi.mpi and r.vi.grid is presented here."
]
}
|
1402.4010
|
2130531711
|
Geographical information systems are ideal candidates for the application of parallel programming techniques, mainly because they usually handle large data sets. To help us deal with complex calculations over such data sets, we investigated the performance constraints of a classic master–worker parallel paradigm over a message-passing communication model. To this end, we present a new approach that employs an external database in order to improve the calculation–communication overlap, thus reducing the idle times for the worker processes. The presented approach is implemented as part of a parallel radio-coverage prediction tool for the Geographic Resources Analysis Support System (GRASS) environment. The prediction calculation employs digital elevation models and land-usage data in order to analyze the radio coverage of a geographical area. We provide an extended analysis of the experimental results, which are based on real data from a Long Term Evolution (LTE) network currently deployed in Slovenia. Based on the results of the experiments, which were performed on a computer cluster, the new approach exhibits better scalability than the traditional master–worker approach. We successfully tackled real-world-sized data sets, while greatly reducing the processing time and saturating the hardware utilization.
|
Using a master-worker technique, the work by @cite_38 abstracts the GRASS data types into its own and MPI data types, so GRASS is not required on the worker nodes. The data are distributed evenly by row among the workers, with each one receiving an exclusive column extent to work on. The test cluster contains heterogeneous hardware configurations. The authors note that the data-set size is bounded by the amount of memory on each node, since the memory for the whole map is allocated during the set-up stage, before the calculation starts. The largest data set used in the simulations contains 3,265,110 points. The authors conclude that the data-set size should be large enough for the communication overhead to be hidden by the calculation time, so that the parallelization pays off.
|
{
"cite_N": [
"@cite_38"
],
"mid": [
"2085560840"
],
"abstract": [
"To design and implement an open-source parallel GIS (OP-GIS) based on a Linux cluster, the parallel inverse distance weighting (IDW) interpolation algorithm has been chosen as an example to explore the working model and the principle of algorithm parallel pattern (APP), one of the parallelization patterns for OP-GIS. Based on an analysis of the serial IDW interpolation algorithm of GRASS GIS, this paper has proposed and designed a specific parallel IDW interpolation algorithm, incorporating both single process, multiple data (SPMD) and master slave (M S) programming modes. The main steps of the parallel IDW interpolation algorithm are: (1) the master node packages the related information, and then broadcasts it to the slave nodes; (2) each node calculates its assigned data extent along one row using the serial algorithm; (3) the master node gathers the data from all nodes; and (4) iterations continue until all rows have been processed, after which the results are outputted. According to the experiments performed in the course of this work, the parallel IDW interpolation algorithm can attain an efficiency greater than 0.93 compared with similar algorithms, which indicates that the parallel algorithm can greatly reduce processing time and maximize speed and performance."
]
}
|
1402.4010
|
2130531711
|
Geographical information systems are ideal candidates for the application of parallel programming techniques, mainly because they usually handle large data sets. To help us deal with complex calculations over such data sets, we investigated the performance constraints of a classic master–worker parallel paradigm over a message-passing communication model. To this end, we present a new approach that employs an external database in order to improve the calculation–communication overlap, thus reducing the idle times for the worker processes. The presented approach is implemented as part of a parallel radio-coverage prediction tool for the Geographic Resources Analysis Support System (GRASS) environment. The prediction calculation employs digital elevation models and land-usage data in order to analyze the radio coverage of a geographical area. We provide an extended analysis of the experimental results, which are based on real data from a Long Term Evolution (LTE) network currently deployed in Slovenia. Based on the results of the experiments, which were performed on a computer cluster, the new approach exhibits better scalability than the traditional master–worker approach. We successfully tackled real-world-sized data sets, while greatly reducing the processing time and saturating the hardware utilization.
|
In @cite_5 , the authors employ a master-worker approach with one worker process per worker node, using OpenMP to fully exploit the computing resources of each node. The experimental environment features one host. The horizon-composition algorithm has no calculation dependency among the spatial blocks, so the digital elevation model (DEM) may be divided into separate blocks that each worker process calculates independently. The authors present an improved algorithm that can also be used to accelerate other applications, such as visibility maps. The tasks are dynamically assigned to idle processes using a task-farming paradigm over MPI.
|
{
"cite_N": [
"@cite_5"
],
"mid": [
"1967371417"
],
"abstract": [
"This work presents a high-performance algorithm to compute the horizon in very large high-resolution DEMs. We used Stewart's algorithm as the core of our implementation and considered that the horizon has three components: the ground, near, and far horizons. To eliminate the edge-effect, we introduced a multi-resolution halo method. Moreover, we used a new data partition approach, to substantially increase the parallelism in the algorithm. In addition, several optimizations have been applied to considerably reduce the number of arithmetical operations in the core of the algorithm. The experimental results have demonstrated that by applying the above-described contributions, the proposed algorithm is more than twice faster than Stewart's algorithm while maintaining the same accuracy."
]
}
|