| aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1203.3953
|
1989430134
|
Motivated by applications in quantum chemistry and solid state physics, we apply general results from approximation theory and matrix analysis to the study of the decay properties of spectral projectors associated with large and sparse Hermitian matrices. Our theory leads to a rigorous proof of the exponential off-diagonal decay ("nearsightedness") for the density matrix of gapped systems at zero electronic temperature in both orthogonal and non-orthogonal representations, thus providing a firm theoretical basis for the possibility of linear scaling methods in electronic structure calculations for non-metallic systems. We further discuss the case of density matrices for metallic systems at positive electronic temperature. A few other possible applications are also discussed.
|
It is interesting to compare these statements with two earlier ones by Stefan Goedecker. In @cite_9 he wrote (page 261):
|
{
"cite_N": [
"@cite_9"
],
"mid": [
"2068725607"
],
"abstract": [
"Electronic structure calculations which are based on Wannier-like localized orbitals, or on the related density matrix, are an alternative to conventional calculations based on extended orbitals. For large systems this approach is potentially faster since it offers O(N) scaling with respect to the number of atoms in the system. We derive a class of algorithms based on projection to calculate either the localized orbitals or the density matrix."
]
}
|
1203.3885
|
1551766946
|
The DEPAS (Decentralized Probabilistic Auto-Scaling) algorithm assumes an overlay network of computing nodes where each node probabilistically decides to shut down, to allocate one or more other nodes, or to do nothing. DEPAS was formulated, tested, and theoretically analyzed for the simplified case of homogeneous systems. In this paper, we extend DEPAS to heterogeneous systems.
|
In this paper we discuss a few other decentralized approaches to auto-scaling. A detailed state of the art on autonomic resource provisioning for cloud computing can be found in @cite_8 .
|
{
"cite_N": [
"@cite_8"
],
"mid": [
"1985644694"
],
"abstract": [
"The dynamic provisioning of virtualized resources offered by cloud computing infrastructures allows applications deployed in a cloud environment to automatically increase and decrease the amount of used resources. This capability is called auto-scaling and its main purpose is to automatically adjust the scale of the system that is running the application to satisfy the varying workload with minimum resource utilization. The need for auto-scaling is particularly important during workload peaks, in which applications may need to scale up to extremely large-scale systems. Both the research community and the main cloud providers have already developed auto-scaling solutions. However, most research solutions are centralized and not suitable for managing large-scale systems; moreover, cloud providers’ solutions are bound to the limitations of a specific provider in terms of resource prices, availability, reliability, and connectivity. In this paper we propose DEPAS, a decentralized probabilistic auto-scaling algorithm integrated into a P2P architecture that is cloud provider independent, thus allowing the auto-scaling of services over multiple cloud infrastructures at the same time. Our experiments (simulations and real deployments), which are based on real service traces, show that our approach is capable of: (i) keeping the overall utilization of all the instantiated cloud resources in a target range, (ii) maintaining service response times close to the ones obtained using optimal centralized auto-scaling approaches."
]
}
|
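The per-node decision rule that DEPAS builds on can be sketched in a few lines. This is an illustrative simplification, not the paper's exact algorithm: the function name, the target-utilization range, and the way the load estimate is obtained are all assumptions made for the sketch.

```python
import random

def depas_decision(local_load, target_low, target_high, capacity, rng=random):
    """One DEPAS-style step for a single node (hypothetical simplification).

    local_load: this node's estimate of the average system load per node.
    target_low/target_high: desired utilization range.
    capacity: this node's processing capacity.
    Returns +k to allocate k new nodes, -1 to shut down, 0 to do nothing.
    """
    utilization = local_load / capacity
    if utilization > target_high:
        # Over-utilized: allocate extra nodes. The integer part of the relative
        # excess is allocated for sure; the fractional part probabilistically,
        # so the expected number of new nodes across all peers matches the
        # global capacity deficit.
        excess = utilization / target_high - 1.0
        n_new = int(excess)
        if rng.random() < excess - n_new:
            n_new += 1
        return n_new
    if utilization < target_low:
        # Under-utilized: shut down with probability equal to the relative
        # surplus, so in expectation just enough nodes remove themselves.
        surplus = 1.0 - utilization / target_low
        if rng.random() < surplus:
            return -1
    return 0
```

Because every node applies the same rule independently on its local estimate, no central coordinator is needed, which is the core of the decentralized design.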
1203.3885
|
1551766946
|
The DEPAS (Decentralized Probabilistic Auto-Scaling) algorithm assumes an overlay network of computing nodes where each node probabilistically decides to shut down, to allocate one or more other nodes, or to do nothing. DEPAS was formulated, tested, and theoretically analyzed for the simplified case of homogeneous systems. In this paper, we extend DEPAS to heterogeneous systems.
|
A decentralized, economics-inspired solution to the auto-scaling of component-based systems was proposed by @cite_10 . They use a multi-agent approach in which each server is managed by a server agent. This agent makes decisions about the migration, replication, and removal of the components deployed on that server. The problem is that each agent stores a complete mapping (maintained through gossiping) of components and servers; in other words, each agent has a complete view of the system. Because of that, although the approach is decentralized in the sense that there is no central manager, it does not scale with the number of components and servers.
|
{
"cite_N": [
"@cite_10"
],
"mid": [
"2108108865"
],
"abstract": [
"Significant achievements have been made for automated allocation of cloud resources. However, the performance of applications may be poor in peak load periods, unless their cloud resources are dynamically adjusted. Moreover, although cloud resources dedicated to different applications are virtually isolated, performance fluctuations do occur because of resource sharing, and software or hardware failures (e.g. unstable virtual machines, power outages, etc.). In this paper, we propose a decentralized economic approach for dynamically adapting the cloud resources of various applications, so as to statistically meet their SLA performance and availability goals in the presence of varying loads or failures. According to our approach, the dynamic economic fitness of a Web service determines whether it is replicated or migrated to another server, or deleted. The economic fitness of a Web service depends on its individual performance constraints, its load, and the utilization of the resources where it resides. Cascading performance objectives are dynamically calculated for individual tasks in the application workflow according to the user requirements. By fully implementing our framework, we experimentally proved that our adaptive approach statistically meets the performance objectives under peak load periods or failures, as opposed to static resource settings."
]
}
|
1203.3885
|
1551766946
|
The DEPAS (Decentralized Probabilistic Auto-Scaling) algorithm assumes an overlay network of computing nodes where each node probabilistically decides to shut down, to allocate one or more other nodes, or to do nothing. DEPAS was formulated, tested, and theoretically analyzed for the simplified case of homogeneous systems. In this paper, we extend DEPAS to heterogeneous systems.
|
Another decentralized auto-scaling approach was proposed by @cite_7 . They aim to develop a Platform as a Service for hosting sites in the cloud. In their approach, each virtual machine is managed by a VM manager which is connected through a custom overlay network to other VM managers that store instances of the same sites. The utility of a site instance is computed as the ratio between the allocated CPU capacity and the CPU demand. The utility of the system is the minimum utility of all instances of all sites. A decentralized heuristic algorithm is used to maximize the utility of the system while minimizing the cost of adaptation. The resulting system is scalable with respect to the number of virtual machines and the number of sites, but it is not scalable with respect to the number of instances of a site.
|
{
"cite_N": [
"@cite_7"
],
"mid": [
"2141576965"
],
"abstract": [
"We address the problem of resource management for a large-scale cloud environment that hosts sites. Our contribution centers around outlining a distributed middleware architecture and presenting one of its key elements, a gossip protocol that meets our design goals: fairness of resource allocation with respect to hosted sites, efficient adaptation to load changes and scalability in terms of both the number of machines and sites. We formalize the resource allocation problem as that of dynamically maximizing the cloud utility under CPU and memory constraints. While we can show that an optimal solution without considering memory constraints is straightforward (but not useful), we provide an efficient heuristic solution for the complete problem instead. We evaluate the protocol through simulation and find its performance to be well-aligned with our design goals."
]
}
|
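The utility notions used by @cite_7 are simple enough to state in code. A minimal sketch, assuming site instances are given as (allocated CPU, CPU demand) pairs; the function names are ours, not the paper's:

```python
def instance_utility(allocated_cpu, cpu_demand):
    # Utility of one site instance: the ratio between the CPU capacity
    # allocated to it and its CPU demand.
    return allocated_cpu / cpu_demand

def system_utility(instances):
    # The system's utility is the minimum utility over all instances of all
    # sites, so maximizing it means improving the worst-off instance first.
    return min(instance_utility(a, d) for a, d in instances)
```

The min-based objective explains why the approach scales with the number of machines and sites but not with the number of instances of a single site: every instance's ratio contributes to the minimum.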
1203.3575
|
2952461555
|
We investigate the hardness of establishing as many stable marriages as possible (that is, marriages that last forever) in a population whose memory is placed in some arbitrary state with respect to the considered problem, and where traitors try to jeopardize the whole process by behaving in a harmful manner. On the negative side, we demonstrate that no solution that is completely insensitive to traitors can exist; on the positive side, we propose a protocol for the problem that is optimal with respect to the traitor containment radius.
|
Self-stabilization @cite_16 @cite_7 @cite_15 is a versatile technique that permits forward recovery from any kind of transient fault, while Byzantine fault-tolerance @cite_2 is traditionally used to mask the effect of a limited number of malicious faults. In the context of self-stabilization, the first algorithm for computing a maximal marriage was given by Hsu and Huang @cite_17 . @cite_8 later gave a synchronous self-stabilizing variant of Hsu and Huang's algorithm. Finally, @cite_0 gave an algorithm for computing a maximal marriage under the distributed daemon. When it comes to improving the @math -approximation induced by the maximal marriage property, @cite_14 and Blair and Manne @cite_3 presented a framework that can be used for computing a maximum marriage in a tree, while @cite_6 gave a self-stabilizing algorithm for computing a @math -approximation in anonymous rings of length not divisible by three. This result was later generalized to arbitrary topologies @cite_5 . Note that, contrary to our proposal, none of the aforementioned marriage construction algorithms can tolerate Byzantine behaviour.
|
{
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_8",
"@cite_3",
"@cite_6",
"@cite_0",
"@cite_2",
"@cite_5",
"@cite_15",
"@cite_16",
"@cite_17"
],
"mid": [
"",
"",
"2138720431",
"2105187651",
"146877070",
"2110600065",
"2120510885",
"2058465484",
"",
"2111132249",
"2088809694"
],
"abstract": [
"",
"",
"We propose two distributed algorithms to maintain, respectively, a maximal matching and a maximal independent set in a given ad hoc network; our algorithms are fault tolerant (reliable) in the sense that the algorithms can detect occasional link failures and/or new link creations in the network (due to mobility of the hosts) and can readjust the global predicates. We provide time complexity analysis of the algorithms in terms of the number of rounds needed for the algorithm to stabilize after a topology change, where a round is defined as a period of time in which each node in the system receives beacon messages from all its neighbors. In any ad hoc network, the participating nodes periodically transmit beacon messages for message transmission as well as to maintain the knowledge of the local topology at the node; as a result, the nodes get the information about their neighbor nodes synchronously (at specific time intervals). Thus, the paradigm to analyze the complexity of the self-stabilizing algorithms in the context of ad hoc networks is very different from the traditional concept of an adversary daemon used in proving the convergence and correctness of self-stabilizing distributed algorithms in general.",
"Many proposed self-stabilizing algorithms require an exponential number of moves before stabilizing on a global solution, including some rooting algorithms for tree networks [1, 2, 3]. These results are vastly improved upon in [6] with tree rooting algorithms that require only O(n^3 + n^2 · c_h) moves, where n is the number of nodes in the network and c_h is the highest initial value of a variable. In the current paper, we describe a new set of tree rooting algorithms that brings the complexity down to O(n^2) moves. This not only reduces the first term by an order of magnitude, but also reduces the second term by an unbounded factor. We further show a generic mapping that can be used to instantiate an efficient self-stabilizing tree algorithm from any traditional sequential tree algorithm that makes a single bottom-up pass through a rooted tree. The new generic mapping improves on the complexity of the technique presented in [8].",
"We present an anonymous self-stabilizing algorithm for finding a 1-maximal matching in trees, and rings of length not divisible by 3. We show that the algorithm converges in O(n) moves under an arbitrary central daemon.",
"The maximal matching problem has received considerable attention in the self-stabilizing community. Previous work has given several self-stabilizing algorithms that solve the problem for both the adversarial and the fair distributed daemon, the sequential adversarial daemon, as well as the synchronous daemon. In the following we present a single self-stabilizing algorithm for this problem that unites all of these algorithms in that it has the same time complexity as the previous best algorithms for the sequential adversarial, the distributed fair, and the synchronous daemon. In addition, the algorithm improves the previous best time complexities for the distributed adversarial daemon from O(n^2) and O(Δm) to O(m), where n is the number of processes, m is the number of edges, and Δ is the maximum degree in the graph.",
"Reliable computer systems must handle malfunctioning components that give conflicting information to different parts of the system. This situation can be expressed abstractly in terms of a group of generals of the Byzantine army camped with their troops around an enemy city. Communicating only by messenger, the generals must agree upon a common battle plan. However, one or more of them may be traitors who will try to confuse the others. The problem is to find an algorithm to ensure that the loyal generals will reach agreement. It is shown that, using only oral messages, this problem is solvable if and only if more than two-thirds of the generals are loyal; so a single traitor can confound two loyal generals. With unforgeable written messages, the problem is solvable for any number of generals and possible traitors. Applications of the solutions to reliable computer systems are then discussed.",
"The matching problem asks for a large set of disjoint edges in a graph. It is a problem that has received considerable attention in both the sequential and the self-stabilizing literature. Previous work has resulted in self-stabilizing algorithms for computing a maximal (1/2-approximation) matching in a general graph, as well as computing a 2/3-approximation on more specific graph types. In this paper, we present the first self-stabilizing algorithm for finding a 2/3-approximation to the maximum matching problem in a general graph. We show that our new algorithm, when run under a distributed adversarial daemon, stabilizes after at most O(n^2) rounds. However, it might still use an exponential number of time steps.",
"",
"The synchronization task between loosely coupled cyclic sequential processes (as can be distinguished in, for instance, operating systems) can be viewed as keeping the relation “the system is in a legitimate state” invariant. As a result, each individual process step that could possibly cause violation of that relation has to be preceded by a test deciding whether the process in question is allowed to proceed or has to be delayed. The resulting design is readily—and quite systematically—implemented if the different processes can be granted mutually exclusive access to a common store in which “the current system state” is recorded.",
"Abstract We present a self-stabilizing algorithm for finding a maximal matching in distributed networks. Due to its self-stabilizing property, the algorithm can automatically detect and recover from the faults caused by unexpected perturbations on local variables maintained on each node of the system. A variant function is provided to prove the correctness of the algorithm and to analyze its time complexity."
]
}
|
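The Hsu and Huang algorithm cited above (@cite_17) can be simulated compactly. The sketch below is a simplified, sequentially scheduled rendition of its guarded rules (marry, propose, withdraw), not the original shared-memory formulation; the node ordering, the step bound, and the function name are our choices.

```python
def hsu_huang_matching(adj, p=None, max_steps=10000):
    """Central-daemon simulation of a Hsu-Huang-style self-stabilizing
    maximal matching (illustrative simplification).

    adj: {node: set of neighbours}; p: pointer state (node -> matched
    neighbour or None), which may start in an arbitrary corrupted state
    (pointers are assumed to target neighbours or None).
    """
    if p is None:
        p = {v: None for v in adj}
    for _ in range(max_steps):
        moved = False
        for v in sorted(adj):
            if p[v] is None:
                # Rule 1 (marriage): accept a neighbour already pointing at us.
                suitor = next((u for u in sorted(adj[v]) if p[u] == v), None)
                if suitor is not None:
                    p[v] = suitor
                    moved = True
                    continue
                # Rule 2 (proposal): point at some unmatched neighbour.
                free = next((u for u in sorted(adj[v]) if p[u] is None), None)
                if free is not None:
                    p[v] = free
                    moved = True
            else:
                # Rule 3 (withdrawal): give up on a neighbour engaged elsewhere.
                j = p[v]
                if p[j] is not None and p[j] != v:
                    p[v] = None
                    moved = True
        if not moved:
            return p
    raise RuntimeError("did not stabilize within max_steps")
```

Starting from any pointer state, the withdrawal rule dismantles inconsistent edges and the marriage/proposal rules rebuild a maximal matching, which is the self-stabilization property the section refers to.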
1203.3575
|
2952461555
|
We investigate the hardness of establishing as many stable marriages as possible (that is, marriages that last forever) in a population whose memory is placed in some arbitrary state with respect to the considered problem, and where traitors try to jeopardize the whole process by behaving in a harmful manner. On the negative side, we demonstrate that no solution that is completely insensitive to traitors can exist; on the positive side, we propose a protocol for the problem that is optimal with respect to the traitor containment radius.
|
Making distributed systems tolerant to both transient and malicious faults is appealing, yet it has proved difficult @cite_4 @cite_1 , as impossibility results are expected in many cases (even with a complete communication topology and in a synchronous setting). A promising path towards multi-tolerance to both transient and Byzantine faults is Byzantine containment. For local tasks (i.e., tasks whose correctness can be checked locally, such as vertex coloring, link coloring, or dining philosophers), strict stabilization @cite_13 @cite_12 makes it possible to contain the influence of malicious behavior within a fixed radius. This notion was further generalized to global tasks (such as spanning tree construction) using the notion of topology-aware strict stabilization @cite_18 @cite_11 . Our proposal is a strictly stabilizing maximal marriage protocol with optimal containment radius.
|
{
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_1",
"@cite_13",
"@cite_12",
"@cite_11"
],
"mid": [
"1817294132",
"1970745521",
"1546022395",
"1543193821",
"1541510840",
"2953267792"
],
"abstract": [
"Self-stabilization is a versatile approach to fault-tolerance since it permits a distributed system to recover from any transient fault that arbitrarily corrupts the contents of all memories in the system. Byzantine tolerance is an attractive feature of distributed systems that permits to cope with arbitrary malicious behaviors. We consider the well known problem of constructing a maximum metric tree in this context. Combining these two properties proves difficult: we demonstrate that it is impossible to contain the impact of Byzantine nodes in a self-stabilizing context for maximum metric tree construction (strict stabilization). We propose a weaker containment scheme called topology-aware strict stabilization, and present a protocol for computing maximum metric trees that is optimal for this scheme with respect to impossibility result.",
"We initiate a study of bounded clock synchronization under a more severe fault model than that proposed by Lamport and Melliar-Smith [1985]. Realistic aspects of the problem of synchronizing clocks in the presence of faults are considered. One aspect is that clock synchronization is an on-going task, thus the assumption that some of the processors never fail is too optimistic. To cope with this reality, we suggest self-stabilizing protocols that stabilize in any (long enough) period in which less than a third of the processors are faulty. Another aspect is that the clock value of each processor is bounded. A single transient fault may cause the clock to reach the upper bound. Therefore, we suggest a bounded clock that wraps around when appropriate. We present two randomized self-stabilizing protocols for synchronizing bounded clocks in the presence of Byzantine processor failures. The first protocol assumes that processors have a common pulse, while the second protocol does not. A new type of distributed counter based on the Chinese remainder theorem is used as part of the first protocol.",
"Awareness of the need for robustness in distributed systems increases as distributed systems become integral parts of day-to-day systems. Self-stabilizing while tolerating ongoing Byzantine faults are wishful properties of a distributed system. Many distributed tasks (e.g. clock synchronization) possess efficient non-stabilizing solutions tolerating Byzantine faults or conversely non-Byzantine but self-stabilizing solutions. In contrast, designing algorithms that self-stabilize while at the same time tolerating an eventual fraction of permanent Byzantine failures present a special challenge due to the “ambition” of malicious nodes to hamper stabilization if the systems tries to recover from a corrupted state. This difficulty might be indicated by the remarkably few algorithms that are resilient to both fault models. We present the first scheme that takes a Byzantine distributed algorithm and produces its self-stabilizing Byzantine counterpart, while having a relatively low overhead of O(f′) communication rounds, where f′ is the number of actual faults. Our protocol is based on a tight Byzantine self-stabilizing pulse synchronization procedure. The synchronized pulses are used as events for initializing Byzantine agreement on every node’s local state. The set of local states is used for global predicate detection. Should the global state represent an illegal system state then the target algorithm is reset.",
"An ideal approach to deal with faults in large-scale distributed systems is to contain the effects of faults as locally as possible and, additionally, to ensure some type of tolerance within each fault-affected locality. Existing results using this approach accommodate only limited faults (such as crashes) or assume that fault occurrence is bounded in space and or time. In this paper, we define and explore possibility impossibility of local tolerance with respect to arbitrary faults (such as Byzantine faults) whose occurrence may be unbounded in space and in time. Our positive results include programs for graph coloring and dining philosophers, with proofs that the size of their tolerance locality is optimal. The type of tolerance achieved within fault-affected localities is self-stabilization. That is, starting from an arbitrary state of the distributed system, each non-faulty process eventually reaches a state from where it behaves correctly as long as the only faults that occur henceforth (regardless of their number) are outside the locality of this process.",
"Self-stabilizing protocols can tolerate any type and any number of transient faults. However, in general, self-stabilizing protocols provide no guarantee about their behavior against permanent faults. This paper considers self-stabilizing link-coloring resilient to (permanent) Byzantine faults in arbitrary anonymous networks. First, we show that stabilizing link-coloring is impossible in anonymous cycles when there is no constraint on the spatial scheduling of processes. Then, given the assumption that no correct neighbors execute their actions simultaneously, we present a self-stabilizing link-coloring protocol that is also resilient to Byzantine faults in arbitrary anonymous networks. The protocol uses 2d-1 colors where d is the maximum degree in the network. This protocol guarantees that any link (u, v) between non faulty processes u and v is assigned a color within 2d+2 rounds and its color remains unchanged thereafter. Our protocol is Byzantine insensitive in the sense that the subsystem of correct processes remains operating properly in spite of unbounded Byzantine faults.",
"Self-stabilization is a versatile approach to fault-tolerance since it permits a distributed system to recover from any transient fault that arbitrarily corrupts the contents of all memories in the system. Byzantine tolerance is an attractive feature of distributed systems that permits to cope with arbitrary malicious behaviors. We consider the well known problem of constructing a breadth-first spanning tree in this context. Combining these two properties proves difficult: we demonstrate that it is impossible to contain the impact of Byzantine nodes in a strictly or strongly stabilizing manner. We then adopt the weaker scheme of topology-aware strict stabilization and we present a similar weakening of strong stabilization. We prove that the classical @math protocol has optimal Byzantine containment properties with respect to these criteria."
]
}
|
1203.2970
|
2137030586
|
In 802.11 WLANs, adapting the contention parameters to network conditions results in substantial performance improvements. Even though the ability to change these parameters has been available in standard devices for years, so far no adaptive mechanism using this functionality has been validated in a realistic deployment. In this paper we report our experiences with implementing and evaluating two adaptive algorithms based on control theory, one centralized and one distributed, in a large-scale testbed consisting of 18 commercial off-the-shelf devices. We conduct extensive measurements, considering different network conditions in terms of number of active nodes, link qualities, and data traffic. We show that both algorithms significantly outperform the standard configuration in terms of total throughput. We also identify the limitations inherent in distributed schemes, and demonstrate that the centralized approach substantially improves performance under a large variety of scenarios, which confirms its suitability for real deployments.
|
Centralized approaches. A significant number of approaches in the literature @cite_12 @cite_17 @cite_29 @cite_7 use a single node to compute the set of MAC parameters to be used in the WLAN. With the exception of our CAC algorithm @cite_7 , these approaches are either based on heuristics, thereby lacking analytical support for providing performance guarantees @cite_12 @cite_17 , or they do not consider the dynamics of the WLAN under realistic scenarios @cite_29 .
|
{
"cite_N": [
"@cite_29",
"@cite_7",
"@cite_12",
"@cite_17"
],
"mid": [
"2012321831",
"2111854969",
"2076624514",
"2133967495"
],
"abstract": [
"We investigate the optimal selection of minimum contention window values to achieve proportional fairness in a multirate IEEE 802.11e test-bed. Unlike other approaches, the proposed model accounts for the contention-based nature of 802.11's MAC layer operation and considers the case where stations can have different weights corresponding to different throughput classes. Our test-bed evaluation considers both the long-term throughput achieved by wireless stations and the short-term fairness. When all stations have the same transmission rate, optimality is achieved when a station's throughput is proportional to its weight factor, and the optimal minimum contention windows also maximize the aggregate throughput. When stations have different transmission rates, the optimal minimum contention window for high rate stations is smaller than for low rate stations. Furthermore, we compare proportional fairness with time-based fairness, which can be achieved by adjusting packet sizes so that low and high rate stations have equal successful transmission times, or by adjusting the transmission opportunity (TXOP) limit so that high rate stations transmit multiple back-to-back packets and thus occupy the channel for the same time as low rate stations that transmit a single packet. The test-bed experiments show that when stations have different transmission rates and the same weight, proportional fairness achieves higher performance than the time-based fairness approaches, in terms of both aggregate utility and throughput.",
"The MAC layer of the 802.11 standard, based on the CSMA CA mechanism, specifies a set of parameters to control the aggressiveness of stations when trying to access the channel. However, these parameters are statically set independently of the conditions of the WLAN (e.g. the number of contending stations), leading to poor performance for most scenarios. To overcome this limitation previous work proposes to adapt the value of one of those parameters, namely the CW, based on an estimation of the conditions of the WLAN. However, these approaches suffer from two major drawbacks: i) they require extending the capabilities of standard devices or ii) are based on heuristics. In this paper we propose a control theoretic approach to adapt the CW to the conditions of the WLAN, based on an analytical model of its operation, that is fully compliant with the 802.11e standard. We use a Proportional Integrator controller in order to drive the WLAN to its optimal point of operation and perform a theoretic analysis to determine its configuration. We show by means of an exhaustive performance evaluation that our algorithm maximizes the total throughput of the WLAN and substantially outperforms previous standard-compliant proposals.",
"This paper introduces a mechanism which dynamically tunes the parameters of the 802.11e contention-based access method. The proposed mechanism aims at providing QoS as well as ameliorating the problem of delay asymmetry.",
"A number of works have tried to adjust the contention window in order to provide differentiated quality of service in IEEE 802.11-based wireless networks. By giving different service classes different CWs, the distribution of backoff intervals (chosen randomly, on the interval [0, CW]) will reflect the desired service classes. However, these protocols cannot deliver firm service guarantees while maintaining high network utilization, particularly under congested network conditions. In this article we propose a new MAC protocol featuring a sliding CW (SCW) for each network flow. The SCW dynamically adjusts to changing network conditions, but remains within a per-class predefined range in order to maintain a separation between different service classes. Each flow's SCW reacts based on the degree to which class-defined QoS metrics are satisfied. Simulation results show that compared to the enhanced distributed coordination function (EDCF) scheme of 802.11e, SCW consistently excels, in terms of network utilization, strict service separation, and service-level fairness."
]
}
|
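The control-theoretic idea behind the centralized approach of @cite_7, driving the WLAN to its optimal operating point with a Proportional Integrator controller, can be sketched as follows. The gains, the observed signal (a measured collision probability), and the clamping range are illustrative assumptions, not the paper's tuned configuration.

```python
def make_cw_controller(kp, ki, target_pcoll, cw_min=15, cw_max=1023):
    """PI-style contention-window adapter (illustrative sketch).

    Each call to the returned step function takes the collision probability
    measured over the last interval and returns the CW to configure,
    driving the error (measured - target) to zero with proportional plus
    integral action. cw_min/cw_max are typical 802.11 bounds.
    """
    state = {"integral": 0.0, "cw": float(cw_min)}

    def step(measured_pcoll):
        error = measured_pcoll - target_pcoll
        state["integral"] += error
        # PI update: react to the current error and to its accumulated sum,
        # then clamp to the standard's allowed CW range.
        state["cw"] += kp * error + ki * state["integral"]
        state["cw"] = min(max(state["cw"], cw_min), cw_max)
        return int(round(state["cw"]))

    return step
```

The integral term is what lets the controller settle on the correct CW for a given number of contending stations without a persistent error, which is the analytical guarantee that heuristic schemes lack.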
1203.2970
|
2137030586
|
In 802.11 WLANs, adapting the contention parameters to network conditions results in substantial performance improvements. Even though the ability to change these parameters has been available in standard devices for years, so far no adaptive mechanism using this functionality has been validated in a realistic deployment. In this paper we report our experiences with implementing and evaluating two adaptive algorithms based on control theory, one centralized and one distributed, in a large-scale testbed consisting of 18 commercial off-the-shelf devices. We conduct extensive measurements, considering different network conditions in terms of number of active nodes, link qualities, and data traffic. We show that both algorithms significantly outperform the standard configuration in terms of total throughput. We also identify the limitations inherent in distributed schemes, and demonstrate that the centralized approach substantially improves performance under a large variety of scenarios, which confirms its suitability for real deployments.
|
Distributed approaches. Several works @cite_4 @cite_0 @cite_19 @cite_22 @cite_27 have proposed mechanisms that independently adjust the backoff operation of each station in the WLAN. The main disadvantage of these approaches is that they change the rules of the IEEE 802.11 standard and therefore require significant hardware or firmware modifications.
|
{
"cite_N": [
"@cite_4",
"@cite_22",
"@cite_0",
"@cite_19",
"@cite_27"
],
"mid": [
"2040476490",
"2145859593",
"2107830311",
"2104935654",
"2021521923"
],
"abstract": [
"This paper presents a theoretical study on distributed contention window control algorithms for achieving arbitrary bandwidth allocation policies and efficient channel utilization. By modeling different bandwidth allocation policies as an optimal contention window assignment problem, the authors design a general and fully distributed contention window control algorithm, called general contention window adaptation (GCA), and prove that it converges to the solution of the contention window assignment problem. By examining the stability of GCA, we identify the optimal stable point that maximizes channel utilization and provide solutions to control the stable point near the optimal point. Due to the generality of GCA, this work provides a theoretical foundation to analyze existing and design new contention window control algorithms.",
"IEEE 802.11 is the standard for wireless local area networks (WLANs) promoted by the Institute of Electrical and Electronics Engineers. Wireless technologies in the LAN environment are becoming increasingly important and the IEEE 802.11 is the most mature technology to date. Previous works have pointed out that the standard protocol can be very inefficient and that an appropriate tuning of its congestion control mechanism (i.e., the backoff algorithm) can drive the IEEE 802.11 protocol close to its optimal behavior. To perform this tuning, a station must have exact knowledge of the network contention level; unfortunately, in a real case, a station cannot have exact knowledge of the network contention level (i.e., number of active stations and length of the message transmitted on the channel), but it, at most, can estimate it. We present and evaluate a distributed mechanism for contention control in IEEE 802.11 wireless LANs. Our mechanism, named asymptotically optimal backoff (AOB), dynamically adapts the backoff window size to the current network contention level and guarantees that an IEEE 802.11 WLAN asymptotically achieves its optimal channel utilization. The AOB mechanism measures the network contention level by using two simple estimates: the slot utilization and the average size of transmitted frames. These estimates are simple and can be obtained by exploiting information that is already available in the standard protocol. AOB can be used to extend the standard 802.11 access mechanism without requiring any additional hardware. The performance of the IEEE 802.11 protocol, with and without the AOB mechanism, is investigated through simulation. Simulation results indicate that our mechanism is very effective, robust, and has traffic differentiation potentialities.",
"The IEEE 802.11 medium access control (MAC) protocol provides a contention-based distributed channel access mechanism for mobile stations to share the wireless medium, which may introduce a lot of collisions in case of overloaded active stations. Slow contention window (CW) decrease scheme is a simple and efficient solution for this problem. In this paper, we use an analytical model to compare the slow CW decrease scheme to the IEEE 802.11 MAC protocol. Several parameters are investigated such as the number of stations, the initial CW size, the decrease factor value, the maximum backoff stage and the coexistence with the RequestToSend and ClearToSend (RTS CTS) mechanism. The results show that the slow CW decrease scheme can efficiently improve the throughput of IEEE 802.11, and that the throughput gain is higher when the decrease factor is larger. Moreover, the initial CW size and maximum backoff stage also affect the performance of slow CW decrease scheme.",
"We consider wireless LANs such as IEEE 802.11 operating in the unlicensed radio spectrum. While their nominal bit rates have increased considerably, the MAC layer remains practically unchanged despite much research effort spent on improving its performance. We observe that most proposals for tuning the access method focus on a single aspect and disregard others. Our objective is to define an access method optimized for throughput and fairness, able to dynamically adapt to physical channel conditions, to operate near optimum for a wide range of error rates, and to provide equal time shares when hosts use different bit rates.We propose a novel access method derived from 802.11 DCF [2] (Distributed Coordination Function) in which all hosts use similar values of the contention window CW to benefit from good short-term access fairness. We call our method Idle Sense, because each host observes the mean number of idle slots between transmission attempts to dynamically control its contention window. Unlike other proposals, Idle Sense enables each host to estimate its frame error rate, which can be used for switching to the right bit rate. We present simulations showing how the method leads to high throughput, low collision overhead, and low delay. The method also features fast reactivity and time-fair channel allocation.",
"In WLANs, the medium access control (MAC) protocol is the main element that determines the efficiency of sharing the limited communication bandwidth of the wireless channel. The fraction of channel bandwidth used by successfully transmitted messages gives a good indication of the protocol efficiency, and its maximum value is referred to as protocol capacity. In a previous paper we have derived the theoretical limit of the IEEE 802.11 MAC protocol capacity. In addition, we showed that if a station has an exact knowledge of the network status, it is possible to tune its backoff algorithm to achieve a protocol capacity very close to its theoretical bound. Unfortunately, in a real case, a station does not have an exact knowledge of the network and load configurations (i.e., number of active stations and length of the message transmitted on the channel) but it can only estimate it. In this work we analytically study the performance of the IEEE 802.11 protocol with a dynamically tuned backoff based on the estimation of the network status. Results obtained indicate that under stationary traffic and network configurations (i.e., constant average message length and fixed number of active stations), the capacity of the enhanced protocol approaches the theoretical limits in all the configurations analyzed. In addition, by exploiting the analytical model, we investigate the protocol performance in transient conditions (i.e., when the number of active stations sharply changes)."
]
}
|
1203.2970
|
2137030586
|
In 802.11 WLANs, adapting the contention parameters to network conditions results in substantial performance improvements. Even though the ability to change these parameters has been available in standard devices for years, so far no adaptive mechanism using this functionality has been validated in a realistic deployment. In this paper we report our experiences with implementing and evaluating two adaptive algorithms based on control theory, one centralized and one distributed, in a large-scale testbed consisting of 18 commercial off-the-shelf devices. We conduct extensive measurements, considering different network conditions in terms of number of active nodes, link qualities, and data traffic. We show that both algorithms significantly outperform the standard configuration in terms of total throughput. We also identify the limitations inherent in distributed schemes, and demonstrate that the centralized approach substantially improves performance under a large variety of scenarios, which confirms its suitability for real deployments.
|
Implementation experiences. Very few schemes to optimize WLAN performance have been developed in practice @cite_29 @cite_6 @cite_28 . While the idea behind Idle Sense @cite_19 is fairly simple, its implementation @cite_28 entails a significant level of complexity, introducing tight timing constraints that require programming at the firmware level. Moreover, the microcode that achieves the desired functionality is proprietary and thus subject to portability constraints. Similar limitations hold for the approach of @cite_6 , which introduces changes to the MAC protocol that require redesigning the whole NIC implementation. This involves complex DSP and FPGA programming and demands non-negligible computational resources. Finally, the work of @cite_29 does not propose any adaptive algorithm to adjust the @math ; it only evaluates the performance of static configurations. Additionally, all of these works rely on testbeds substantially smaller than ours.
|
{
"cite_N": [
"@cite_28",
"@cite_19",
"@cite_29",
"@cite_6"
],
"mid": [
"2165682314",
"2104935654",
"2012321831",
"1988536525"
],
"abstract": [
"An overwhelming part of research work on wireless networks validates new concepts or protocols with simulation or analytical modeling. Unlike this approach, we present our experience with implementing the Idle Sense access method on programmable off-the-shelf hardware---the Intel IPW2915 abg chipset. We also present measurements and performance comparisons of Idle Sense with respect to the Intel implementation of the 802.11 DCF (Distributed Coordination Function) standard. Implementing a modified MAC protocol on constrained devices presents several challenges: difficulty of programming without support for multiplication, division, and floating point arithmetic, absence of support for debugging and high precision measurement. To achieve our objectives, we had to overcome the limitations of the hardware platform and solve several issues. In particular, we have implemented the adaptation algorithm with approximate values of control parameters without the division operation and taken advantage of some fields in data frames to trace the execution and test the implemented access method. Finally, we have measured its performance to confirm good properties of Idle Sense: it obtains slightly better throughput, much better fairness, and significantly lower collision rate compared to the Intel implementation of the 802.11 DCF standard.",
"We consider wireless LANs such as IEEE 802.11 operating in the unlicensed radio spectrum. While their nominal bit rates have increased considerably, the MAC layer remains practically unchanged despite much research effort spent on improving its performance. We observe that most proposals for tuning the access method focus on a single aspect and disregard others. Our objective is to define an access method optimized for throughput and fairness, able to dynamically adapt to physical channel conditions, to operate near optimum for a wide range of error rates, and to provide equal time shares when hosts use different bit rates.We propose a novel access method derived from 802.11 DCF [2] (Distributed Coordination Function) in which all hosts use similar values of the contention window CW to benefit from good short-term access fairness. We call our method Idle Sense, because each host observes the mean number of idle slots between transmission attempts to dynamically control its contention window. Unlike other proposals, Idle Sense enables each host to estimate its frame error rate, which can be used for switching to the right bit rate. We present simulations showing how the method leads to high throughput, low collision overhead, and low delay. The method also features fast reactivity and time-fair channel allocation.",
"We investigate the optimal selection of minimum contention window values to achieve proportional fairness in a multirate IEEE 802.11e test-bed. Unlike other approaches, the proposed model accounts for the contention-based nature of 802.11's MAC layer operation and considers the case where stations can have different weights corresponding to different throughput classes. Our test-bed evaluation considers both the long-term throughput achieved by wireless stations and the short-term fairness. When all stations have the same transmission rate, optimality is achieved when a station's throughput is proportional to its weight factor, and the optimal minimum contention windows also maximize the aggregate throughput. When stations have different transmission rates, the optimal minimum contention window for high rate stations is smaller than for low rate stations. Furthermore, we compare proportional fairness with time-based fairness, which can be achieved by adjusting packet sizes so that low and high rate stations have equal successful transmission times, or by adjusting the transmission opportunity (TXOP)limit so that high rate stations transmit multiple back-to-back packets and thus occupy the channel for the same time as low rate stations that transmit a single packet. The test-bed experiments show that when stations have different transmission rates and the same weight, proportional fairness achieves higher performance than the time-based fairness approaches, in terms of both aggregate utility and throughput.",
"Due to its extreme simplicity and flexibility, the IEEE 802.11 standard is the dominant technology to implement both infrastructure-based WLANs and single-hop ad hoc networks. In spite of its popularity, there is a vast literature demonstrating the shortcomings of using the 802.11 technology in such environments, such as dramatic degradation of network capacity as contention increases and vulnerability to external interferences. Therefore, the design of enhancements and optimizations for the original 802.11 MAC protocol has been a very active research area in the last years. However, all these modifications to the 802.11 MAC protocol were validated only through simulations and or analytical investigations. In this paper, we present a very unique work as we have designed a flexible hardware software platform, fully compatible with current implementations of the IEEE 802.11 technology, which we have used to concretely implement and test an enhanced 802.11 backoff algorithm. Our experimental results clearly show that the enhanced mechanism outperforms the standard 802.11 MAC protocol in real scenarios."
]
}
|
1203.2742
|
2030828780
|
Algorithms are presented for evaluating gradients and Hessians of logarithmic barrier functions for two types of convex cones: the cone of positive semidefinite matrices with a given sparsity pattern and its dual cone, the cone of sparse matrices with the same pattern that have a positive semidefinite completion. Efficient large-scale algorithms for evaluating these barriers and their derivatives are important in interior-point methods for nonsymmetric conic formulations of sparse semidefinite programs. The algorithms are based on the multifrontal method for sparse Cholesky factorization.
|
We refer to the algorithms in this paper as and because of their resemblance to multifrontal and supernodal multifrontal algorithms for Cholesky factorization @cite_19 @cite_35 . The multifrontal Cholesky factorization is reviewed in and a supernodal variant, formulated in terms of clique trees, is described in .
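The multifrontal and supernodal multifrontal methods organize the updates of a sparse Cholesky factorization into dense frontal matrices along an assembly tree, but the factor they produce is the same L with A = L Lᵀ. A minimal dense left-looking sketch of that underlying factorization (the function name and dense layout are ours, purely for illustration):

```python
import numpy as np

def cholesky_left_looking(A):
    """Dense left-looking Cholesky: returns lower-triangular L with
    A = L @ L.T for symmetric positive definite A. The multifrontal
    method computes the same factor, but gathers these column updates
    into small dense frontal matrices driven by the assembly tree."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    L = np.zeros_like(A)
    for j in range(n):
        # subtract the contributions of previously computed columns
        s = A[j:, j] - L[j:, :j] @ L[j, :j]
        L[j, j] = np.sqrt(s[0])
        L[j + 1:, j] = s[1:] / L[j, j]
    return L
```

In the sparse setting, exploiting the sparsity pattern means most of these column updates vanish; the frontal-matrix organization is precisely a way of grouping the nonzero ones.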
|
{
"cite_N": [
"@cite_19",
"@cite_35"
],
"mid": [
"2063675347",
"2031990962"
],
"abstract": [
"On etend la methode frontale pour resoudre des systemes lineaires d'equations en permettant a plus d'un front d'apparaitre en meme temps",
"This paper presents an overview of the multifrontal method for the solution of large sparse symmetric positive definite linear systems. The method is formulated in terms of frontal matrices, update matrices, and an assembly tree. Formal definitions of these notions are given based on the sparse matrix structure. Various advances to the basic method are surveyed. They include the role of matrix reorderings, the use of supernodes, and other implementatjon techniques. The use of the method in different computational environments is also described."
]
}
|
1203.2564
|
2950777931
|
A perturbative approach is used to derive approximations of arbitrary order to estimate high percentiles of sums of positive independent random variables that exhibit heavy tails. Closed-form expressions for the successive approximations are obtained both when the number of terms in the sum is deterministic and when it is random. The zeroth order approximation is the percentile of the maximum term in the sum. Higher orders in the perturbative series involve the right-truncated moments of the individual random variables that appear in the sum. These censored moments are always finite. As a result, and in contrast to previous approximations proposed in the literature, the perturbative series has the same form regardless of whether these random variables have a finite mean or not. The accuracy of the approximations is illustrated for a variety of distributions and a wide range of parameters. The quality of the estimate improves as more terms are included in the perturbative series, especially for higher percentiles and heavier tails.
|
In this section we review closed-form approximations for the percentile of sums of positive iid rv's that have been proposed in previous investigations. Even though it is possible to derive approximations for particular heavy-tailed distributions, such as @cite_11 for the Pareto distribution, in this work we consider comparisons only with approximations for general subexponential distributions @cite_33 @cite_18 . The single-loss approximation can be derived using first-order asymptotics of the tail of sums of subexponential random variables @cite_24 @cite_30 @cite_6 . Higher-order asymptotic expansions of the tail of the compound distribution @cite_19 @cite_2 @cite_13 @cite_31 @cite_40 @cite_35 can be used to obtain corrections to the single-loss approximation @cite_27 @cite_25 @cite_32 . These higher-order corrections are similar to the successive terms in the perturbative expansion analyzed in this article. However, there are some important differences. In particular, our terms are expressed as functions of right-censored moments, which are always finite. In the section on experimental evaluation (section ) we will further show that the perturbative series provides more accurate approximations than the expressions introduced in this section.
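As a hedged illustration (not code from the paper), the single-loss approximation q_p ≈ F⁻¹(1 − (1 − p)/n) can be written down directly whenever the quantile function is available in closed form, e.g. for the Pareto distribution. Function names here are our own:

```python
def pareto_ppf(u, alpha, xm=1.0):
    """Quantile function of a Pareto(alpha, xm) distribution."""
    return xm * (1.0 - u) ** (-1.0 / alpha)

def single_loss_approx(p, n, ppf):
    """Single-loss approximation for the p-percentile of the sum of n
    iid subexponential variables: the tail of the sum is dominated by
    its largest term, so q_p ~ F^{-1}(1 - (1 - p)/n)."""
    return ppf(1.0 - (1.0 - p) / n)

# Example: 99.9th percentile of a sum of 10 Pareto(alpha=1.5) terms.
q = single_loss_approx(0.999, 10, lambda u: pareto_ppf(u, alpha=1.5))
```

Note that with α = 1.5 the Pareto distribution has a finite mean but infinite variance; the approximation only uses the tail, so it applies even when α ≤ 1 and the mean diverges.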
|
{
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_18",
"@cite_33",
"@cite_32",
"@cite_6",
"@cite_24",
"@cite_19",
"@cite_40",
"@cite_27",
"@cite_2",
"@cite_31",
"@cite_13",
"@cite_25",
"@cite_11"
],
"mid": [
"1976485684",
"",
"1514890731",
"",
"2048587146",
"2748998828",
"2117319970",
"2036263115",
"2064142180",
"",
"",
"2131168462",
"2026924428",
"",
"2134254925"
],
"abstract": [
"Abstract The present paper investigates, for the general Andersen model, the asymptotic behaviour of the probability of ruin function when the initial risk reserve tends to infinity. Whereas the exponential (Cramer) case is well understood, in the past, less attention has been paid to a systematic study of a model taking big claim sizes into account. We give a thorough treatment of the latter and also review previously known but mostly scattered results to show how they all follow from essentially one mathematical model.",
"",
"Preface.- Introduction.- Heavy- and long-tailed distributions.- Subexponential distributions.- Densities and local probabilities.- Maximum of random walks.- References.- Index",
"",
"Let X i (i=1,2, …) be a sequence of subexponential positive independent and identically distributed random variables. In this paper, we offer two alternative approaches to obtain higher-order expansions of the tail of and subsequently for ruin probabilities in renewal risk models with claim sizes X i . In particular, these emphasize the importance of the term for the accuracy of the resulting asymptotic expansion of . Furthermore, we present a more rigorous approach to the often suggested technique of using approximations with shifted arguments. The cases of a Pareto type, Weibull and Lognormal distribution for X i are discussed in more detail and numerical investigations of the increase in accuracy by including higher-order terms in the approximation of ruin probabilities for finite realistic ranges of s are given.",
"",
"Let @math be independent random positive variables and let @math , @math Let us denote [ P _1 + + _n < t = G_n (t). ]Theorem. [ t 1 - G_n (t) 1 - G(t) = n, n = 1,2,3, , ]if and only if [ t 1 - G_2 (t) 1 - G(t) = 2. ] This theorem is useful in some investigations of age-dependent branching processes.",
"Let G = [Sigma][infinity]n=0pnF*n denote the probability measure subordinate to F with subordinator Pn . We investigate the asymptotic behaviour of (1 - G(x))-([Sigma] npn)(1 - F(x)) as x --> [infinity] if 1 - F is regularly varying with index [varrho], 0",
"We derive an asymptotic expansion for the distribution of a compound sum of independent random variables, all having the same rapidly varying subexponential distribution. The examples of a Poisson and geometric number of summands serve as an illustration of the main result. Complete calculations are done for a Weibull distribution, with which we derive, as examples and without any difficulties, seven-term expansions. In this paper we construct asymptotic expansions for the tail area of a compound sum, when the distribution of the summands belongs to a class of rapidly varying subexponential distributions. To be more precise, let X?, i > 1, be a sequence of independent random variables, all having the same distribution, F. For any positive integer n the distribution of the partial sums Sn = X + + Xn is the n-fold convolution F*n. We set So = 0 and therefore F*? is defined as the distribution of the point mass at the origin. Let N be a nonnegative integer-valued random variable, independent of the X s. We consider the distribution G of the compound sum Sn, that is E F*N, and we seek an asymptotic expansion for its tail area G = 1 ? G. First-order asymptotic results for G have been obtained by (1979), Cline (1987), and Embrechts (1985). A second-order formula may be found in Griibel (1987) and Omey and Willekens (1987). Compound sums or subordinated distributions arise as distributions of interest in several stochastic models. In insurance risk theory, it models the total claim amount. For a discussion of issues related to random sums and insurance risk, we refer the reader to Embrechts et",
"",
"",
"In this paper we derive precise tail-area approximations for the sum of an arbitrary finite number of independent heavy-tailed random variables. In order to achieve second-order asymptotics, a mild regularity condition is imposed on the class of distribution functions with regularly varying tails. @PARASPLIT Higher-order asymptotics are also obtained when considering a semiparametric subclass of distribution functions with regularly varying tails. These semiparametric subclasses are shown to be closed under convolutions and a convolution algebra is constructed to evaluate the parameters of a convolution from the parameters of the constituent distributions in the convolution. A Maple code is presented which does this task.",
"where (an)n ENo is some sequence of nonnegative numbers, (Sn),nENo is the sequence of partial sums, S0 = 0, Sn = XflXk, of another sequence (Xk)kEN of i.i.d. random variables, and A c R is a fixed Borel set such as [0,1] or [0, oo). Examples of such convolution series are subordinated distributions (f=0Oan = 1) which arise as distributions of random sums, and harmonic and ordinary renewal measures (a0 = 0, an = 1 n for all n C N in the first, an = 1 for all n C NO in the second case). These examples are in turn essential for the analysis of the large time behaviour of diverse applied models such as branching and queueing processes, they are also of interest in connection with representation theorems such as the Levy representation of infinitely divisible distributions. A traditional approach to such problems is via regular variation: If the underlying random variables are nonnegative we can use Laplace transforms and the related Abelian and Tauberian theorems [see, e.g., Stam (1973) in the context of subordination and Feller (1971, XIV.3) in connection with renewal theory; Embrechts, Maejima, and Omey (1984) is a recent treatment of generalized renewal measures along these lines]. The approach of the present paper is based on the Wiener-Levy-Gel'fand theorem and has occasionally been called the Banach algebra method. In Gruibel (1983) we gave a new variant of this method for the special case of lattice distributions, showing that by using the appropriate Banach algebras of sequences, arbitrarily fine expansions are possible under certain assumptions on the higher-order differences of (P(X1 = n))fnEN. Here we give a corresponding treatment of nonlattice distributions. We restrict ourselves to an analogue of first-order differences and obtain a number of theorems which perhaps are described best as next-term results. To explain this let us consider a special case in more detail.",
"",
"Abstract : The Memorandum is an extension of certain analytic results obtained in AD-646 366, A Comparison of Average Likelihood and Maximum Likelihood Ratio Tests for Detecting Radar Targets of Unknown Doppler Frequency, L. E. Brennan, I. S. Reed, and W. Sollfrey, December 1966. It is shown that some asymptotic results derived in AD-646 366 are valid over a much broader range than originally indicated, providing a useful analytic tool for investigating more complicated radar detection problems involving signals of unknown frequency."
]
}
|
1203.2564
|
2950777931
|
A perturbative approach is used to derive approximations of arbitrary order to estimate high percentiles of sums of positive independent random variables that exhibit heavy tails. Closed-form expressions for the successive approximations are obtained both when the number of terms in the sum is deterministic and when it is random. The zeroth order approximation is the percentile of the maximum term in the sum. Higher orders in the perturbative series involve the right-truncated moments of the individual random variables that appear in the sum. These censored moments are always finite. As a result, and in contrast to previous approximations proposed in the literature, the perturbative series has the same form regardless of whether these random variables have a finite mean or not. The accuracy of the approximations is illustrated for a variety of distributions and a wide range of parameters. The quality of the estimate improves as more terms are included in the perturbative series, especially for higher percentiles and heavier tails.
|
Using heuristic arguments, a correction to the single-loss approximation was proposed in @cite_38 for distributions with finite mean. In the limit @math , the value @math is large, so that @math , and the approximation given by ) becomes similar to ).
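A sketch of this mean correction for a deterministic number of terms n and a Pareto severity with α > 1, so that the mean is finite. The rendering q_p ≈ F⁻¹(1 − (1 − p)/n) + (n − 1)·E[X] and all names below are our illustrative assumptions, not code from @cite_38 :

```python
def pareto_ppf(u, alpha, xm=1.0):
    """Quantile function of a Pareto(alpha, xm) distribution."""
    return xm * (1.0 - u) ** (-1.0 / alpha)

def pareto_mean(alpha, xm=1.0):
    """Mean of a Pareto(alpha, xm) distribution; finite only for alpha > 1."""
    assert alpha > 1.0, "mean is finite only for alpha > 1"
    return alpha * xm / (alpha - 1.0)

def corrected_single_loss(p, n, alpha, xm=1.0):
    """Mean-corrected single-loss approximation (finite-mean case):
    q_p ~ F^{-1}(1 - (1 - p)/n) + (n - 1) * E[X]."""
    return (pareto_ppf(1.0 - (1.0 - p) / n, alpha, xm)
            + (n - 1) * pareto_mean(alpha, xm))
```

The added term accounts for the bulk contribution of the n − 1 non-maximal summands, which the plain single-loss estimate ignores.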
|
{
"cite_N": [
"@cite_38"
],
"mid": [
"2746184391"
],
"abstract": [
"Making the assumption that the distribution of operational-loss severity has finite mean, Klaus Bocker and Jacob Sprittulla suggest a refined version of the analytical operational VAR theorem derived in Bocker and Kluppelberg (2005), which significantly reduces the approximation error to operational VAR."
]
}
|
1203.2564
|
2950777931
|
A perturbative approach is used to derive approximations of arbitrary order to estimate high percentiles of sums of positive independent random variables that exhibit heavy tails. Closed-form expressions for the successive approximations are obtained both when the number of terms in the sum is deterministic and when it is random. The zeroth order approximation is the percentile of the maximum term in the sum. Higher orders in the perturbative series involve the right-truncated moments of the individual random variables that appear in the sum. These censored moments are always finite. As a result, and in contrast to previous approximations proposed in the literature, the perturbative series has the same form regardless of whether these random variables have a finite mean or not. The accuracy of the approximations is illustrated for a variety of distributions and a wide range of parameters. The quality of the estimate improves as more terms are included in the perturbative series, especially for higher percentiles and heavier tails.
|
Besides the heuristic derivation given in @cite_38 and the perturbative expansion proposed in this work, higher order corrections to the single-loss approximation can be derived in at least three different ways: Using the second order asymptotic approximations introduced in @cite_19 @cite_2 @cite_27 @cite_25 , from the asymptotic expansion analyzed in @cite_31 @cite_40 @cite_35 or from asymptotic approximations based on evaluations of @math at different arguments @cite_32 .
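The zeroth-order term of the perturbative series, the percentile of the maximum term, has a simple closed form for iid summands: P(max ≤ t) = F(t)ⁿ, so q_p = F⁻¹(p^{1/n}). A minimal sketch with illustrative names:

```python
def pareto_ppf(u, alpha=1.5, xm=1.0):
    """Quantile function of a Pareto(alpha, xm) distribution."""
    return xm * (1.0 - u) ** (-1.0 / alpha)

def max_percentile(p, n, ppf):
    """Zeroth-order perturbative term: the p-percentile of the maximum
    of n iid terms. Since P(max <= t) = F(t)**n, it equals
    F^{-1}(p**(1/n))."""
    return ppf(p ** (1.0 / n))
```

For high percentiles, 1 − p^{1/n} ≈ (1 − p)/n, so this zeroth-order term nearly coincides with the single-loss approximation; the higher-order corrections discussed above refine it.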
|
{
"cite_N": [
"@cite_38",
"@cite_35",
"@cite_32",
"@cite_19",
"@cite_27",
"@cite_40",
"@cite_2",
"@cite_31",
"@cite_25"
],
"mid": [
"2746184391",
"",
"2048587146",
"2036263115",
"",
"2064142180",
"",
"2131168462",
""
],
"abstract": [
"Making the assumption that the distribution of operational-loss severity has finite mean, Klaus Bocker and Jacob Sprittulla suggest a refined version of the analytical operational VAR theorem derived in Bocker and Kluppelberg (2005), which significantly reduces the approximation error to operational VAR.",
"",
"Let X i (i=1,2, …) be a sequence of subexponential positive independent and identically distributed random variables. In this paper, we offer two alternative approaches to obtain higher-order expansions of the tail of and subsequently for ruin probabilities in renewal risk models with claim sizes X i . In particular, these emphasize the importance of the term for the accuracy of the resulting asymptotic expansion of . Furthermore, we present a more rigorous approach to the often suggested technique of using approximations with shifted arguments. The cases of a Pareto type, Weibull and Lognormal distribution for X i are discussed in more detail and numerical investigations of the increase in accuracy by including higher-order terms in the approximation of ruin probabilities for finite realistic ranges of s are given.",
"Let G = [Sigma][infinity]n=0pnF*n denote the probability measure subordinate to F with subordinator Pn . We investigate the asymptotic behaviour of (1 - G(x))-([Sigma] npn)(1 - F(x)) as x --> [infinity] if 1 - F is regularly varying with index [varrho], 0",
"",
"We derive an asymptotic expansion for the distribution of a compound sum of independent random variables, all having the same rapidly varying subexponential distribution. The examples of a Poisson and geometric number of summands serve as an illustration of the main result. Complete calculations are done for a Weibull distribution, with which we derive, as examples and without any difficulties, seven-term expansions. In this paper we construct asymptotic expansions for the tail area of a compound sum, when the distribution of the summands belongs to a class of rapidly varying subexponential distributions. To be more precise, let X?, i > 1, be a sequence of independent random variables, all having the same distribution, F. For any positive integer n the distribution of the partial sums Sn = X + + Xn is the n-fold convolution F*n. We set So = 0 and therefore F*? is defined as the distribution of the point mass at the origin. Let N be a nonnegative integer-valued random variable, independent of the X s. We consider the distribution G of the compound sum Sn, that is E F*N, and we seek an asymptotic expansion for its tail area G = 1 ? G. First-order asymptotic results for G have been obtained by (1979), Cline (1987), and Embrechts (1985). A second-order formula may be found in Griibel (1987) and Omey and Willekens (1987). Compound sums or subordinated distributions arise as distributions of interest in several stochastic models. In insurance risk theory, it models the total claim amount. For a discussion of issues related to random sums and insurance risk, we refer the reader to Embrechts et",
"",
"In this paper we derive precise tail-area approximations for the sum of an arbitrary finite number of independent heavy-tailed random variables. In order to achieve second-order asymptotics, a mild regularity condition is imposed on the class of distribution functions with regularly varying tails. @PARASPLIT Higher-order asymptotics are also obtained when considering a semiparametric subclass of distribution functions with regularly varying tails. These semiparametric subclasses are shown to be closed under convolutions and a convolution algebra is constructed to evaluate the parameters of a convolution from the parameters of the constituent distributions in the convolution. A Maple code is presented which does this task.",
""
]
}
|
1203.3051
|
2949815161
|
We propose a simple method for combining together voting rules that performs a run-off between the different winners of each voting rule. We prove that this combinator has several good properties. For instance, even if just one of the base voting rules has a desirable property like Condorcet consistency, the combination inherits this property. In addition, we prove that combining voting rules together in this way can make finding a manipulation more computationally difficult. Finally, we study the impact of this combinator on approximation methods that find close to optimal manipulations.
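A hedged sketch of the parallel run-off combinator described above: run two base rules and hold a pairwise majority run-off between their winners. The choice of plurality and Borda as base rules and the alphabetical tie-breaking are our assumptions, not the paper's exact definitions:

```python
from collections import Counter

def plurality(profile):
    """Most first-place votes; ties broken by candidate name."""
    counts = Counter(ballot[0] for ballot in profile)
    best = max(counts.values())
    return min(c for c, v in counts.items() if v == best)

def borda(profile):
    """Borda count; ties broken by candidate name."""
    m = len(profile[0])
    scores = Counter()
    for ballot in profile:
        for pos, c in enumerate(ballot):
            scores[c] += m - 1 - pos
    best = max(scores.values())
    return min(c for c, v in scores.items() if v == best)

def runoff_combinator(rule1, rule2, profile):
    """Parallel combinator: run both rules, then a pairwise majority
    run-off between the two winners (ties broken by name)."""
    a, b = rule1(profile), rule2(profile)
    if a == b:
        return a
    pref_a = sum(ballot.index(a) < ballot.index(b) for ballot in profile)
    if 2 * pref_a > len(profile):
        return a
    if 2 * pref_a < len(profile):
        return b
    return min(a, b)
```

Ballots are tuples ranking all candidates from most to least preferred; the combinator only ever needs the two base winners and the voters' pairwise preference between them.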
|
Different ways of combining voting rules to make manipulation computationally hard have been investigated recently. Conitzer and Sandholm @cite_10 studied the impact on the computational complexity of manipulation of adding an initial round of the Cup rule to a voting rule. This initial round eliminates half the candidates and makes manipulation NP-hard to compute for several voting rules, including plurality and Borda. Elkind and Lipmaa @cite_2 extended this idea to a general technique for combining two voting rules. The first voting rule is run for some number of rounds to eliminate some of the candidates, before the second voting rule is applied to the candidates that remain. They proved that many such combinations of voting rules are NP-hard to manipulate. Note that theirs is a sequential combinator, in which the two rules are run in sequence, whilst ours (as we will see soon) is a parallel combinator, in which the two rules are run in parallel. More recently, Walsh and Xia @cite_8 showed that using a lottery to eliminate some of the voters (instead of some of the candidates) is another mechanism to make manipulation intractable to compute.
|
{
"cite_N": [
"@cite_8",
"@cite_10",
"@cite_2"
],
"mid": [
"",
"2950164662",
"1580922218"
],
"abstract": [
"",
"Voting is a general method for preference aggregation in multiagent settings, but seminal results have shown that all (nondictatorial) voting protocols are manipulable. One could try to avoid manipulation by using voting protocols where determining a beneficial manipulation is hard computationally. A number of recent papers study the complexity of manipulating existing protocols. This paper is the first work to take the next step of designing new protocols that are especially hard to manipulate. Rather than designing these new protocols from scratch, we instead show how to tweak existing protocols to make manipulation hard, while leaving much of the original nature of the protocol intact. The tweak studied consists of adding one elimination preround to the election. Surprisingly, this extremely simple and universal tweak makes typical protocols hard to manipulate! The protocols become NP-hard, #P-hard, or PSPACE-hard to manipulate, depending on whether the schedule of the preround is determined before the votes are collected, after the votes are collected, or the scheduling and the vote collecting are interleaved, respectively. We prove general sufficient conditions on the protocols for this tweak to introduce the hardness, and show that the most common voting protocols satisfy those conditions. These are the first results in voting settings where manipulation is in a higher complexity class than NP (presuming PSPACE @math NP).",
"This paper addresses the problem of constructing voting protocols that are hard to manipulate. We describe a general technique for obtaining a new protocol by combining two or more base protocols, and study the resulting class of (vote-once) hybrid voting protocols, which also includes most previously known manipulation-resistant protocols. We show that for many choices of underlying base protocols, including some that are easily manipulable, their hybrids are NP-hard to manipulate, and demonstrate that this method can be used to produce manipulation-resistant protocols with unique combinations of useful features."
]
}
|
1203.2395
|
2114871457
|
We prove that for @math there exists a compact subset @math of the closed ball in @math of radius @math , such that @math has Hausdorff dimension @math and does not symplectically embed into the standard open symplectic cylinder. The second main result is a lower bound on the @math -th regular coisotropic capacity, which is sharp up to a factor of 3. For an open subset of a geometrically bounded, aspherical symplectic manifold, this capacity is a lower bound on its displacement energy. The proofs of the results involve a certain Lagrangian submanifold of linear space, which was considered by M. Audin and L. Polterovich.
|
Gromov's non-squeezing result (cf. @cite_0 ) implies that @math . This can be strengthened to the equality @math , which follows from [Theorem 6] SZSmall . In the case @math we have @math . This is a consequence of the following result. The proof of this result is based on Moser isotopy. In contrast with this proposition, a straightforward argument shows that @math . Hence in the case @math , the values @math are all known.
|
{
"cite_N": [
"@cite_0"
],
"mid": [
"1549793071"
],
"abstract": [
"Introduction Symplectic manifolds and lagrangian submanifolds, examples Lagrangian splittings, real and complex polarizations, Kahler manifolds Reduction, the calculus of canonical relations, intermediate polarizations Hamiltonian systems and group actions on symplectic manifolds Normal forms Lagrangian submanifolds and families of functions Intersection theory of lagrangian submanifolds Quantization on cotangent bundles Quantization and polarizations Quantizing lagrangian submanifolds and subspaces, construction of the Maslov bundle References."
]
}
|
1203.2395
|
2114871457
|
We prove that for @math there exists a compact subset @math of the closed ball in @math of radius @math , such that @math has Hausdorff dimension @math and does not symplectically embed into the standard open symplectic cylinder. The second main result is a lower bound on the @math -th regular coisotropic capacity, which is sharp up to a factor of 3. For an open subset of a geometrically bounded, aspherical symplectic manifold, this capacity is a lower bound on its displacement energy. The proofs of the results involve a certain Lagrangian submanifold of linear space, which was considered by M. Audin and L. Polterovich.
|
For @math the capacity @math is related to a definition recently introduced by H. Geiges and K. Zehmisch: In @cite_1 @cite_9 these authors defined, for any symplectic manifold @math , [c(V, ):= (M, ) ( ) , | , contact type embedding (M, ) (V, ) , ] where the supremum is taken over all closed contact manifolds @math , and @math denotes the infimum of all positive periods of closed orbits of the Reeb vector field @math . They showed that @math is a normalized symplectic capacity. (See [Theorem 4.5] GZWeinstein .)
|
{
"cite_N": [
"@cite_9",
"@cite_1"
],
"mid": [
"2096712060",
"2110335812"
],
"abstract": [
"We study holomorphic spheres in certain symplectic cobordisms and derive information about periodic Reeb orbits in the concave end of these cobordisms from the non-compactness of the relevant moduli spaces. We use this to confirm the strong Weinstein conjecture (predicting the existence of null-homologous Reeb links) for various higher-dimensional contact manifolds, including contact type hypersurfaces in subcritical Stein manifolds and in some cotangent bundles. The quantitative character of this result leads to the definition of a symplectic capacity.",
"We apply the method of filling with holomorphic discs to a 4-dimensional sym- plectic cobordism with the standard contact 3-sphere as one convex boundary component. We establish the following dichotomy: either the cobordism is diffeomorphic to a ball, or there is a periodic Reeb orbit of quantifiably short period in the concave boundary of the cobordism. This allows us to give a unified treatment of various results concerning Reeb dy- namics on contact 3-manifolds, symplectic fillability, the topology of symplectic cobordisms, symplectic nonsqueezing, and the nonexistence of exact Lagrangian surfaces in standard symplectic 4-space."
]
}
|
1203.2395
|
2114871457
|
We prove that for @math there exists a compact subset @math of the closed ball in @math of radius @math , such that @math has Hausdorff dimension @math and does not symplectically embed into the standard open symplectic cylinder. The second main result is a lower bound on the @math -th regular coisotropic capacity, which is sharp up to a factor of 3. For an open subset of a geometrically bounded, aspherical symplectic manifold, this capacity is a lower bound on its displacement energy. The proofs of the results involve a certain Lagrangian submanifold of linear space, which was considered by M. Audin and L. Polterovich.
|
As a consequence of Theorem and [Theorem 4] SZSmall , the value of the capacity @math on the ball @math lies between @math and @math . In the case @math this value can be exactly calculated, if we modify the definition of @math by restricting to Lagrangian submanifolds. Namely, the capacity @math obtained in this way satisfies [A_ ^+(B^4)= . ] To see this, we denote by @math the standard torus in @math . For every @math the rescaled torus @math is a Lagrangian submanifold of @math , with minimal area @math . It follows that @math . To see the opposite inequality, note that every orientable closed connected Lagrangian submanifold @math is diffeomorphic to the torus @math , since its Euler characteristic vanishes. For such an @math , K. Cieliebak and K. Mohnke proved @cite_3 that @math . The statement follows.
|
{
"cite_N": [
"@cite_3"
],
"mid": [
"2083308781"
],
"abstract": [
"Let @math be a geometrically bounded symplectic manifold, @math be a closed, regular coisotropic submanifold, and @math be a Hamiltonian diffeomorphism. The main result of this article is that the number of leafwise fixed points of @math is bounded below by the sum of the Betti numbers of @math , provided that the Hofer distance between @math and the identity is small enough and the pair @math is non-degenerate. As an application, I prove a presymplectic non-embedding result. A version of the Arnold-Givental conjecture for coisotropic submanifolds is also discussed."
]
}
|
1203.2200
|
2951069220
|
To understand the structural dynamics of a large-scale social, biological or technological network, it may be useful to discover behavioral roles representing the main connectivity patterns present over time. In this paper, we propose a scalable non-parametric approach to automatically learn the structural dynamics of the network and individual nodes. Roles may represent structural or behavioral patterns such as the center of a star, peripheral nodes, or bridge nodes that connect different communities. Our novel approach learns the appropriate structural role dynamics for any arbitrary network and tracks the changes over time. In particular, we uncover the specific global network dynamics and the local node dynamics of a technological, communication, and social network. We identify interesting node and network patterns such as stationary and non-stationary roles, spikes/steps in role-memberships (perhaps indicating anomalies), increasing/decreasing role trends, among many others. Our results indicate that the nodes in each of these networks have distinct connectivity patterns that are non-stationary and evolve considerably over time. Overall, the experiments demonstrate the effectiveness of our approach for fast mining and tracking of the dynamics in large networks. Furthermore, the dynamic structural representation provides a basis for building more sophisticated models and tools that are fast for exploring large dynamic networks.
|
While there is a lot of work on dynamic graph patterns @cite_7 @cite_3 @cite_9 @cite_17 @cite_8 , temporal link prediction @cite_18 , anomaly detection @cite_4 , dynamic communities @cite_19 @cite_0 @cite_15 , and many others @cite_23 @cite_16 @cite_14 , no one has yet proposed a scalable role-based analysis framework for large time-varying networks. The closest work is that of @cite_2 @cite_24 , where they develop the dMMSB model (based on a completely different process) for small graphs. Their model is capable of handling 1,000 nodes in approximately 1 day, while our approach is linear in the number of edges and capable of handling 1,000 nodes in only a few minutes (practical for large real-world networks).
|
{
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_4",
"@cite_7",
"@cite_8",
"@cite_9",
"@cite_3",
"@cite_0",
"@cite_19",
"@cite_24",
"@cite_23",
"@cite_2",
"@cite_15",
"@cite_16",
"@cite_17"
],
"mid": [
"1864134408",
"1983685710",
"2106236875",
"1606050636",
"2155640700",
"1994473607",
"",
"2050153214",
"2007516075",
"2047309056",
"2112056172",
"2013661955",
"2145977038",
"2166711859",
"2158382689"
],
"abstract": [
"The data in many disciplines such as social networks, Web analysis, etc. is link-based, and the link structure can be exploited for many different data mining tasks. In this article, we consider the problem of temporal link prediction: Given link data for times 1 through T, can we predict the links at time T + 1? If our data has underlying periodic structure, can we predict out even further in time, i.e., links at time T + 2, T + 3, etc.? In this article, we consider bipartite graphs that evolve over time and consider matrix- and tensor-based methods for predicting future links. We present a weight-based method for collapsing multiyear data into a single matrix. We show how the well-known Katz method for link prediction can be extended to bipartite graphs and, moreover, approximated in a scalable way using a truncated singular value decomposition. Using a CANDECOMP PARAFAC tensor decomposition of the data, we illustrate the usefulness of exploiting the natural three-dimensional structure of temporal link data. Through several numerical experiments, we demonstrate that both matrix- and tensor-based techniques are effective for temporal link prediction despite the inherent difficulty of the problem. Additionally, we show that tensor-based techniques are particularly effective for temporal data with varying periodic patterns.",
"Event-based network data consists of sets of events over time, each of which may involve multiple entities. Examples include email traffic, telephone calls, and research publications (interpreted as co-authorship events). Traditional network analysis techniques, such as social network models, often aggregate the relational information from each event into a single static network. In contrast, in this paper we focus on the temporal nature of such data. In particular, we look at the problems of temporal link prediction and node ranking, and describe new methods that illustrate opportunities for data mining and machine learning techniques in this context. Experimental results are discussed for a large set of co-authorship events measured over multiple years, and a large corporate email data set spanning 21 months.",
"We address the problem of detecting characteristic patterns in communication networks. We introduce a scalable approach based on set-system discrepancy. By implicitly labeling each network edge with the sequence of times in which its two endpoints communicate, we view an entire communication network as a set-system. This view allows us to use combinatorial discrepancy as a mechanism to \"observe\" system behavior at different time scales. We illustrate our approach, called Discrepancy-based Novelty Detector (DND), on networks obtained from emails, blue tooth connections, IP traffic, and tweets. DND has almost linear runtime complexity and linear storage complexity in the number of communications. Examples of novel discrepancies that it detects are (i) asynchronous communications and (ii) disagreements in the firing rates of nodes and edges relative to the communication network as a whole.",
"How do blogs produce posts? What local, underlying mechanisms lead to the bursty temporal behaviors observed in blog networks? Earlier work analyzed network patterns of blogs and found that blog behavior is bursty and often follows power laws in both topological and temporal characteristics. However, no intuitive and realistic model has yet been introduced, that can lead to such patterns. This is exactly the focus of this work. We propose a generative model that uses simple and intuitive principles for each individual blog, and yet it is able to produce the temporal characteristics of the blogosphere together with global topological network patterns, like power-laws for degree distributions, for inter-posting times, and several more. Our model ZC uses a novel ‘zero-crossing’ approach based on a random walk, combined with other powerful ideas like exploration and exploitation. This makes it the first model to simultaneously model the topology and temporal dynamics of the blogosphere. We validate our model with experiments on a large collection of 45,000 blogs and 2.2 million posts.",
"How can we find communities in dynamic networks of socialinteractions, such as who calls whom, who emails whom, or who sells to whom? How can we spot discontinuity time-points in such streams of graphs, in an on-line, any-time fashion? We propose GraphScope, that addresses both problems, using information theoretic principles. Contrary to the majority of earlier methods, it needs no user-defined parameters. Moreover, it is designed to operate on large graphs, in a streaming fashion. We demonstrate the efficiency and effectiveness of our GraphScope on real datasets from several diverse domains. In all cases it produces meaningful time-evolving patterns that agree with human intuition.",
"We present an analysis of a person-to-person recommendation network, consisting of 4 million people who made 16 million recommendations on half a million products. We observe the propagation of recommendations and the cascade sizes, which we explain by a simple stochastic model. We analyze how user behavior varies within user communities defined by a recommendation network. Product purchases follow a ‘long tail’ where a significant share of purchases belongs to rarely sold items. We establish how the recommendation network grows over time and how effective it is from the viewpoint of the sender and receiver of the recommendations. While on average recommendations are not very effective at inducing purchases and do not spread very far, we present a model that successfully identifies communities, product, and pricing categories for which viral marketing seems to be very effective.",
"",
"A multi-mode network typically consists of multiple heterogeneous social actors among which various types of interactions could occur. Identifying communities in a multi-mode network can help understand the structural properties of the network, address the data shortage and unbalanced problems, and assist tasks like targeted marketing and finding influential actors within or between groups. In general, a network and the membership of groups often evolve gradually. In a dynamic multi-mode network, both actor membership and interactions can evolve, which poses a challenging problem of identifying community evolution. In this work, we try to address this issue by employing the temporal information to analyze a multi-mode network. A spectral framework and its scalability issue are carefully studied. Experiments on both synthetic data and real-world large scale networks demonstrate the efficacy of our algorithm and suggest its generality in solving problems with complex relationships.",
"We discover communities from social network data and analyze the community evolution. These communities are inherent characteristics of human interaction in online social networks, as well as paper citation networks. Also, communities may evolve over time, due to changes to individuals' roles and social status in the network as well as changes to individuals' research interests. We present an innovative algorithm that deviates from the traditional two-step approach to analyze community evolutions. In the traditional approach, communities are first detected for each time slice, and then compared to determine correspondences. We argue that this approach is inappropriate in applications with noisy data. In this paper, we propose FacetNet for analyzing communities and their evolutions through a robust unified process. This novel framework will discover communities and capture their evolution with temporal smoothness given by historic community structures. Our approach relies on formulating the problem in terms of maximum a posteriori (MAP) estimation, where the community structure is estimated both by the observed networked data and by the prior distribution given by historic community structures. Then we develop an iterative algorithm, with proven low time complexity, which is guaranteed to converge to an optimal solution. We perform extensive experimental studies, on both synthetic datasets and real datasets, to demonstrate that our method discovers meaningful communities and provides additional insights not directly obtainable from traditional methods.",
"In a dynamic social or biological environment, the interactions between the actors can undergo large and systematic changes. In this paper we propose a model-based approach to analyze what we will refer to as the dynamic tomography of such time-evolving networks. Our approach offers an intuitive but powerful tool to infer the semantic underpinnings of each actor, such as its social roles or biological functions, underlying the observed network topologies. Our model builds on earlier work on a mixed membership stochastic blockmodel for static networks, and the state-space model for tracking object trajectory. It overcomes a major limitation of many current network inference techniques, which assume that each actor plays a unique and invariant role that accounts for all its interactions with other actors; instead, our method models the role of each actor as a time-evolving mixed membership vector that allows actors to behave differently over time and carry out different roles functions when interacting with different peers, which is closer to reality. We present an efficient algorithm for approximate inference and learning using our model; and we applied our model to analyze a social network between monks (i.e., the Sampson's network), a dynamic email communication network between the Enron employees, and a rewiring gene interaction network of fruit fly collected during its full life cycle. In all cases, our model reveals interesting patterns of the dynamic roles of the actors.",
"Online content exhibits rich temporal dynamics, and diverse realtime user generated content further intensifies this process. However, temporal patterns by which online content grows and fades over time, and by which different pieces of content compete for attention remain largely unexplored. We study temporal patterns associated with online content and how the content's popularity grows and fades over time. The attention that content receives on the Web varies depending on many factors and occurs on very different time scales and at different resolutions. In order to uncover the temporal dynamics of online content we formulate a time series clustering problem using a similarity metric that is invariant to scaling and shifting. We develop the K-Spectral Centroid (K-SC) clustering algorithm that effectively finds cluster centroids with our similarity measure. By applying an adaptive wavelet-based incremental approach to clustering, we scale K-SC to large data sets. We demonstrate our approach on two massive datasets: a set of 580 million Tweets, and a set of 170 million blog posts and news media articles. We find that K-SC outperforms the K-means clustering algorithm in finding distinct shapes of time series. Our analysis shows that there are six main temporal shapes of attention of online content. We also present a simple model that reliably predicts the shape of attention by using information about only a small number of participants. Our analyses offer insight into common temporal patterns of the content on theWeb and broaden the understanding of the dynamics of human attention.",
"In a dynamic social or biological environment, interactions between the underlying actors can undergo large and systematic changes. Each actor can assume multiple roles and their degrees of affiliation to these roles can also exhibit rich temporal phenomena. We propose a state space mixed membership stochastic blockmodel which can track across time the evolving roles of the actors. We also derive an efficient variational inference procedure for our model, and apply it to the Enron email networks, and rewiring gene regulatory networks of yeast. In both cases, our model reveals interesting dynamical roles of the actors.",
"Real-world social networks from a variety of domains can naturally be modelled as dynamic graphs. However, approaches to detecting communities have largely focused on identifying communities in static graphs. Recently, researchers have begun to consider the problem of tracking the evolution of groups of users in dynamic scenarios. Here we describe a model for tracking the progress of communities over time in a dynamic network, where each community is characterised by a series of significant evolutionary events. This model is used to motivate a community-matching strategy for efficiently identifying and tracking dynamic communities. Evaluations on synthetic graphs containing embedded events demonstrate that this strategy can successfully track communities over time in volatile networks. In addition, we describe experiments exploring the dynamic communities detected in a real mobile operator network containing millions of users.",
"Social interactions are conduits for various processes spreading through a population, from rumors and opinions to behaviors and diseases. In the context of the spread of a disease or undesirable behavior, it is important to identify blockers: individuals that are most effective in stopping or slowing down the spread of a process through the population. This problem has so far resisted systematic algorithmic solutions. In an effort to formulate practical solutions, in this paper we ask: Are there structural network measures that are indicative of the best blockers in dynamic social networks? Our contribution is two-fold. First, we extend standard structural network measures to dynamic networks. Second, we compare the blocking ability of individuals in the order of ranking by the new dynamic measures. We found that overall, simple ranking according to a node's static degree, or the dynamic version of a node's degree, performed consistently well. Surprisingly the dynamic clustering coefficient seems to be a good indicator, while its static version performs worse than the random ranking. This provides simple practical and locally computable algorithms for identifying key blockers in a network.",
"In this paper, we introduce SPIRIT (Streaming Pattern dIscoveRy in multIple Time-series). Given n numerical data streams, all of whose values we observe at each time tick t, SPIRIT can incrementally find correlations and hidden variables, which summarise the key trends in the entire stream collection. It can do this quickly, with no buffering of stream values and without comparing pairs of streams. Moreover, it is any-time, single pass, and it dynamically detects changes. The discovered trends can also be used to immediately spot potential anomalies, to do efficient forecasting and, more generally, to dramatically simplify further data processing. Our experimental evaluation and case studies show that SPIRIT can incrementally capture correlations and discover trends, efficiently and effectively."
]
}
|
1203.1521
|
2079094895
|
Applying the theory of compressive sensing in practice always takes different kinds of perturbations into consideration. In this paper, the recovery performance of greedy pursuits with replacement for sparse recovery is analyzed when both the measurement vector and the sensing matrix are contaminated with additive perturbations. Specifically, greedy pursuits with replacement include three algorithms, compressive sampling matching pursuit (CoSaMP), subspace pursuit (SP), and iterative hard thresholding (IHT), where the support estimation is evaluated and updated in each iteration. Based on the restricted isometry property, a unified form of the error bounds of these recovery algorithms is derived under general perturbations for compressible signals. The results reveal that the recovery performance is stable against both perturbations. In addition, these bounds are compared with that of oracle recovery, that is, the least squares solution with the locations of some largest entries in magnitude known a priori. The comparison shows that the error bounds of these algorithms only differ in coefficients from the lower bound of oracle recovery for some certain signal and perturbations, which reveals that oracle-order recovery performance of greedy pursuits with replacement is guaranteed. Numerical simulations are performed to verify the conclusions.
|
A concept similar to that of this paper is presented in @cite_16 , where the authors also establish an oracle-order performance guarantee for the CoSaMP, SP, and IHT algorithms. Based on RIP, the analysis considers the recovery of a @math -sparse signal with the assumption that the measurement vector is corrupted by additive random white Gaussian noise. The main result of @cite_16 is stated as follows, where some notations are replaced for the sake of consistency with our work.
|
{
"cite_N": [
"@cite_16"
],
"mid": [
"2039851473"
],
"abstract": [
"This correspondence presents an average case denoising performance analysis for SP, CoSaMP, and IHT algorithms. This analysis considers the recovery of a noisy signal, with the assumptions that it is corrupted by an additive random zero-mean white Gaussian noise and has a K-sparse representation with respect to a known dictionary D . The proposed analysis is based on the RIP, establishing a near-oracle performance guarantee for each of these algorithms. Beyond bounds for the reconstruction error that hold with high probability, in this work we also provide a bound for the average error."
]
}
|
1203.1521
|
2079094895
|
Applying the theory of compressive sensing in practice always takes different kinds of perturbations into consideration. In this paper, the recovery performance of greedy pursuits with replacement for sparse recovery is analyzed when both the measurement vector and the sensing matrix are contaminated with additive perturbations. Specifically, greedy pursuits with replacement include three algorithms, compressive sampling matching pursuit (CoSaMP), subspace pursuit (SP), and iterative hard thresholding (IHT), where the support estimation is evaluated and updated in each iteration. Based on the restricted isometry property, a unified form of the error bounds of these recovery algorithms is derived under general perturbations for compressible signals. The results reveal that the recovery performance is stable against both perturbations. In addition, these bounds are compared with that of oracle recovery, that is, the least squares solution with the locations of some largest entries in magnitude known a priori. The comparison shows that the error bounds of these algorithms only differ in coefficients from the lower bound of oracle recovery for some certain signal and perturbations, which reveals that oracle-order recovery performance of greedy pursuits with replacement is guaranteed. Numerical simulations are performed to verify the conclusions.
|
In other relevant works @cite_4 @cite_33 , the error bounds of CoSaMP and SP are also derived under general perturbations. The analysis is performed following the steps of @cite_3 for basis pursuit. Define @math . The main result in @cite_4 states that under the conditions that @math and @math , the recovered solution of CoSaMP satisfies @math . Further define @math , where @math . The main result in @cite_33 states that under the conditions that @math and that @math , the recovered solution of SP satisfies @math .
|
{
"cite_N": [
"@cite_4",
"@cite_33",
"@cite_3"
],
"mid": [
"1983749341",
"1561053739",
"2156377127"
],
"abstract": [
"Applications of compressed sensing motivate the possibility of using different operators to encode and decode a signal of interest. Since it is clear that the operators cannot be too different, we can view the discrepancy between the two matrices as a perturbation. The stability of l1-minimization and greedy algorithms to recover the signal in the presence of additive noise is by now well-known. Recently however, work has been done to analyze these methods with noise in the measurement matrix, which generates a multiplicative noise term. This new framework of generalized perturbations (i.e., both additive and multiplicative noise) extends the prior work on stable signal recovery from incomplete and inaccurate measurements of Candes, Romberg and Tao using Basis Pursuit (BP), and of Needell and Tropp using Compressive Sampling Matching Pursuit (CoSaMP). We show, under reasonable assumptions, that the stability of the reconstructed signal by both BP and CoSaMP is limited by the noise level in the observation. Our analysis extends easily to arbitrary greedy methods.",
"In this paper, the Subspace Pursuit (SP) recovery of signals with sensing matrix perturbations is analyzed. Previous studies have only considered the robustness of Basis Pursuit and greedy algorithms to recover the signal in the presence of additive noise in the measurement and/or signal. Since it is impractical to exactly implement the sampling matrix A in a physical sensor, precision errors must be considered. Recently, work has been done to analyze the methods with noise in the sampling matrix, which generates a multiplicative noise term. This new perturbed framework (both additive and multiplicative noise) extends the prior work of Basis Pursuit and greedy algorithms on stable signal recovery from incomplete and inaccurate measurements. Our work shows that, under reasonable conditions, the stability of the SP solution in the completely perturbed scenario is limited by the total noise in the observation.",
"We analyze the Basis Pursuit recovery of signals with general perturbations. Previous studies have only considered partially perturbed observations Ax + e. Here, x is a signal which we wish to recover, A is a full-rank matrix with more columns than rows, and e is simple additive noise. Our model also incorporates perturbations E to the matrix A which result in multiplicative noise. This completely perturbed framework extends the prior work of Candes, Romberg, and Tao on stable signal recovery from incomplete and inaccurate measurements. Our results show that, under suitable conditions, the stability of the recovered signal is limited by the noise level in the observation. Moreover, this accuracy is within a constant multiple of the best-case reconstruction using the technique of least squares. In the absence of additive noise, numerical simulations essentially confirm that this error is a linear function of the relative perturbation."
]
}
|
1203.1521
|
2079094895
|
In practice, applying the theory of compressive sensing must take different kinds of perturbations into consideration. In this paper, the recovery performance of greedy pursuits with replacement for sparse recovery is analyzed when both the measurement vector and the sensing matrix are contaminated with additive perturbations. Specifically, greedy pursuits with replacement include three algorithms, compressive sampling matching pursuit (CoSaMP), subspace pursuit (SP), and iterative hard thresholding (IHT), where the support estimate is evaluated and updated in each iteration. Based on the restricted isometry property, a unified form of the error bounds of these recovery algorithms is derived under general perturbations for compressible signals. The results reveal that the recovery performance is stable against both perturbations. In addition, these bounds are compared with that of oracle recovery, i.e., the least-squares solution with the locations of some of the largest entries in magnitude known a priori. The comparison shows that the error bounds of these algorithms differ only in coefficients from the lower bound of oracle recovery for certain signals and perturbations, which reveals that oracle-order recovery performance of greedy pursuits with replacement is guaranteed. Numerical simulations are performed to verify the conclusions.
|
The main difference between @cite_16 and our work is that we consider a more general completely perturbed scenario, and the optimality of the recovery performance is also established in this sense. Compared with @cite_4 @cite_33 , the RIC requirement in our work is on the perturbed matrix @math rather than @math , due to the fact that only @math is available for recovery. Also, the RIC requirement here is stated with a constant parameter. In addition, conditions such as ) or ) are not required in our assumptions. Our results are compared with oracle recovery and shown to be optimal up to the coefficients, and they are verified by extensive numerical simulations.
|
{
"cite_N": [
"@cite_16",
"@cite_4",
"@cite_33"
],
"mid": [
"2039851473",
"1983749341",
"1561053739"
],
"abstract": [
"This correspondence presents an average case denoising performance analysis for SP, CoSaMP, and IHT algorithms. This analysis considers the recovery of a noisy signal, with the assumptions that it is corrupted by an additive random zero-mean white Gaussian noise and has a K-sparse representation with respect to a known dictionary D . The proposed analysis is based on the RIP, establishing a near-oracle performance guarantee for each of these algorithms. Beyond bounds for the reconstruction error that hold with high probability, in this work we also provide a bound for the average error.",
"Applications of compressed sensing motivate the possibility of using different operators to encode and decode a signal of interest. Since it is clear that the operators cannot be too different, we can view the discrepancy between the two matrices as a perturbation. The stability of l1-minimization and greedy algorithms to recover the signal in the presence of additive noise is by now well-known. Recently however, work has been done to analyze these methods with noise in the measurement matrix, which generates a multiplicative noise term. This new framework of generalized perturbations (i.e., both additive and multiplicative noise) extends the prior work on stable signal recovery from incomplete and inaccurate measurements of Candes, Romberg and Tao using Basis Pursuit (BP), and of Needell and Tropp using Compressive Sampling Matching Pursuit (CoSaMP). We show, under reasonable assumptions, that the stability of the reconstructed signal by both BP and CoSaMP is limited by the noise level in the observation. Our analysis extends easily to arbitrary greedy methods.",
"In this paper, the Subspace Pursuit (SP) recovery of signals with sensing matrix perturbations is analyzed. Previous studies have only considered the robustness of Basis Pursuit and greedy algorithms to recover the signal in the presence of additive noise in the measurement and/or signal. Since it is impractical to exactly implement the sampling matrix A in a physical sensor, precision errors must be considered. Recently, work has been done to analyze the methods with noise in the sampling matrix, which generates a multiplicative noise term. This new perturbed framework (both additive and multiplicative noise) extends the prior work of Basis Pursuit and greedy algorithms on stable signal recovery from incomplete and inaccurate measurements. Our work shows that, under reasonable conditions, the stability of the SP solution in the completely perturbed scenario is limited by the total noise in the observation."
]
}
|
1203.1392
|
2951037738
|
Region-based memory management (RBMM) is a form of compile time memory management, well-known from the functional programming world. In this paper we describe our work on implementing RBMM for the logic programming language Mercury. One interesting point about Mercury is that it is designed with strong type, mode, and determinism systems. These systems not only provide Mercury programmers with several direct software engineering benefits, such as self-documenting code and clear program logic, but also give language implementors a large amount of information that is useful for program analyses. In this work, we make use of this information to develop program analyses that determine the distribution of data into regions and transform Mercury programs by inserting into them the necessary region operations. We prove the correctness of our program analyses and transformation. To execute the annotated programs, we have implemented runtime support that tackles the two main challenges posed by backtracking. First, backtracking can require regions removed during forward execution to be "resurrected"; and second, any memory allocated during a computation that has been backtracked over must be recovered promptly and without waiting for the regions involved to come to the end of their life. We describe in detail our solution of both these problems. We study in detail how our RBMM system performs on a selection of benchmark programs, including some well-known difficult cases for RBMM. Even with these difficult cases, our RBMM-enabled Mercury system obtains clearly faster runtimes for 15 out of 18 benchmarks compared to the base Mercury system with its Boehm runtime garbage collector, with an average runtime speedup of 24%, and an average reduction in memory requirements of 95%. In fact, our system achieves optimal memory consumption in some programs.
|
In this section, we only mention the most important and most related papers. It is not our intention to give a detailed overview of the research on RBMM for other programming paradigms. An in-depth review of RBMM research for functional programming can be found in @cite_10 .
|
{
"cite_N": [
"@cite_10"
],
"mid": [
"2033540531"
],
"abstract": [
"We report on our experience with designing, implementing, proving correct, and evaluating a region-based memory management system."
]
}
|
1203.1392
|
2951037738
|
Region-based memory management (RBMM) is a form of compile time memory management, well-known from the functional programming world. In this paper we describe our work on implementing RBMM for the logic programming language Mercury. One interesting point about Mercury is that it is designed with strong type, mode, and determinism systems. These systems not only provide Mercury programmers with several direct software engineering benefits, such as self-documenting code and clear program logic, but also give language implementors a large amount of information that is useful for program analyses. In this work, we make use of this information to develop program analyses that determine the distribution of data into regions and transform Mercury programs by inserting into them the necessary region operations. We prove the correctness of our program analyses and transformation. To execute the annotated programs, we have implemented runtime support that tackles the two main challenges posed by backtracking. First, backtracking can require regions removed during forward execution to be "resurrected"; and second, any memory allocated during a computation that has been backtracked over must be recovered promptly and without waiting for the regions involved to come to the end of their life. We describe in detail our solution of both these problems. We study in detail how our RBMM system performs on a selection of benchmark programs, including some well-known difficult cases for RBMM. Even with these difficult cases, our RBMM-enabled Mercury system obtains clearly faster runtimes for 15 out of 18 benchmarks compared to the base Mercury system with its Boehm runtime garbage collector, with an average runtime speedup of 24%, and an average reduction in memory requirements of 95%. In fact, our system achieves optimal memory consumption in some programs.
|
While the authors of @cite_13 also used a stack in their inference algorithm, they nevertheless thought that forcing stack discipline on the lifetimes of regions is too strict, and they decoupled region creation and removal, allowing regions to have arbitrarily overlapped lifetimes. Going even further in this direction, Henglein, Makholm, and Niss in @cite_0 proposed an imperative sublanguage on regions. In their system, regions are allowed not only to have arbitrary lifetimes but also to change their bindings. Their regions also contain reference counters that can give their system more flexibility in controlling their lifetimes. The most complete functional programming system with RBMM is the MLKit @cite_12 , which manages storage solely by RBMM. This system, while still using stack discipline for the lifetimes of regions, supports both resetting regions to zero size and runtime garbage collection within regions. Its performance is competitive with other state-of-the-art SML compilers.
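The decoupling of region creation and removal described above can be made concrete with a toy allocator. This is a sketch only; the class and method names are invented for illustration and are not Mercury's, Henglein et al.'s, or the MLKit's interface:

```python
class Region:
    """Toy region: a growable set of objects that are freed all at once."""
    def __init__(self, name):
        self.name = name
        self.objects = []
        self.live = True

    def alloc(self, obj):
        assert self.live, "allocation into a removed region"
        self.objects.append(obj)
        return obj

    def remove(self):
        self.objects.clear()  # whole-region deallocation in one operation
        self.live = False

# Lifetimes need not be stack-like: r1 is created first yet removed first,
# while r2 stays live -- the decoupling the cited works argue for.
r1, r2 = Region("r1"), Region("r2")
r1.alloc("temporary")
r2.alloc("result")
r1.remove()
```

Under strict stack discipline, `r1` (created first) would have to outlive `r2`; dropping that restriction is what lets short-lived temporaries be reclaimed early.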
|
{
"cite_N": [
"@cite_0",
"@cite_13",
"@cite_12"
],
"mid": [
"2074003683",
"2040070287",
""
],
"abstract": [
"Region-based memory management can be used to control dynamic memory allocations and deallocations safely and efficiently. Existing (direct-style) region systems that statically guarantee region safety---no dereferencing of dangling pointers---are based on refinements of Tofte and Talpin's seminal work on region inference for managing heap memory in stacks of regions.We present a unified Floyd-Hoare Logic inspired region type system for reasoning about and inferring region-based memory management, using a sublanguage of imperative region commands. Our system expresses and performs control-sensitive region management without requiring a stack discipline for allocating and deallocating regions. Furthermore, it captures storage mode analysis and late allocation early deallocation analysis in a single, expressive, unified logical framework. Explicit region aliasing in combination with reference-counted regions provides flexible, context-sensitive early memory deallocation and simultaneously dispenses with the need for an integrated region alias analysis.In this paper we present the design of our region type system, illustrate its practical expressiveness, compare it to existing region analyses, demonstrate how this eliminates the need for previously required source code rewritings for good memory performance, and describe automatic inference of region commands that give consistently better (or at least equally good) memory performance as existing inference techniques.",
"Static memory management replaces runtime garbage collection with compile-time annotations that make all memory allocation and deallocation explicit in a program. We improve upon the Tofte/Talpin region-based scheme for compile-time memory management [TT94]. In the Tofte/Talpin approach, all values, including closures, are stored in regions. Region lifetimes coincide with lexical scope, thus forming a runtime stack of regions and eliminating the need for garbage collection. We relax the requirement that region lifetimes be lexical. Rather, regions are allocated late and deallocated as early as possible by explicit memory operations. The placement of allocation and deallocation annotations is determined by solving a system of constraints that expresses all possible annotations. Experiments show that our approach reduces memory requirements significantly, in some cases asymptotically.",
""
]
}
|
1203.1392
|
2951037738
|
Region-based memory management (RBMM) is a form of compile time memory management, well-known from the functional programming world. In this paper we describe our work on implementing RBMM for the logic programming language Mercury. One interesting point about Mercury is that it is designed with strong type, mode, and determinism systems. These systems not only provide Mercury programmers with several direct software engineering benefits, such as self-documenting code and clear program logic, but also give language implementors a large amount of information that is useful for program analyses. In this work, we make use of this information to develop program analyses that determine the distribution of data into regions and transform Mercury programs by inserting into them the necessary region operations. We prove the correctness of our program analyses and transformation. To execute the annotated programs, we have implemented runtime support that tackles the two main challenges posed by backtracking. First, backtracking can require regions removed during forward execution to be "resurrected"; and second, any memory allocated during a computation that has been backtracked over must be recovered promptly and without waiting for the regions involved to come to the end of their life. We describe in detail our solution of both these problems. We study in detail how our RBMM system performs on a selection of benchmark programs, including some well-known difficult cases for RBMM. Even with these difficult cases, our RBMM-enabled Mercury system obtains clearly faster runtimes for 15 out of 18 benchmarks compared to the base Mercury system with its Boehm runtime garbage collector, with an average runtime speedup of 24 , and an average reduction in memory requirements of 95 . In fact, our system achieves optimal memory consumption in some programs.
|
Another difference between the two systems that is likely to be more important in practice is that the liveness information we derive allows interprocedural creation of regions, something that was not handled in @cite_27 . This can give finer lifetimes to regions, which can result in better memory reuse in certain situations. For example, for a region like R1 in p in Figure , the system in @cite_27 would force R1 to be live throughout p . If we had replaced the atom at (4) with a recursive call to p (such as p(A - 1, B) ), their system would build up all the temporary memory allocated at (1) in R1 .
|
{
"cite_N": [
"@cite_27"
],
"mid": [
"2035974062"
],
"abstract": [
"This paper presents a region analysis and transformation framework for Java programs. Given an input Java program, the compiler automatically translates it into an equivalent output program with region-based memory management. The generated program contains statements for creating regions, allocating objects in regions, removing regions, and passing regions as parameters. As a particular case, the analysis can enable the allocation of objects on the stack. Our algorithm uses a flow-insensitive and context-sensitive points-to analysis to partition the memory of the program into regions and to identify points-to relations between regions. It then performs a flow-sensitive, inter-procedural region liveness analysis to identify object lifetimes. Finally, it uses the computed region information to produce the region annotations in the output program. Our results indicate that, for several of our benchmarks, the transformation can allocate most of the data on stack or in short-lived regions, and can yield substantial memory savings."
]
}
|
1203.1392
|
2951037738
|
Region-based memory management (RBMM) is a form of compile time memory management, well-known from the functional programming world. In this paper we describe our work on implementing RBMM for the logic programming language Mercury. One interesting point about Mercury is that it is designed with strong type, mode, and determinism systems. These systems not only provide Mercury programmers with several direct software engineering benefits, such as self-documenting code and clear program logic, but also give language implementors a large amount of information that is useful for program analyses. In this work, we make use of this information to develop program analyses that determine the distribution of data into regions and transform Mercury programs by inserting into them the necessary region operations. We prove the correctness of our program analyses and transformation. To execute the annotated programs, we have implemented runtime support that tackles the two main challenges posed by backtracking. First, backtracking can require regions removed during forward execution to be "resurrected"; and second, any memory allocated during a computation that has been backtracked over must be recovered promptly and without waiting for the regions involved to come to the end of their life. We describe in detail our solution of both these problems. We study in detail how our RBMM system performs on a selection of benchmark programs, including some well-known difficult cases for RBMM. Even with these difficult cases, our RBMM-enabled Mercury system obtains clearly faster runtimes for 15 out of 18 benchmarks compared to the base Mercury system with its Boehm runtime garbage collector, with an average runtime speedup of 24 , and an average reduction in memory requirements of 95 . In fact, our system achieves optimal memory consumption in some programs.
|
Note that using graphs to model storage is not at all new in research about heap structures @cite_1 @cite_26 . Our graphs share many features with annotated types where the annotation on each type constructor is a location or region; see e.g. @cite_19 @cite_7 . Baker in @cite_19 and many others pointed out that such annotated types can also give information about sharing, very similar to a concept used in this paper.
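The unification-based view of storage (as in the almost-linear points-to analysis of @cite_26 ) can be sketched as a union-find over storage-shape nodes. The code below is an illustrative simplification, not the cited analysis:

```python
class Node:
    """Storage-shape node; a variable's annotated type carries one of these."""
    def __init__(self):
        self.parent = self
        self.target = None  # the node this location may point to

def find(n):
    while n.parent is not n:
        n.parent = n.parent.parent  # path halving
        n = n.parent
    return n

def unify(a, b):
    a, b = find(a), find(b)
    if a is b:
        return
    b.parent = a
    if a.target is not None and b.target is not None:
        unify(a.target, b.target)  # pointed-to locations merge as well
    elif b.target is not None:
        a.target = b.target

# p = &x; q = &y; then unifying p with q (e.g. from "p := q") forces
# x and y into one storage node -- exactly the sharing information
# mentioned above.
x, y, p, q = Node(), Node(), Node(), Node()
p.target, q.target = x, y
unify(p, q)
```

The recursive merge of `target` nodes is what makes the inferred types double as a sharing/containment summary of the heap.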
|
{
"cite_N": [
"@cite_19",
"@cite_26",
"@cite_1",
"@cite_7"
],
"mid": [
"2075872176",
"2131135493",
"1975914482",
""
],
"abstract": [
"Type inference is the process by which an expression in an untyped computer language such as the lambda-calculus, Lisp, or a functional language can be assigned a static data type in order to improve the code generated by a compiler. Storage use inference is the process by which a program in a computer language can be statically analyzed to model its run-time behavior, particularly the containment and sharing relations among its run-time data structures. The information generated by storage use inference can also be used to improve the code generated by a compiler, because knowledge of the containment and sharing relations of run-time data structures allows for methods of storage allocation and deallocation which are cheaper than garbage-collected heap storage and allows for the in-place updating of functional aggregates. Type inference and storage use inference have traditionally been considered orthogonal processes, with separate traditions and literature. However, we show in this paper that this separation may be a mistake, because the best-known and best-understood of the type inferencing algorithms—Milner's unification method for ML—already generates valuable sharing and containment information which is then unfortunately discarded. We show that this sharing information is already generated by standard unification algorithms with no additional overhead during unification; however, there is some additional work necessary to extract this information. We have not yet precisely characterized the resolving power of this sharing and containment information, but we believe that it is similar to that generated by researchers using other techniques. However, our scheme seems to only work for functional languages like pure Lisp. The unification of type and storage inferencing yields new insights into the meaning of “aggregate type”, which should prove valuable in the design of future type systems.",
"We present an interprocedural flow-insensitive points-to analysis based on type inference methods with an almost linear time cost complexity. To our knowledge, this is the asymptotically fastest non-trivial interprocedural points-to analysis algorithm yet described. The algorithm is based on a non-standard type system. The type inferred for any variable represents a set of locations and includes a type which in turn represents a set of locations possibly pointed to by the variable. The type inferred for a function variable represents a set of functions it may point to and includes a type signature for these functions. The results are equivalent to those of a flow-insensitive alias analysis (and control flow analysis) that assumes alias relations are reflexive and transitive. This work makes three contributions. The first is a type system for describing a universally valid storage shape graph for a program in linear space. The second is a constraint system which often leads to better results than the \"obvious\" constraint system for the given type system. The third is an almost linear time algorithm for points-to analysis by solving a constraint system.",
"Compilers can make good use of knowledge about the shape of data structures and the values that pointers assume during execution. We present results which show how a compiler can automatically determine some of this information. We believe that practical analyses based on this work can be used in compilers for languages that provide linked data structures. The analysis we present obtains useful information about linked data structures. We summarize unbounded data structures by taking advantage of structure present in the original program. The worst-case time bounds for a naive algorithm are high-degree polynomial, but for the expected (sparse) case we have an efficient algorithm. Previous work has addressed time bounds rarely, and efficient algorithms not at all. The quality of information obtained by this analysis appears to be (generally) an improvement on what is obtained by existing techniques. A simple extension obtains aliasing information for entire data structures that previously was obtained only through declarations. Previous work has shown that this information, however obtained, allows worthwhile optimization.",
""
]
}
|
1203.1392
|
2951037738
|
Region-based memory management (RBMM) is a form of compile time memory management, well-known from the functional programming world. In this paper we describe our work on implementing RBMM for the logic programming language Mercury. One interesting point about Mercury is that it is designed with strong type, mode, and determinism systems. These systems not only provide Mercury programmers with several direct software engineering benefits, such as self-documenting code and clear program logic, but also give language implementors a large amount of information that is useful for program analyses. In this work, we make use of this information to develop program analyses that determine the distribution of data into regions and transform Mercury programs by inserting into them the necessary region operations. We prove the correctness of our program analyses and transformation. To execute the annotated programs, we have implemented runtime support that tackles the two main challenges posed by backtracking. First, backtracking can require regions removed during forward execution to be "resurrected"; and second, any memory allocated during a computation that has been backtracked over must be recovered promptly and without waiting for the regions involved to come to the end of their life. We describe in detail our solution of both these problems. We study in detail how our RBMM system performs on a selection of benchmark programs, including some well-known difficult cases for RBMM. Even with these difficult cases, our RBMM-enabled Mercury system obtains clearly faster runtimes for 15 out of 18 benchmarks compared to the base Mercury system with its Boehm runtime garbage collector, with an average runtime speedup of 24%, and an average reduction in memory requirements of 95%. In fact, our system achieves optimal memory consumption in some programs.
|
The first application of RBMM to logic programming was the work of Makholm for Prolog, described in @cite_2 and @cite_18 . He realized that backtracking can be handled completely by runtime support, which can keep the region inference simple. However, the Prolog system he used was not based on the usual implementation technology for Prolog, the Warren Abstract Machine or WAM. This shortcoming was fixed in @cite_3 , where Makholm and Sagonas extended the WAM to enable region-based memory management. The main differences between their work and ours are that Mercury supports if-then-elses with conditions that can succeed more than once, and that the Mercury implementation generates specialized code for many situations that Prolog handles with a more general mechanism. (For example, Mercury has separate implementations for nondet disjunctions and for semidet disjunctions.) The first difference required new algorithms, while the second posed a tough engineering challenge in keeping overheads down: due to Mercury's higher speed, any given overhead would hurt Mercury more than Prolog.
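The runtime treatment of backtracking mentioned above, reclaiming any memory allocated past a choice point, can be modeled with a toy trail. This is a sketch only; the real WAM extension in @cite_3 is far more involved:

```python
class BacktrackingHeap:
    """Toy heap with a trail: each choice point records the allocation
    high-water mark so backtracking can reclaim memory instantly."""
    def __init__(self):
        self.cells = []
        self.trail = []

    def alloc(self, value):
        self.cells.append(value)
        return len(self.cells) - 1

    def choice_point(self):
        self.trail.append(len(self.cells))  # remember the heap top

    def backtrack(self):
        top = self.trail.pop()
        del self.cells[top:]                # instant reclamation

h = BacktrackingHeap()
h.alloc("kept")          # survives: allocated before the choice point
h.choice_point()
h.alloc("speculative")   # allocated in the backtracked-over computation
h.backtrack()
```

This captures the "instant reclamation" idea from the cited abstracts: everything allocated after the choice point is freed in one step, without waiting for a region's end of life.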
|
{
"cite_N": [
"@cite_18",
"@cite_3",
"@cite_2"
],
"mid": [
"2141871141",
"1710751537",
"73226156"
],
"abstract": [
"We extend Tofte and Talpin's region-based model for memory management to support backtracking and cuts, which makes it suitable for use with Prolog and other logic programming languages. We describe how the extended model can be implemented and report on the performance of a prototype implementation. The prototype implementation performs well when compared to a garbage-collecting Prolog implementation using comparable technology for non-memory-management issues.",
"Region-based memory management is an attractive alternative to garbage collection. It relies on a compile-time analysis to annotate the program with explicit allocation and deallocation instructions, where lifetimes of memory objects are grouped together in regions. This paper investigates how to adapt the runtime part of region-based memory management to the WAM setting.We present additions to the memory architecture and instruction set of the WAM that are necessary to implement regions. We extend an optimized WAM-based Prolog implementation with a region-based memory manager which supports backtracking with instant reclamation, and cuts. The performance of region-based execution is compared with that of the baseline garbage-collected implementation on several benchmark programs. A region-enabled WAM performs competitively and often results in time and or space improvements.",
""
]
}
|
1203.1604
|
1574564893
|
The vehicle routing and scheduling problem has been studied with much interest within the last four decades. In this paper, some of the existing literature dealing with routing and scheduling problems with environmental issues is reviewed, and a description is provided of the problems that have been investigated and how they are treated using combinatorial optimization tools.
|
The DARP is characterized by multiple objectives such as the maximization of the number of customers served, the minimization of the number of vehicles used, and the maximization of the level of service provided on average to the customer (customer waiting time, total time spent in vehicles, difference between actual and desired drop-off times). The DARP can be formulated as a multiobjective mixed integer program. Exact algorithms for the single-vehicle DARP have been developed in @cite_47 @cite_38 . Recently, a branch-and-cut algorithm has been proposed in @cite_87 . Heuristics and meta-heuristics have been proposed for dealing with the dynamic problem on time-dependent networks @cite_50 . For a recent overview of the DARP, see @cite_61 .
|
{
"cite_N": [
"@cite_61",
"@cite_38",
"@cite_87",
"@cite_50",
"@cite_47"
],
"mid": [
"2017383000",
"",
"2051358678",
"2008806816",
"2037992565"
],
"abstract": [
"The Dial-a-Ride Problem (DARP) consists of designing vehicle routes and schedules for n users who specify pick-up and drop-off requests between origins and destinations. The aim is to plan a set of m minimum cost vehicle routes capable of accommodating as many users as possible, under a set of constraints. The most common example arises in door-to-door transportation for elderly or disabled people. The purpose of this article is to review the scientific literature on the DARP. The main features of the problem are described and classified and some modeling issues are discussed. A summary of the most important algorithms is provided.",
"",
"In the dial-a-ride problem, users formulate requests for transportation from a specific origin to a specific destination. Transportation is carried out by vehicles providing a shared service. The problem consists of designing a set of minimum-cost vehicle routes satisfying capacity, duration, time window, pairing, precedence, and ride-time constraints. This paper introduces a mixed-integer programming formulation of the problem and a branch-and-cut algorithm. The algorithm uses new valid inequalities for the dial-a-ride problem as well as known valid inequalities for the traveling salesman, the vehicle routing, and the pick-up and delivery problems. Computational experiments performed on randomly generated instances show that the proposed approach can be used to solve small to medium-size instances.",
"In real-time fleet management, vehicle routes are built in an on-going fashion as vehicle locations, travel times and customer requests are revealed over the planning horizon. To deal with such problems, a new generation of fast on-line algorithms capable of taking into account uncertainty is required. Although several articles on this topic have been published, the literature on real-time vehicle routing is still disorganized. In this paper the research in this field is reviewed and some issues that have not received attention so far are highlighted. A particular emphasis is put on parallel computing strategies.",
"SYNOPTIC ABSTRACTThe single-vehicle dial-a-ride problem with time window constraints for both pick-up and delivery locations, and precedence and capacity constraints, is solved using a forward dynamic programming algorithm. The total distance is minimized. The development of criteria for the elimination of infeasible states results in solution times which increase linearly with problem size."
]
}
|
1203.1604
|
1574564893
|
The vehicle routing and scheduling problem has been studied with much interest within the last four decades. In this paper, some of the existing literature dealing with routing and scheduling problems with environmental issues is reviewed, and a description is provided of the problems that have been investigated and how they are treated using combinatorial optimization tools.
|
In @cite_54 , the authors propose a heuristic two-phase solution procedure for the dial-a-ride problem with two objectives. The first phase consists of an iterated variable neighborhood search-based heuristic that generates approximate weighted-sum solutions, and the second phase is a path relinking module that computes additional efficient solutions.
|
{
"cite_N": [
"@cite_54"
],
"mid": [
"2074422713"
],
"abstract": [
"In this article, we develop a heuristic two-phase solution procedure for the dial-a-ride problem with two objectives. Besides the minimum cost objective a client centered objective has been defined. Phase one consists of an iterated variable neighborhood search-based heuristic, generating approximate weighted sum solutions; phase two is a path relinking module, computing additional efficient solutions. Results for two sets of benchmark instances are reported. For the smaller instances, exact efficient sets are generated by means of the e-constraint method. Comparison shows that the proposed two-phase method is able to generate high-quality approximations of the true Pareto frontier. © 2009 Wiley Periodicals, Inc. NETWORKS, 2009"
]
}
|
1203.1604
|
1574564893
|
The vehicle routing and scheduling problem has been studied with much interest within the last four decades. In this paper, some of the existing literature dealing with routing and scheduling problems with environmental issues is reviewed, and a description is provided of the problems that have been investigated and how they are treated using combinatorial optimization tools.
|
A comprehensive survey on the PDVRP can be found in @cite_19 , where different variants of the problem, models and resolution methods are presented. To our knowledge, no work in the literature treats a real-world pick-up and delivery problem in the context of waste collection for recycling. Some applications of this problem are (1) the door-to-door delivery of mineral water bottles and the simultaneous collection of empty bottles, (2) the laundry service for hotels (collecting dirty clothes and delivering clean clothes), and (3) medical waste collection.
|
{
"cite_N": [
"@cite_19"
],
"mid": [
"1525106674"
],
"abstract": [
"This paper is the second part of a comprehensive survey on routing problems involving pickups and deliveries. Basically, two problem classes can be distinguished. The first part dealt with the transportation of goods from the depot to linehaul customers and from backhaul customers to the depot. The second part now considers all those problems where goods are transported between pickup and delivery locations, denoted as Vehicle Routing Problems with Pickups and Deliveries (VRPPD). These are the Pickup and Delivery Vehicle Routing Problem (PDVRP – unpaired pickup and delivery points), the classical Pickup and Delivery Problem (PDP – paired pickup and delivery points), and the Dial-A-Ride Problem (DARP – passenger transportation between paired pickup and delivery points and user inconvenience taken into consideration). Single as well as multi vehicle mathematical problem formulations for all three VRPPD types are given, and the respective exact, heuristic, and metaheuristic solution methods are discussed."
]
}
|
1203.1604
|
1574564893
|
The vehicle routing and scheduling problem has been studied with much interest within the last four decades. In this paper, some of the existing literature dealing with routing and scheduling problems with environmental issues is reviewed, and a description is provided of the problems that have been investigated and how they are treated using combinatorial optimization tools.
|
This problem has a horizon @math , and there is a frequency for each customer stating how often within this @math period this customer must be visited. A solution to the problem consists of @math sets of routes that jointly satisfy the demand constraints and the frequency constraints @cite_3 .
|
{
"cite_N": [
"@cite_3"
],
"mid": [
"1974123359"
],
"abstract": [
"Abstract In this paper, we study an extension of the PVRP where the vehicles can renew their capacity at some intermediate facilities. Each vehicle returns to the depot only when its work shift is over. For this problem we propose a tabu search (TS) algorithm and present computational results on a set of randomly generated instances and on a set of PVRP instances taken from the literature."
]
}
|
1203.1604
|
1574564893
|
The vehicle routing and scheduling problem has been studied with much interest within the last four decades. In this paper, some of the existing literature dealing with routing and scheduling problems with environmental issues is reviewed, and a description is provided of the problems that have been investigated and how they are treated using combinatorial optimization tools.
|
In this problem @cite_95 , a fleet of capacitated vehicles, all located at a central depot, must serve a set of streets of a network at minimum total cost, such that the load assigned to each vehicle does not exceed its capacity.
|
{
"cite_N": [
"@cite_95"
],
"mid": [
"1591506270"
],
"abstract": [
"Preface. Contributing Authors. 1. A Historical Perspective on Arc Routing H.A. Eiselt, G. Laporte. Part I. Theory: 2. Traversing Graphs: The Eulerian and Hamiltonian Theme h. Fleischner. 3. Matching: Arc Routing and Solution Connection U. Derigs. 4. Arc Routing: Complexity and Approximability M. Dror. 5. Chinese Postman and Euler Tour Problems in Bi-directed Graphs E.L.Johnson. Part II. Solutions: 6. Polyhedral Theory for Arc Routing Problems R.W. Eglese, A.N. Letchford. 7. Linear Programming Based Methods for Solving Arc Routing Problems E. Benavent, et al 8. Transformations and Exact Node Routing Solutions by Column Generation M. Dror, A. Langevin. 9. Heuristic Algorithms A. Hertz, M. Mittaz. Part III. Applications: 10. Roadway Snow and Ice Control J.F. Campbell, A. Langevin. 11. Scheduling of Local Delivery Carrier Routes for the United States Postal Service L. Bodin, L. Levy. 12. Livestock Feed Distribution and Arc Traversal Problems M. Dror, et al"
]
}
|
1203.1604
|
1574564893
|
The vehicle routing and scheduling problem has been studied with much interest within the last four decades. In this paper, some of the existing literature dealing with routing and scheduling problems with environmental issues is reviewed, and a description is provided of the problems that have been investigated and how they are treated using combinatorial optimization tools.
|
In @cite_27 , the authors consider a multiobjective multimodal multicommodity flow problem with time windows and piecewise linear concave cost functions. Based on a Lagrangian relaxation technique, the problem is broken into a set of smaller and easier subproblems, and a subgradient optimization procedure is applied to solve the Lagrangian multiplier problem. The authors in @cite_84 propose an origin-destination integer multi-commodity flow formulation with non-convex piecewise linear costs and use a column generation based heuristic that provides both lower bounds and good-quality feasible solutions. The author in @cite_1 deals with two objectives, cost and risk, and develops a chance-constrained goal programming method to solve the problem.
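The multiplier update at the heart of such relaxation schemes can be illustrated on a toy problem (the data below is purely illustrative, not the multicommodity flow model of @cite_27 ): minimize 2*x1 + 3*x2 subject to x1 + x2 >= 1 with binary x, where the coupling constraint is dualized and a subgradient method adjusts the multiplier.

```python
# Hedged sketch of Lagrangian relaxation + subgradient optimization on a toy
# problem: min 2*x1 + 3*x2  s.t.  x1 + x2 >= 1,  x binary.
# The coupling constraint is moved into the objective with multiplier lmbda >= 0.

def lagrangian_bound(lmbda, c=(2.0, 3.0)):
    """Solve the relaxed subproblem: it separates over the variables."""
    x = [1 if ci - lmbda < 0 else 0 for ci in c]           # easy per-variable choice
    value = lmbda + sum((ci - lmbda) * xi for ci, xi in zip(c, x))
    return value, x

def subgradient(iters=50):
    lmbda, best = 0.0, float("-inf")
    for k in range(iters):
        value, x = lagrangian_bound(lmbda)
        best = max(best, value)                            # best lower bound so far
        g = 1 - sum(x)                                     # subgradient of the dual
        lmbda = max(0.0, lmbda + g / (k + 1))              # diminishing step size
    return best

print(subgradient())  # the dual bound approaches 2.0, the optimal primal cost
```

Each subproblem is trivial once the constraint is dualized; the subgradient step then tightens the lower bound, which is the role the multiplier problem plays in the surveyed approaches.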
|
{
"cite_N": [
"@cite_27",
"@cite_1",
"@cite_84"
],
"mid": [
"2091096614",
"2003392801",
"2186282916"
],
"abstract": [
"This study focuses on one of the intermodal operational issues: how to select best routes for shipments through the international intermodal network. International intermodal routing is complicated by three important characteristics: (1) multiple objectives; (2) scheduled transportation modes and demanded delivery times; and (3) transportation economies of scale. In this paper, the international intermodal routing problem is formulated as a multiobjective multimodal multicommodity flow problem (MMMFP) with time windows and concave costs. The objectives of this paper are to develop a mathematical model encompassing all three essential characteristics, and to propose an algorithm that can effectively provide answers to the model. The problem is NP-hard. It follows that the proposed algorithm is a heuristic. Based on relaxation and decomposition techniques, the original problem is broken into a set of smaller and easier subproblems. The case studies show that it is important to incorporate the three characteristics into the international intermodal routing problem, and our proposed algorithm can effectively and efficiently solve the MMMFP with time windows and concave costs.",
"Abstract Over the years, an increasing interdependence of the world economy has led to the considerable growth of international trade. Due to the lengthy distribution channel, international trade is often characterized by intermodal shipment which moves products across national boundaries via more than one mode of transportation. Consequently, the intermodal choice is of vital importance to the success of international trade. The intermodal choice, however, has never been a simple matter for any distribution manager because it can be affected by the multitude of conflicting factors such as cost, on-time service, and risk. This article develops a chance-constrained goal programming model to aid the distribution manager in choosing the most effective intermodal mix that not only minimizes cost and risk, but also satisfies various on-time service requirements.",
"This paper studies a routing problem in a multimodal network with shipment consolidation options. A freight forwarder can use a mix of flexible-time and scheduled transportation services. Time windows are a prominent aspect of the problem. For instance, they are used to model opening hours of the terminals, as well as pickup and delivery time slots. The various features of the problem can be described as elements of a digraph and their integration leads to a holistic graph representation. This allows an origin- destination integer multi-commodity flow formulation with non-convex piecewise linear costs, time windows, and side constraints. Column generation algorithms are designed to compute lower bounds. These column generation algorithms are also embedded within heuristics aimed at finding feasible integer solutions. Computational results with real-life data are presented and show the efficacy of the proposed approach."
]
}
|
1203.1604
|
1574564893
|
The vehicle routing and scheduling problem has been studied with much interest within the last four decades. In this paper, some of the existing literature dealing with routing and scheduling problems with environmental issues is reviewed, and a description is provided of the problems that have been investigated and how they are treated using combinatorial optimization tools.
|
Although the major target of RHM is the minimization of risk, there is no universally accepted definition of risk (for a survey on risk assessment, see @cite_89 ). The risk caused by hazmat transportation depends on many factors, the most important of which are: the risk categories (explosion, toxicity, radioactivity, etc.), the transportation mode, the affected agents (population, territorial infrastructures and natural elements), the meteorological conditions and the temporal factor. It is pointed out in @cite_48 that the evaluation of risk in hazmat transportation generally consists of the evaluation of the probability of an undesirable event, the exposure level of the population and the environment, and the degree of the consequences (e.g., deaths, injured people, damages). In practice, these probabilities are difficult to obtain due to the lack of data, and the analysis is generally reduced to considering the risk as the expected damage or the population exposure.
|
{
"cite_N": [
"@cite_48",
"@cite_89"
],
"mid": [
"2117638964",
"2046225382"
],
"abstract": [
"We survey research on hazardous materials transportation in the areas of risk analysis, routing scheduling and facility location. Our focus is primarily on work done since 1980, and on research which is methodological rather than empirical. We also limit our focus to transport by land-based vehicles (truck and rail), excluding pipeline, air and maritime movements. The review traces the evolution of models from single-criterion optimizations to multiobjective analyses, and highlights the emerging direction of dealing explicitly with distributions of outcomes, rather than simply optimizing expected values. We also indicate examples of work which integrate risk analysis with routing, and routing with facility location. We conclude with a discussion of several aspects of hazardous materials transportation which offer important challenges for further research.",
"The transport of hazardous materials is an important strategic and tactical decision problem. Risks associated with this activity make transport planning difficult. Although most existing analytical approaches for hazardous materials transport account for risk, there is no agreement among researchers on how to model the associated risks. This paper provides an overview of the prevailing models, and addresses the question \"Does it matter how we quantify transport risk?\" Our empirical analysis on the U.S. road network suggests that different risk models usually select different \"optimal\" paths for a hazmat shipment between a given origin-destination pair. Furthermore, the optimal path for one model could perform very poorly under another model. This suggests that researchers and practitioners must pay considerable attention to the modeling of risks in hazardous materials transport."
]
}
|
1203.1604
|
1574564893
|
The vehicle routing and scheduling problem has been studied with much interest within the last four decades. In this paper, some of the existing literature dealing with routing and scheduling problems with environmental issues is reviewed, and a description is provided of the problems that have been investigated and how they are treated using combinatorial optimization tools.
|
As the risk is part of the objective function, it is quantified with a path evaluation function @cite_89 . This function is not additive, since the probability of a release accident on a link depends on whether a release has already occurred on the previously traveled links of the path. This important property leads to non-linear integer formulations which cannot be optimized with a classical shortest path algorithm. Generally, approximations that assume additive functions (usually by considering independent release-accident probabilities on links) are needed to obtain tractable models.
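The gap between the exact, non-additive path risk and its additive approximation can be made concrete with a minimal sketch (all probabilities and damage values below are illustrative assumptions, assuming independent per-link release probabilities and an expected-damage risk model in which a release terminates the trip):

```python
# Exact vs. additive path risk for a hazmat route.
# p[i]: release-accident probability on link i (assumed independent)
# c[i]: damage incurred if a release happens on link i

def exact_path_risk(p, c):
    """Expected damage: a release on link i requires links 0..i-1 to be safe."""
    risk, safe = 0.0, 1.0
    for pi, ci in zip(p, c):
        risk += safe * pi * ci
        safe *= 1.0 - pi          # probability the trip reaches the next link
    return risk

def additive_path_risk(p, c):
    """Common link-additive approximation: sum of per-link expected damages."""
    return sum(pi * ci for pi, ci in zip(p, c))

p = [1e-4, 5e-4, 2e-4]           # illustrative release probabilities
c = [1000.0, 250.0, 4000.0]      # illustrative damages

print(exact_path_risk(p, c), additive_path_risk(p, c))
```

Because accident probabilities are tiny, the two values agree closely, which is why the additive model yields tractable shortest-path formulations in practice.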
|
{
"cite_N": [
"@cite_89"
],
"mid": [
"2046225382"
],
"abstract": [
"The transport of hazardous materials is an important strategic and tactical decision problem. Risks associated with this activity make transport planning difficult. Although most existing analytical approaches for hazardous materials transport account for risk, there is no agreement among researchers on how to model the associated risks. This paper provides an overview of the prevailing models, and addresses the question \"Does it matter how we quantify transport risk?\" Our empirical analysis on the U.S. road network suggests that different risk models usually select different \"optimal\" paths for a hazmat shipment between a given origin-destination pair. Furthermore, the optimal path for one model could perform very poorly under another model. This suggests that researchers and practitioners must pay considerable attention to the modeling of risks in hazardous materials transport."
]
}
|
1203.1378
|
2165381624
|
Tracking Twitter for public health has shown great potential. However, most recent work has been focused on correlating Twitter messages to influenza rates, a disease that exhibits a marked seasonal pattern. In the presence of sudden outbreaks, how can social media streams be used to strengthen surveillance capacity? In May 2011, Germany reported an outbreak of Enterohemorrhagic Escherichia coli (EHEC). It was one of the largest described outbreaks of EHEC HUS worldwide and the largest in Germany. In this work, we study the crowd's behavior in Twitter during the outbreak. In particular, we report how tracking Twitter helped to detect key user messages that triggered signal detection alarms before MedISys and other well established early warning systems. We also introduce a personalized learning to rank approach that exploits the relationships discovered by: (i) latent semantic topics computed using Latent Dirichlet Allocation (LDA), and (ii) observing the social tagging behavior in Twitter, to rank tweets for epidemic intelligence. Our results provide the grounds for new public health research based on social media.
|
In order to detect public health events, supervised @cite_25 , unsupervised @cite_16 , and rule-based approaches have been used to extract such events from social media and news. For example, PULS @cite_10 identifies the disease, time, location and cases of a news-reported event. It is integrated into MedISys, which automatically collects news articles concerning public health in various languages and aggregates the extracted facts according to pre-defined categories, in a multi-lingual manner.
|
{
"cite_N": [
"@cite_16",
"@cite_10",
"@cite_25"
],
"mid": [
"117785935",
"",
"1982507619"
],
"abstract": [
"Content analysis and clustering of natural language documents becomes crucial in various domains, even in public health. Recent pandemics such as Swine Flu have caused concern for public health officials. Given the ever increasing pace at which infectious diseases can spread globally, officials must be prepared to react sooner and with greater epidemic intelligence gathering capabilities. Information should be gathered from a broader range of sources, including the Web which in turn requires more robust processing capabilities. To address this limitation, in this paper, we propose a new approach to detect public health events in an unsupervised manner. We address the problems associated with adapting an unsupervised learner to the medical domain and in doing so, propose an approach which combines aspects from different feature-based event detection methods. We evaluate our approach with a real world dataset with respect to the quality of article clusters. Our results show that we are able to achieve a precision of 62 and a recall of 75 evaluated using manually annotated, real-world data.",
"",
"Event-Based Epidemic Intelligence (e-EI) has arisen as a body of work which relies upon different forms of pattern recognition in order to detect the disease reporting events from unstructured text that is present on the Web. Current supervised approaches to e-EI suffer both from high initial and high maintenance costs, due to the need to manually label examples to train and update a classifier for detecting disease reporting events in dynamic information sources, such as blogs. In this paper, we propose a new method for the supervised detection of disease reporting events. We tackle the burden of manually labelling data and address the problems associated with building a supervised learner to classify frequently evolving, and variable blog content. We automatically classify outbreak reports to train a supervised learner, and the knowledge acquired from the learning process is then transferred to the task of classifying blogs. Our experiments show that with the automatic classification of training data, and the transfer approach, we achieve an overall precision of 92 and an accuracy of 78.20 ."
]
}
|
1203.1378
|
2165381624
|
Tracking Twitter for public health has shown great potential. However, most recent work has been focused on correlating Twitter messages to influenza rates, a disease that exhibits a marked seasonal pattern. In the presence of sudden outbreaks, how can social media streams be used to strengthen surveillance capacity? In May 2011, Germany reported an outbreak of Enterohemorrhagic Escherichia coli (EHEC). It was one of the largest described outbreaks of EHEC HUS worldwide and the largest in Germany. In this work, we study the crowd's behavior in Twitter during the outbreak. In particular, we report how tracking Twitter helped to detect key user messages that triggered signal detection alarms before MedISys and other well established early warning systems. We also introduce a personalized learning to rank approach that exploits the relationships discovered by: (i) latent semantic topics computed using Latent Dirichlet Allocation (LDA), and (ii) observing the social tagging behavior in Twitter, to rank tweets for epidemic intelligence. Our results provide the grounds for new public health research based on social media.
|
Monitoring analyses have also been carried out on Twitter. The work in @cite_6 focused on the use of the terms "H1N1" and "swine flu" during the 2009 H1N1 outbreak. The authors showed that the concise and timely nature of tweets can provide health officials with a means to become aware of, and respond to, concerns raised by the public.
|
{
"cite_N": [
"@cite_6"
],
"mid": [
"2022783018"
],
"abstract": [
"Background Surveys are popular methods to measure public perceptions in emergencies but can be costly and time consuming. We suggest and evaluate a complementary “infoveillance” approach using Twitter during the 2009 H1N1 pandemic. Our study aimed to: 1) monitor the use of the terms “H1N1” versus “swine flu” over time; 2) conduct a content analysis of “tweets”; and 3) validate Twitter as a real-time content, sentiment, and public attention trend-tracking tool. Methodology Principal Findings Between May 1 and December 31, 2009, we archived over 2 million Twitter posts containing keywords “swine flu,” “swineflu,” and or “H1N1.” using Infovigil, an infoveillance system. Tweets using “H1N1” increased from 8.8 to 40.5 (R2 = .788; p<.001), indicating a gradual adoption of World Health Organization-recommended terminology. 5,395 tweets were randomly selected from 9 days, 4 weeks apart and coded using a tri-axial coding scheme. To track tweet content and to test the feasibility of automated coding, we created database queries for keywords and correlated these results with manual coding. Content analysis indicated resource-related posts were most commonly shared (52.6 ). 4.5 of cases were identified as misinformation. News websites were the most popular sources (23.2 ), while government and health agencies were linked only 1.5 of the time. 7 10 automated queries correlated with manual coding. Several Twitter activity peaks coincided with major news stories. Our results correlated well with H1N1 incidence data. Conclusions This study illustrates the potential of using social media to conduct “infodemiology” studies for public health. 2009 H1N1-related tweets were primarily used to disseminate information from credible sources, but were also a source of opinions and experiences. Tweets can be used for real-time content analysis and knowledge translation research, allowing health authorities to respond to public concerns."
]
}
|
1203.1378
|
2165381624
|
Tracking Twitter for public health has shown great potential. However, most recent work has been focused on correlating Twitter messages to influenza rates, a disease that exhibits a marked seasonal pattern. In the presence of sudden outbreaks, how can social media streams be used to strengthen surveillance capacity? In May 2011, Germany reported an outbreak of Enterohemorrhagic Escherichia coli (EHEC). It was one of the largest described outbreaks of EHEC HUS worldwide and the largest in Germany. In this work, we study the crowd's behavior in Twitter during the outbreak. In particular, we report how tracking Twitter helped to detect key user messages that triggered signal detection alarms before MedISys and other well established early warning systems. We also introduce a personalized learning to rank approach that exploits the relationships discovered by: (i) latent semantic topics computed using Latent Dirichlet Allocation (LDA), and (ii) observing the social tagging behavior in Twitter, to rank tweets for epidemic intelligence. Our results provide the grounds for new public health research based on social media.
|
Our work is similar to that of @cite_24 , where media reports on the 2011 EHEC outbreak in Germany are tracked. Although no early warning was possible in their work, they identified key aspects of the developing outbreak stories. In contrast to their work, our approach exploits social media data, and we show that such a system can help provide early warnings on public health threats.
|
{
"cite_N": [
"@cite_24"
],
"mid": [
"1450628568"
],
"abstract": [
"In May 2011, an outbreak of enterohemorrhagic Escherichia coli (EHEC) occurred in northern Germany. The Shiga toxin-producing strain O104:H4 infected several thousand people, frequently leading to haemolytic uremic syndrome (HUS) and gastroenteritis (GI). First reports about the outbreak appeared in the German media on Saturday 21st of May 2011; the media attention rose to high levels in the following two weeks, with up to 2000 articles categorized per day by the automatic threat detection system MedISys (Medical Information System). In this article, we illustrate how MedISys detected the sudden increase in reporting on E. coli on 21st of May and how automatic analysis of the reporting provided epidemic intelligence information to follow the event. Categorization, filtering and clustering allowed identifying different aspects within the unfolding news event, analyzing general media and official sites in parallel."
]
}
|
1203.1378
|
2165381624
|
Tracking Twitter for public health has shown great potential. However, most recent work has been focused on correlating Twitter messages to influenza rates, a disease that exhibits a marked seasonal pattern. In the presence of sudden outbreaks, how can social media streams be used to strengthen surveillance capacity? In May 2011, Germany reported an outbreak of Enterohemorrhagic Escherichia coli (EHEC). It was one of the largest described outbreaks of EHEC HUS worldwide and the largest in Germany. In this work, we study the crowd's behavior in Twitter during the outbreak. In particular, we report how tracking Twitter helped to detect key user messages that triggered signal detection alarms before MedISys and other well established early warning systems. We also introduce a personalized learning to rank approach that exploits the relationships discovered by: (i) latent semantic topics computed using Latent Dirichlet Allocation (LDA), and (ii) observing the social tagging behavior in Twitter, to rank tweets for epidemic intelligence. Our results provide the grounds for new public health research based on social media.
|
Although some works exist that address the task of ranking tweets, little effort has been devoted to personalized ranking of tweets in the domain of epidemic intelligence. For example, the authors of @cite_3 rank individual generic tweets according to their relevance to a given query. The features used include content relevance features, Twitter-specific features and account authority features. In contrast, ours is a personalized learning to rank approach for epidemic intelligence that exploits an expanded user context by means of latent topics and social hash-tagging behavior.
|
{
"cite_N": [
"@cite_3"
],
"mid": [
"1561650654"
],
"abstract": [
"Twitter, as one of the most popular micro-blogging services, provides large quantities of fresh information including real-time news, comments, conversation, pointless babble and advertisements. Twitter presents tweets in chronological order. Recently, Twitter introduced a new ranking strategy that considers popularity of tweets in terms of number of retweets. This ranking method, however, has not taken into account content relevance or the twitter account. Therefore a large amount of pointless tweets inevitably flood the relevant tweets. This paper proposes a new ranking strategy which uses not only the content relevance of a tweet, but also the account authority and tweet-specific features such as whether a URL link is included in the tweet. We employ learning to rank algorithms to determine the best set of features with a series of experiments. It is demonstrated that whether a tweet contains URL or not, length of tweet and account authority are the best conjunction."
]
}
|
1203.0038
|
2104791085
|
In this letter, we borrow from the inference techniques developed for unbounded state-cardinality (nonparametric) variants of the HMM and use them to develop a tuning-parameter free, black-box inference procedure for explicit-state-duration hidden Markov models (EDHMM). EDHMMs are HMMs that have latent states consisting of both discrete state-indicator and discrete state-duration random variables. In contrast to the implicit geometric state duration distribution possessed by the standard HMM, EDHMMs allow the direct parameterization and estimation of per-state duration distributions. As most duration distributions are defined over the positive integers, truncation or other approximations are usually required to perform EDHMM inference.
|
The need to accommodate explicit state duration distributions in HMMs has long been recognised. Rabiner @cite_13 details the basic approach, which expands the state space to include dwell time before applying a slightly modified Baum-Welch algorithm. This approach specifies a maximum state duration, limiting practical application to cases with short sequences and dwell times. Generalised under the name "segmental hidden Markov models", it includes more general transitions than those Rabiner considered, allowing the next state and duration to be conditioned on the previous state and duration @cite_0 . Efficient approximate inference procedures were developed in the context of speech recognition @cite_7 and speech synthesis @cite_4 , and evolved into symmetric approaches suitable for practical implementation @cite_8 . Recently, a "sticky" variant of the hierarchical Dirichlet process HMM (HDP-HMM) has been developed @cite_11 . The HDP-HMM has countable state-cardinality @cite_1 , allowing estimation of the number of states in the HMM; the sticky aspect addresses long dwell times by introducing a parameter in the prior that favours self-transition.
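The contrast with the standard HMM's geometric dwell times can be seen in a minimal generative sketch (the states, truncated duration pmfs and Gaussian emission parameters below are illustrative assumptions, not values from any cited paper): on entering a state, an explicit dwell time is drawn from that state's duration distribution before the next transition.

```python
# Minimal generative sketch of an explicit-duration HMM (EDHMM).
# With durations made explicit, self-transitions are disallowed; a standard
# HMM would instead realize dwell times implicitly via geometric self-loops.
import random

states = [0, 1]
trans = {0: [0.0, 1.0], 1: [1.0, 0.0]}       # no self-transitions
dur_support = [1, 2, 3, 4, 5]                # truncated duration support
dur_probs = {0: [0.1, 0.2, 0.4, 0.2, 0.1],   # per-state duration pmfs
             1: [0.4, 0.3, 0.2, 0.08, 0.02]}
emit_mean = {0: -1.0, 1: 1.0}                # Gaussian emission means

def sample_edhmm(T, seed=0):
    rng = random.Random(seed)
    z, obs = 0, []
    while len(obs) < T:
        d = rng.choices(dur_support, weights=dur_probs[z])[0]  # explicit dwell
        for _ in range(min(d, T - len(obs))):
            obs.append(rng.gauss(emit_mean[z], 0.5))           # emit for d steps
        z = rng.choices(states, weights=trans[z])[0]           # then leave state
    return obs

print(len(sample_edhmm(20)))
```

Inference must then reason jointly over the state-indicator and state-duration variables, which is what makes truncation-free procedures like the one in the abstract above non-trivial.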
|
{
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_8",
"@cite_1",
"@cite_0",
"@cite_13",
"@cite_11"
],
"mid": [
"2106792148",
"2083393647",
"2119357806",
"2158266063",
"2270719270",
"2125838338",
"2135537007"
],
"abstract": [
"A statistical speech synthesis system based on the hidden Markov model (HMM) was recently proposed. In this system, spectrum, excitation, and duration of speech are modeled simultaneously by context-dependent HMMs, and speech parameter vector sequences are generated from the HMMs themselves. This system defines a speech synthesis problem in a generative model framework and solves it based on the maximum likelihood (ML) criterion. However, there is an inconsistency: although state duration probability density functions (PDFs) are explicitly used in the synthesis part of the system, they have not been incorporated into its training part. This inconsistency can make the synthesized speech sound less natural. In this paper, we propose a statistical speech synthesis system based on a hidden semi-Markov model (HSMM), which can be viewed as an HMM with explicit state duration PDFs. The use of HSMMs can solve the above inconsistency because we can incorporate the state duration PDFs explicitly into both the synthesis and the training parts of the system. Subjective listening test results show that use of HSMMs improves the reported naturalness of synthesized speech.",
"Many alternative models have been proposed to address some of the shortcomings of the hidden Markov model (HMM), which is currently the most popular approach to speech recognition. In particular, a variety of models that could be broadly classified as segment models have been described for representing a variable-length sequence of observation vectors in speech recognition applications. Since there are many aspects in common between these approaches, including the general recognition and training problems, it is useful to consider them in a unified framework. The paper describes a general stochastic model that encompasses most of the models proposed in the literature, pointing out similarities of the models in terms of correlation and parameter tying assumptions, and drawing analogies between segment models and HMMs. In addition, we summarize experimental results assessing different modeling assumptions and point out remaining open questions.",
"This correspondence addresses several practical problems in implementing a forward-backward (FB) algorithm for an explicit-duration hidden Markov model. First, the FB variables are redefined in terms of posterior probabilities to avoid possible underflows that may occur in practice. Then, a forward recursion is used that is symmetric to the backward one and can reduce the number of logic gates required to implement on a field-programmable gate-array (FPGA) chip.",
"We consider problems involving groups of data where each observation within a group is a draw from a mixture model and where it is desirable to share mixture components between groups. We assume that the number of mixture components is unknown a priori and is to be inferred from the data. In this setting it is natural to consider sets of Dirichlet processes, one for each group, where the well-known clustering property of the Dirichlet process provides a nonparametric prior for the number of mixture components within each group. Given our desire to tie the mixture models in the various groups, we consider a hierarchical model, specifically one in which the base measure for the child Dirichlet processes is itself distributed according to a Dirichlet process. Such a base measure being discrete, the child Dirichlet processes necessarily share atoms. Thus, as desired, the mixture models in the different groups necessarily share mixture components. We discuss representations of hierarchical Dirichlet processes ...",
"",
"This tutorial provides an overview of the basic theory of hidden Markov models (HMMs) as originated by L.E. Baum and T. Petrie (1966) and gives practical details on methods of implementation of the theory along with a description of selected applications of the theory to distinct problems in speech recognition. Results from a number of original sources are combined to provide a single source of acquiring the background required to pursue further this area of research. The author first reviews the theory of discrete Markov chains and shows how the concept of hidden states, where the observation is a probabilistic function of the state, can be used effectively. The theory is illustrated with two simple examples, namely coin-tossing, and the classic balls-in-urns system. Three fundamental problems of HMMs are noted and several practical techniques for solving these problems are given. The various types of HMMs that have been studied, including ergodic as well as left-right models, are described.",
"The hierarchical Dirichlet process hidden Markov model (HDP-HMM) is a flexible, nonparametric model which allows state spaces of unknown size to be learned from data. We demonstrate some limitations of the original HDP-HMM formulation (, 2006), and propose a sticky extension which allows more robust learning of smoothly varying dynamics. Using DP mixtures, this formulation also allows learning of more complex, multimodal emission distributions. We further develop a sampling algorithm that employs a truncated approximation of the DP to jointly resample the full state sequence, greatly improving mixing rates. Via extensive experiments with synthetic data and the NIST speaker diarization database, we demonstrate the advantages of our sticky extension, and the utility of the HDP-HMM in real-world applications."
]
}
|
1203.0695
|
2950126449
|
We examine the benefits of user cooperation under compute-and-forward. Much like in network coding, receivers in a compute-and-forward network recover finite-field linear combinations of transmitters' messages. Recovery is enabled by linear codes: transmitters map messages to a linear codebook, and receivers attempt to decode the incoming superposition of signals to an integer combination of codewords. However, the achievable computation rates are low if channel gains do not correspond to a suitable linear combination. In response to this challenge, we propose a cooperative approach to compute-and-forward. We devise a lattice-coding approach to block Markov encoding with which we construct a decode-and-forward style computation strategy. Transmitters broadcast lattice codewords, decode each other's messages, and then cooperatively transmit resolution information to aid receivers in decoding the integer combinations. Using our strategy, we show that cooperation offers a significant improvement both in the achievable computation rate and in the diversity-multiplexing tradeoff.
|
Compute-and-forward can be viewed as one of several wireless instantiations of network coding. Network coding was introduced in @cite_55 , where it was shown that network coding achieves the multicast capacity of wireline networks. It was later shown that (random) linear network codes are sufficient for multicast @cite_51 @cite_42 @cite_45 , and although linear codes are provably insufficient for general wireline networks @cite_14 they remain popular due to their simplicity and effectiveness. Network coding has been applied to wireless networks by several means. Two information-theoretic techniques are the quantize-map-and-forward of @cite_20 and the ``noisy'' network coding of @cite_27 , in which relays compress and re-encode the incoming superposition of signals. These approaches generalize the discrete-valued, noiseless combinations of wireline network coding to continuous-valued, noisy combinations over wireless links. For multicast networks, they come to within a constant gap of capacity. Finally, lattice techniques similar to compute-and-forward have been used for the two-way and multi-way relay channels, again achieving rates within a constant gap of capacity @cite_48 @cite_26 @cite_29 @cite_39 .
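As a toy illustration of the (random) linear network coding idea referenced above: intermediate nodes forward GF(2) linear combinations of message packets, and any receiver holding a full-rank coefficient matrix recovers the originals by Gaussian elimination over GF(2). The packets and the hand-picked invertible coefficient matrix below are hypothetical, not taken from the cited schemes.

```python
import numpy as np


def gf2_solve(C, Y):
    """Solve C X = Y over GF(2) by Gaussian elimination (C must be invertible)."""
    C, Y = C.copy() % 2, Y.copy() % 2
    k = C.shape[0]
    for col in range(k):
        pivot = next(r for r in range(col, k) if C[r, col])  # find a 1 in this column
        C[[col, pivot]], Y[[col, pivot]] = C[[pivot, col]], Y[[pivot, col]]
        for r in range(k):
            if r != col and C[r, col]:  # eliminate the column everywhere else
                C[r] ^= C[col]
                Y[r] ^= Y[col]
    return Y


# Three 8-bit message packets; the receiver observes three GF(2) combinations
# taken with a (hand-picked, invertible over GF(2)) coefficient matrix C.
M = np.array([[1, 0, 1, 1, 0, 0, 1, 0],
              [0, 1, 1, 0, 1, 0, 0, 1],
              [1, 1, 0, 0, 0, 1, 1, 1]])
C = np.array([[1, 1, 0],
              [0, 1, 0],
              [1, 1, 1]])
coded = (C @ M) % 2            # what the receiver collects
decoded = gf2_solve(C, coded)  # elimination recovers the original packets
```

In random linear network coding the rows of `C` would be drawn at random, with full rank occurring with high probability over a large enough field; a fixed invertible matrix keeps the sketch deterministic.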
|
{
"cite_N": [
"@cite_14",
"@cite_26",
"@cite_48",
"@cite_55",
"@cite_42",
"@cite_29",
"@cite_39",
"@cite_27",
"@cite_45",
"@cite_51",
"@cite_20"
],
"mid": [
"2103320637",
"2057929374",
"2952359005",
"",
"2138928022",
"",
"2092253947",
"",
"2048235391",
"2106403318",
""
],
"abstract": [
"It is known that every solvable multicast network has a scalar linear solution over a sufficiently large finite-field alphabet. It is also known that this result does not generalize to arbitrary networks. There are several examples in the literature of solvable networks with no scalar linear solution over any finite field. However, each example has a linear solution for some vector dimension greater than one. It has been conjectured that every solvable network has a linear solution over some finite-field alphabet and some vector dimension. We provide a counterexample to this conjecture. We also show that if a network has no linear solution over any finite field, then it has no linear solution over any finite commutative ring with identity. Our counterexample network has no linear solution even in the more general algebraic context of modules, which includes as special cases all finite rings and Abelian groups. Furthermore, we show that the network coding capacity of this network is strictly greater than the maximum linear coding capacity over any finite field (exactly 10 greater), so the network is not even asymptotically linearly solvable. It follows that, even for more general versions of linearity such as convolutional coding, filter-bank coding, or linear time sharing, the network has no linear solution.",
"In this paper, a Gaussian two-way relay channel, where two source nodes exchange messages with each other through a relay, is considered. We assume that all nodes operate in full-duplex mode and there is no direct channel between the source nodes. We propose an achievable scheme composed of nested lattice codes for the uplink and structured binning for the downlink. Unlike conventional nested lattice codes, our codes utilize two different shaping lattices for source nodes based on a three-stage lattice partition chain, which is a key ingredient for producing the best gap-to-capacity results to date. Specifically, for all channel parameters, the achievable rate region of our scheme is within 1/2 bit from the capacity region for each user and its sum rate is within log(3/2) bit from the sum capacity.",
"We consider the problem of two transmitters wishing to exchange information through a relay in the middle. The channels between the transmitters and the relay are assumed to be synchronized, average power constrained additive white Gaussian noise channels with a real input with signal-to-noise ratio (SNR) of snr. An upper bound on the capacity is (1/2) log(1 + snr) bits per transmitter per use of the medium-access phase and broadcast phase of the bi-directional relay channel. We show that using lattice codes and lattice decoding, we can obtain a rate of (1/2) log(0.5 + snr) bits per transmitter, which is essentially optimal at high SNRs. The main idea is to decode the sum of the codewords modulo a lattice at the relay followed by a broadcast phase which performs Slepian-Wolf coding with structured codes. For asymptotically low SNR's, jointly decoding the two transmissions at the relay (MAC channel) is shown to be optimal. We also show that if the two transmitters use identical lattices with minimum angle decoding, we can achieve the same rate of (1/2) log(0.5 + snr). The proposed scheme can be thought of as a joint physical layer, network layer code which outperforms other recently proposed analog network coding schemes.",
"",
"We take a new look at the issue of network capacity. It is shown that network coding is an essential ingredient in achieving the capacity of a network. Building on recent work by (see Proc. 2001 IEEE Int. Symp. Information Theory, p.102), who examined the network capacity of multicast networks, we extend the network coding framework to arbitrary networks and robust networking. For networks which are restricted to using linear network codes, we find necessary and sufficient conditions for the feasibility of any given set of connections over a given network. We also consider the problem of network recovery for nonergodic link failures. For the multicast setup we prove that there exist coding strategies that provide maximally robust networks and that do not require adaptation of the network interior to the failure pattern in question. The results are derived for both delay-free networks and networks with delays.",
"",
"The L-user additive white Gaussian noise multi-way relay channel is considered, where multiple users exchange information through a single relay at a common rate. Existing coding strategies, i.e., complete-decode-forward and compress-forward are shown to be bounded away from the cut-set upper bound at high signal-to-noise ratios (SNR). It is known that the gap between the compress-forward rate and the capacity upper bound is a constant at high SNR, and that between the complete-decode-forward rate and the upper bound increases with SNR at high SNR. In this paper, a functional-decode-forward coding strategy is proposed. It is shown that for L ≥ 3, complete-decode-forward achieves the capacity when SNR ≤ 0 dB, and functional-decode-forward achieves the capacity when SNR ≥ 0 dB. For L = 2, functional-decode-forward achieves the capacity asymptotically as SNR increases.",
"",
"We present a distributed random linear network coding approach for transmission and compression of information in general multisource multicast networks. Network nodes independently and randomly select linear mappings from inputs onto output links over some field. We show that this achieves capacity with probability exponentially approaching 1 with the code length. We also demonstrate that random linear coding performs compression when necessary in a network, generalizing error exponents for linear Slepian-Wolf coding in a natural way. Benefits of this approach are decentralized operation and robustness to network changes or link failures. We show that this approach can take advantage of redundant network capacity for improved success probability and robustness. We illustrate some potential advantages of random linear network coding over routing in two examples of practical scenarios: distributed network operation and networks with dynamically varying connections. Our derivation of these results also yields a new bound on required field size for centralized network coding on general multicast networks",
"Consider a communication network in which certain source nodes multicast information to other nodes on the network in the multihop fashion where every node can pass on any of its received data to others. We are interested in how fast each node can receive the complete information, or equivalently, what the information rate arriving at each node is. Allowing a node to encode its received data before passing it on, the question involves optimization of the multicast mechanisms at the nodes. Among the simplest coding schemes is linear coding, which regards a block of data as a vector over a certain base field and allows a node to apply a linear transformation to a vector before passing it on. We formulate this multicast problem and prove that linear coding suffices to achieve the optimum, which is the max-flow from the source to each receiving node.",
""
]
}
|
1203.0695
|
2950126449
|
We examine the benefits of user cooperation under compute-and-forward. Much like in network coding, receivers in a compute-and-forward network recover finite-field linear combinations of transmitters' messages. Recovery is enabled by linear codes: transmitters map messages to a linear codebook, and receivers attempt to decode the incoming superposition of signals to an integer combination of codewords. However, the achievable computation rates are low if channel gains do not correspond to a suitable linear combination. In response to this challenge, we propose a cooperative approach to compute-and-forward. We devise a lattice-coding approach to block Markov encoding with which we construct a decode-and-forward style computation strategy. Transmitters broadcast lattice codewords, decode each other's messages, and then cooperatively transmit resolution information to aid receivers in decoding the integer combinations. Using our strategy, we show that cooperation offers a significant improvement both in the achievable computation rate and in the diversity-multiplexing tradeoff.
|
Lattice codes play a fundamental role in compute-and-forward. Early works on lattice codes @cite_7 @cite_21 @cite_36 showed that they are sufficient to achieve capacity for the point-to-point AWGN channel. The performance of lattice codes under lattice decoding ---in which the receiver quantizes the incoming signal to the nearest lattice point---was studied in @cite_15 , and it was shown in @cite_10 that lattice decoding achieves capacity. In addition to compute-and-forward, lattice codes have seen use in a variety of information-theoretic problems, including source coding @cite_4 @cite_37 @cite_32 , physical-layer security @cite_23 @cite_57 @cite_41 , and relay networks @cite_25 @cite_54 @cite_8 @cite_28 .
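The lattice-decoding step described above can be sketched numerically. The snippet below is purely illustrative (scalar integer lattice, integer channel gains, no dither, shaping, or nesting details): the receiver rounds to the nearest fine-lattice point and reduces modulo the coarse lattice, recovering an integer combination of the codewords.

```python
import numpy as np

rng = np.random.default_rng(1)
q = 16                      # coarse lattice qZ nested in the fine lattice Z
x1 = rng.integers(0, q, 8)  # transmitter 1's codeword (fine-lattice points mod q)
x2 = rng.integers(0, q, 8)  # transmitter 2's codeword
# Channel gains happen to be the integers (1, 2), plus small Gaussian noise.
y = 1.0 * x1 + 2.0 * x2 + 0.05 * rng.standard_normal(8)
# Lattice decoding: quantize to the nearest fine-lattice point,
# then reduce modulo the coarse lattice.
combo = np.round(y).astype(int) % q
```

When the gains are not close to integers, the rounding step incurs self-noise, which is precisely the rate penalty that the cooperative scheme in this paper is designed to mitigate.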
|
{
"cite_N": [
"@cite_37",
"@cite_4",
"@cite_7",
"@cite_8",
"@cite_36",
"@cite_41",
"@cite_54",
"@cite_21",
"@cite_28",
"@cite_32",
"@cite_57",
"@cite_23",
"@cite_15",
"@cite_10",
"@cite_25"
],
"mid": [
"2109053700",
"2111992817",
"2156360762",
"2101872481",
"2134932540",
"2078825211",
"2953081370",
"1970046173",
"",
"2124317079",
"2952191654",
"1640197322",
"2151450116",
"",
"1505858209"
],
"abstract": [
"Consider a pair of correlated Gaussian sources (X_1, X_2). Two separate encoders observe the two components and communicate compressed versions of their observations to a common decoder. The decoder is interested in reconstructing a linear combination of X_1 and X_2 to within a mean-square distortion of D. We obtain an inner bound to the optimal rate-distortion region for this problem. A portion of this inner bound is achieved by a scheme that reconstructs the linear function directly rather than reconstructing the individual components X_1 and X_2 first. This results in a better rate region for certain parameter values. Our coding scheme relies on lattice coding techniques in contrast to more prevalent random coding arguments used to demonstrate achievable rate regions in information theory. We then consider the case of linear reconstruction of K sources and provide an inner bound to the optimal rate-distortion region. Some parts of the inner bound are achieved using the following coding structure: lattice vector quantization followed by \"correlated\" lattice-structured binning.",
"Network information theory promises high gains over simple point-to-point communication techniques, at the cost of higher complexity. However, lack of structured coding schemes limited the practical application of these concepts so far. One of the basic elements of a network code is the binning scheme. Wyner (1974, 1978) and other researchers proposed various forms of coset codes for efficient binning, yet these schemes were applicable only for lossless source (or noiseless channel) network coding. To extend the algebraic binning approach to lossy source (or noisy channel) network coding, previous work proposed the idea of nested codes, or more specifically, nested parity-check codes for the binary case and nested lattices in the continuous case. These ideas connect network information theory with the rich areas of linear codes and lattice codes, and have strong potential for practical applications. We review these developments and explore their tight relation to concepts such as combined shaping and precoding, coding for memories with defects, and digital watermarking. We also propose a few novel applications adhering to a unified approach.",
"The techniques of the geometry of numbers, especially the Minkowski-Hlawka theorem, are used to modify Shannon's existence proof for optimal channel codes, so that the modified proof applies specifically to lattice codes. The resulting existence proof states that there exist lattice codes which satisfy Shannon's bound to within the factor 4, and hence match the reliability exponent and critical rate bounds which Shannon derived for optimal codes with unspecified structure. Therefore, it is demonstrated that optimal codes need not be random, but rather that some of them have structure, e.g. the structure of a lattice code.",
"It has been conjectured that lattice codes are good for (almost) everything. As an additional bit of evidence for this claim, we offer a few results showing the utility of lattice codes for the AWGN relay channel. We show that the decode-and-forward rates of the relay channel can be achieved using lattice encoding and decoding. We present an encoding decoding technique that uses a doubly-nested lattice code. Encoding is accomplished using a combination of superposition encoding and block Markov encoding, while decoding is accomplished using a strategy reminiscent of Cover and El Gamal's list decoding. Our technique can be extended to a wide variety of relay topologies, including the half-duplex relay channel and the cooperative multiple-access channel.",
"It is shown that lattice codes can achieve capacity on the additive white Gaussian noise channel. More precisely, for any rate R less than capacity and e>0, there exists a lattice code with rate no less than R and average error probability upper-bounded by e. These lattice codes include all points of the (translated) lattice within the spherical bounding region (not just the ones inside a thin spherical shell).",
"We propose the notion of secrecy gain as a code design criterion for wiretap lattice codes to be used over an additive white Gaussian noise channel. Our analysis relies on the error probabilites of both the legitimate user and the eavesdropper. We focus on geometrical properties of lattices, described by their theta series, to characterize good wiretap codes.",
"Recently, it has been shown that a quantize-map-and-forward scheme approximately achieves (within a constant number of bits) the Gaussian relay network capacity for arbitrary topologies. This was established using Gaussian codebooks for transmission and random mappings at the relays. In this paper, we show that the same approximation result can be established by using lattices for transmission and quantization along with structured mappings at the relays.",
"An error in de Buda's (see IEEE J. Select Areas Commun., vol. 7, no. 6, p. 893, 1989) proof on the asymptotic optimality of lattice channel codes is pointed out and corrected using a modification of de Buda's approach. Comments are given on the correct interpretation and the limitations of this result.",
"",
"We consider distributed compression of a pair of Gaussian sources in which the goal is to reproduce a linear function of the sources at the decoder. It has recently been noted that lattice codes can provide improved compression rates for this problem compared to conventional, unstructured codes. We show that by including an additional linear binning stage, the state-of-the-art lattice scheme can be improved, in some cases by an arbitrarily large factor. We then describe a lower bound on the optimal sum rate for the case in which the variance of the linear combination exceeds the variance of one of the sources. This lower bound shows that unstructured codes achieve within one bit of the optimal sum rate at any distortion level. We also describe an outer bound on the rate-distortion region that holds in general, which for the special case of communicating the difference of two positively correlated Gaussian sources shows that the unimproved lattice scheme is within one bit of the rate region at any distortion level.",
"This paper shows that structured transmission schemes are a good choice for secret communication over interference networks with an eavesdropper. Structured transmission is shown to exploit channel asymmetries and thus perform better than randomly generated codebooks for such channels. For a class of interference channels, we show that an equivocation sumrate that is within two bits of the maximum possible legitimate communication sum-rate is achievable using lattice codes.",
"Recent results have shown that structured codes can be used to construct good channel codes, source codes and physical layer network codes for Gaussian channels. For Gaussian channels with secrecy constraints, however, efforts to date rely on random codes. In this work, we advocate that structured codes are useful for providing secrecy, and show how to compute the secrecy rate when structured codes are used. In particular, we solve the problem of bounding equivocation rates with one important class of structured codes, i.e., nested lattice codes. Having established this result, we next demonstrate the use of structured codes for secrecy in two-user Gaussian channels. In particular, with structured codes, we prove that a positive secure degree of freedom is achievable for a large class of fully connected Gaussian channels as long as the channel is not degraded. By way of this, for these channels, we establish that structured codes outperform Gaussian random codes at high SNR. This class of channels include the two-user multiple access wiretap channel, the two-user interference channel with confidential messages and the two-user interference wiretap channel. A notable consequence of this result is that, unlike the case with Gaussian random codes, using structured codes for both transmission and cooperative jamming, it is possible to achieve an arbitrarily large secrecy rate given enough power.",
"General random coding theorems for lattices are derived from the Minkowski-Hlawka theorem and their close relation to standard averaging arguments for linear codes over finite fields is pointed out. A new version of the Minkowski-Hlawka theorem itself is obtained as the limit, for p → ∞, of a simple lemma for linear codes over GF(p) used with p-level amplitude modulation. The relation between the combinatorial packing of solid bodies and the information-theoretic \"soft packing\" with arbitrarily small, but positive, overlap is illuminated. The \"soft-packing\" results are new. When specialized to the additive white Gaussian noise channel, they reduce to (a version of) the de Buda-Poltyrev result that spherically shaped lattice codes and a decoder that is unaware of the shaping can achieve the rate (1/2) log_2(P/N).",
"",
"In this paper, we consider a class of single-source multicast relay networks. We assume that all outgoing channels of a node in the network to its neighbors are orthogonal while the incoming signals from its neighbors can interfere with each other. We first focus on Gaussian relay networks with interference and find an achievable rate using a lattice coding scheme. We show that the achievable rate of our scheme is within a constant bit gap from the information theoretic cut-set bound, where the constant depends only on the network topology, but not on the transmit power, noise variance, and channel gains. This is similar to a recent result by Avestimehr, Diggavi, and Tse, who showed an approximate capacity characterization for general Gaussian relay networks. However, our achievability uses a structured code instead of a random one. Using the idea used in the Gaussian case, we also consider a linear finite-field symmetric network with interference and characterize its capacity using a linear coding scheme."
]
}
|
1203.0695
|
2950126449
|
We examine the benefits of user cooperation under compute-and-forward. Much like in network coding, receivers in a compute-and-forward network recover finite-field linear combinations of transmitters' messages. Recovery is enabled by linear codes: transmitters map messages to a linear codebook, and receivers attempt to decode the incoming superposition of signals to an integer combination of codewords. However, the achievable computation rates are low if channel gains do not correspond to a suitable linear combination. In response to this challenge, we propose a cooperative approach to compute-and-forward. We devise a lattice-coding approach to block Markov encoding with which we construct a decode-and-forward style computation strategy. Transmitters broadcast lattice codewords, decode each other's messages, and then cooperatively transmit resolution information to aid receivers in decoding the integer combinations. Using our strategy, we show that cooperation offers a significant improvement both in the achievable computation rate and in the diversity-multiplexing tradeoff.
|
Finally, our approach relies heavily on the literature on user cooperation. Cooperation was first introduced with the relay channel in @cite_52 . In @cite_34 the relay channel is given a thorough treatment, and the most popular relaying strategies---now known as decode-and-forward and compress-and-forward---are presented. More recent work has focused on the diversity gains of cooperation @cite_49 @cite_43 @cite_53 @cite_40 @cite_9 @cite_12 , showing that cooperating transmitters can obtain diversity gains similar to those of multiple-antenna systems.
|
{
"cite_N": [
"@cite_53",
"@cite_9",
"@cite_52",
"@cite_43",
"@cite_40",
"@cite_49",
"@cite_34",
"@cite_12"
],
"mid": [
"2026898705",
"2099857870",
"",
"",
"",
"2148963518",
"2167447263",
"2006462346"
],
"abstract": [
"We develop and analyze space-time coded cooperative diversity protocols for combating multipath fading across multiple protocol layers in a wireless network. The protocols exploit spatial diversity available among a collection of distributed terminals that relay messages for one another in such a manner that the destination terminal can average the fading, even though it is unknown a priori which terminals will be involved. In particular, a source initiates transmission to its destination, and many relays potentially receive the transmission. Those terminals that can fully decode the transmission utilize a space-time code to cooperatively relay to the destination. We demonstrate that these protocols achieve full spatial diversity in the number of cooperating terminals, not just the number of decoding relays, and can be used effectively for higher spectral efficiencies than repetition-based schemes. We discuss issues related to space-time code design for these protocols, emphasizing codes that readily allow for appealing distributed versions.",
"We consider a general multiple-antenna network with multiple sources, multiple destinations, and multiple relays in terms of the diversity-multiplexing tradeoff (DMT). We examine several subcases of this most general problem taking into account the processing capability of the relays (half-duplex or full-duplex), and the network geometry (clustered or nonclustered). We first study the multiple-antenna relay channel with a full-duplex relay to understand the effect of increased degrees of freedom in the direct link. We find DMT upper bounds and investigate the achievable performance of decode-and-forward (DF), and compress-and-forward (CF) protocols. Our results suggest that while DF is DMT optimal when all terminals have one antenna each, it may not maintain its good performance when the degrees of freedom in the direct link are increased, whereas CF continues to perform optimally. We also study the multiple-antenna relay channel with a half-duplex relay. We show that the half-duplex DMT behavior can significantly be different from the full-duplex case. We find that CF is DMT optimal for half-duplex relaying as well, and is the first protocol known to achieve the half-duplex relay DMT. We next study the multiple-access relay channel (MARC) DMT. Finally, we investigate a system with a single source-destination pair and multiple relays, each node with a single antenna, and show that even under the ideal assumption of full-duplex relays and a clustered network, this virtual multiple-input multiple-output (MIMO) system can never fully mimic a real MIMO DMT. For cooperative systems with multiple sources and multiple destinations the same limitation remains in effect.",
"",
"",
"",
"Mobile users' data rate and quality of service are limited by the fact that, within the duration of any given call, they experience severe variations in signal attenuation, thereby necessitating the use of some type of diversity. In this two-part paper, we propose a new form of spatial diversity, in which diversity gains are achieved via the cooperation of mobile users. Part I describes the user cooperation strategy, while Part II (see ibid., p.1939-48) focuses on implementation issues and performance analysis. Results show that, even though the interuser channel is noisy, cooperation leads not only to an increase in capacity for both users but also to a more robust system, where users' achievable rates are less susceptible to channel variations.",
"A relay channel consists of an input x_1, a relay output y_1, a channel output y, and a relay sender x_2 (whose transmission is allowed to depend on the past symbols of y_1). The dependence of the received symbols upon the inputs is given by p(y,y_1|x_1,x_2). The channel is assumed to be memoryless. In this paper the following capacity theorems are proved. 1) If y is a degraded form of y_1, then C = max_{p(x_1,x_2)} min{ I(X_1,X_2;Y), I(X_1;Y_1|X_2) }. 2) If y_1 is a degraded form of y, then C = max_{p(x_1)} max_{x_2} I(X_1;Y|x_2). 3) If p(y,y_1|x_1,x_2) is an arbitrary relay channel with feedback from (y,y_1) to both x_1 and x_2, then C = max_{p(x_1,x_2)} min{ I(X_1,X_2;Y), I(X_1;Y,Y_1|X_2) }. 4) For a general relay channel, C <= max_{p(x_1,x_2)} min{ I(X_1,X_2;Y), I(X_1;Y,Y_1|X_2) }. Superposition block Markov encoding is used to show achievability of C, and converses are established. The capacities of the Gaussian relay channel and certain discrete relay channels are evaluated. Finally, an achievable lower bound to the capacity of the general relay channel is established.",
"As one of the key technologies for future wireless communication systems, user cooperation has a great potential in improving the system performance. In this paper, the diversity benefit of user cooperation is evaluated using diversity-multiplexing tradeoff (DMT) as the performance metric. Ways of exploiting the diversity benefit are devised. We investigate two typical channel models, namely, the multiple-access channel (MAC) and the interference channel (IFC). For the MAC, all the terminals are equipped with multiple antennas and the K(K ≥ 2) sources cooperate in a full-duplex mode. We determine the optimal DMT of the channel and propose a compressed-and-forward (CF) based cooperation scheme to achieve the optimal DMT. For the IFC, we consider a two-user channel with single-antenna terminals and full-duplex source cooperation. A DMT upper bound and two DMT lower bounds are derived. The upper bound is derived based on a channel capacity upper bound, and the lower bounds are derived using decode-and-forward (DF) based cooperation schemes. In most channel scenarios, the bounds are found to be tight. This implies that under certain channel scenarios, the optimal DMT of the channel can be achieved by DF-based cooperation schemes."
]
}
|
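The DMT discussions in the record above build on the Zheng-Tse optimal diversity-multiplexing tradeoff curve for an M x N point-to-point MIMO channel: the piecewise-linear function through the points (k, (M-k)(N-k)) for integer k. A small sketch of that standard curve (the function name is mine):

```python
def mimo_dmt(M, N, r):
    """Optimal DMT curve d*(r) for an M x N MIMO channel (Zheng-Tse):
    piecewise linear through the points (k, (M-k)(N-k)), 0 <= k <= min(M,N)."""
    if not 0 <= r <= min(M, N):
        raise ValueError("multiplexing gain r must lie in [0, min(M,N)]")
    k = int(r)
    if k == r:
        return (M - k) * (N - k)
    # linear interpolation between the integer corner points k and k+1
    d0 = (M - k) * (N - k)
    d1 = (M - k - 1) * (N - k - 1)
    return d0 + (r - k) * (d1 - d0)

assert mimo_dmt(2, 2, 0) == 4      # full diversity gain M*N
assert mimo_dmt(2, 2, 2) == 0      # full multiplexing, no diversity
assert mimo_dmt(1, 1, 0.5) == 0.5  # single antenna: d = 1 - r
```

The single-antenna case d(r) = 1 - r is the baseline against which the cooperative schemes in the record (DF, CF) are compared.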
1203.0353
|
2949136877
|
We consider the problem of conducting a survey with the goal of obtaining an unbiased estimator of some population statistic when individuals have unknown costs (drawn from a known prior) for participating in the survey. Individuals must be compensated for their participation and are strategic agents, and so the payment scheme must incentivize truthful behavior. We derive optimal truthful mechanisms for this problem for the two goals of minimizing the variance of the estimator given a fixed budget, and minimizing the expected cost of the survey given a fixed variance goal.
|
Recently, the problem of designing truthful mechanisms to estimate statistics from a population which explicitly experiences costs for privacy loss was introduced by Ghosh and Roth @cite_4 . Subsequently (and concurrently with this work), Ligett and Roth @cite_0 extend this work to sequences of take-it-or-leave-it offers. Although it has similar goals, this line of work differs from the current paper in that it measures cost using the formalism of differential privacy, and more importantly, takes a worst-case view and does not assume a known prior over agent costs. In contrast, here we use a prior over agent costs to derive mechanisms. This paper can be viewed as answering an open question posed by @cite_4 , which asked whether the approach of Bayesian-optimal mechanism design could be brought to bear on the data gathering problem when the distribution over agent costs was known.
|
{
"cite_N": [
"@cite_0",
"@cite_4"
],
"mid": [
"2950731906",
"2157497706"
],
"abstract": [
"In this paper, we consider the problem of estimating a potentially sensitive (individually stigmatizing) statistic on a population. In our model, individuals are concerned about their privacy, and experience some cost as a function of their privacy loss. Nevertheless, they would be willing to participate in the survey if they were compensated for their privacy cost. These cost functions are not publicly known, however, nor do we make Bayesian assumptions about their form or distribution. Individuals are rational and will misreport their costs for privacy if doing so is in their best interest. Ghosh and Roth recently showed in this setting, when costs for privacy loss may be correlated with private types, if individuals value differential privacy, no individually rational direct revelation mechanism can compute any non-trivial estimate of the population statistic. In this paper, we circumvent this impossibility result by proposing a modified notion of how individuals experience cost as a function of their privacy loss, and by giving a mechanism which does not operate by direct revelation. Instead, our mechanism has the ability to randomly approach individuals from a population and offer them a take-it-or-leave-it offer. This is intended to model the abilities of a surveyor who may stand on a street corner and approach passers-by.",
"We initiate the study of markets for private data, through the lens of differential privacy. Although the purchase and sale of private data has already begun on a large scale, a theory of privacy as a commodity is missing. In this paper, we propose to build such a theory. Specifically, we consider a setting in which a data analyst wishes to buy information from a population from which he can estimate some statistic. The analyst wishes to obtain an accurate estimate cheaply, while the owners of the private data experience some cost for their loss of privacy, and must be compensated for this loss. Agents are selfish, and wish to maximize their profit, so our goal is to design truthful mechanisms. Our main result is that such problems can naturally be viewed and optimally solved as variants of multi-unit procurement auctions. Based on this result, we derive auctions which are optimal up to small constant factors for two natural settings: When the data analyst has a fixed accuracy goal, we show that an application of the classic Vickrey auction achieves the analyst's accuracy goal while minimizing his total payment. When the data analyst has a fixed budget, we give a mechanism which maximizes the accuracy of the resulting estimate while guaranteeing that the resulting sum payments do not exceed the analyst's budget. In both cases, our comparison class is the set of envy-free mechanisms, which correspond to the natural class of fixed-price mechanisms in our setting. In both of these results, we ignore the privacy cost due to possible correlations between an individual's private data and his valuation for privacy itself. We then show that generically, no individually rational mechanism can compensate individuals for the privacy loss incurred due to their reported valuations for privacy. This is nevertheless an important issue, and modeling it correctly is one of the many exciting directions for future work."
]
}
|
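The unbiased-estimation goal in the survey record above rests on inverse-probability weighting: if each agent participates with a known probability p_i, weighting a participant's value by 1/p_i yields an unbiased estimator of the population mean. A minimal illustrative sketch (the function names and the tiny exhaustive check are my own, not the paper's mechanism):

```python
from itertools import product

def ht_estimate(values, probs, participated):
    """Horvitz-Thompson estimator of the population mean: each
    participating value is weighted by 1/p_i, then divided by n."""
    n = len(values)
    return sum(v / p for v, p, s in zip(values, probs, participated) if s) / n

def expected_estimate(values, probs):
    """Exact expectation of the estimator, enumerating all 2^n
    participation outcomes (feasible only for tiny n)."""
    n = len(values)
    total = 0.0
    for outcome in product([0, 1], repeat=n):
        weight = 1.0
        for p, s in zip(probs, outcome):
            weight *= p if s else (1 - p)
        total += weight * ht_estimate(values, probs, outcome)
    return total

values = [2.0, 6.0, 10.0]   # private data of three agents
probs = [0.5, 0.25, 0.8]    # known participation probabilities
true_mean = sum(values) / len(values)
assert abs(expected_estimate(values, probs) - true_mean) < 1e-9
```

The variance of this estimator depends on the p_i, which is why the paper's mechanisms trade off payments (which induce participation probabilities) against estimator variance.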
1203.0631
|
1825902323
|
A Boolean function is called read-once over a basis B if it can be expressed by a formula over B where no variable appears more than once. A checking test for a read-once function f over B depending on all its variables is a set of input vectors distinguishing f from all other read-once functions of the same variables. We show that every read-once function f over B has a checking test containing O(n^l) vectors, where n is the number of relevant variables of f and l is the largest arity of functions in B. For some functions, this bound cannot be improved by more than a constant factor. The employed technique involves reconstructing f from its l-variable projections and provides a stronger form of Kuznetsov's classic theorem on read-once representations.
|
It follows from our results that for any finite basis @math , the problem of learning an unknown read-once function over @math can be solved by an algorithm making @math membership and subcube identity queries, which is polynomial in @math (here @math is the largest arity of functions in @math and a standard membership query is simply a request for the value of @math on a given input vector). This result builds upon an algorithm by Bshouty, Hancock and Hellerstein @cite_7 , which is a strong generalization of a classic exact identification algorithm by Angluin, Hellerstein and Karpinski @cite_0 .
|
{
"cite_N": [
"@cite_0",
"@cite_7"
],
"mid": [
"2071210909",
"2003523975"
],
"abstract": [
"A read-once formula is a Boolean formula in which each variable occurs, at most, once. Such formulas are also called μ-formulas or Boolean trees. This paper treats the problem of exactly identifying an unknown read-once formula using specific kinds of queries. The main results are a polynomial-time algorithm for exact identification of monotone read-once formulas using only membership queries, and a polynomial-time algorithm for exact identification of general read-once formulas using equivalence and membership queries (a protocol based on the notion of a minimally adequate teacher [1]). The results of the authors improve on Valiant's previous results for read-once formulas [26]. It is also shown that no polynomial-time algorithm using only membership queries or only equivalence queries can exactly identify all read-once formulas.",
"A formula is read-once if each variable appears on at most a single input. Previously, Angluin, Hellerstein, and Karpinski gave a polynomial time algorithm that uses membership and equivalence queries to identify exactly read-once boolean formulas over the basis {AND, OR, NOT}. In this paper we consider natural generalizations of this basis, and develop exact identification algorithms for more powerful classes of read-once formulas. We show that read-once formulas over the basis of arbitrary boolean functions of constant fan-in (i.e., any f: {0,1}^k -> {0,1}, where k is a constant) are exactly identifiable in polynomial time using membership and equivalence queries. We also show that read-once formulas over the basis of arbitrary symmetric boolean functions are exactly identifiable in polynomial time in this model. Given standard cryptographic assumptions, there is no polynomial time identification algorithm for read-twice formulas over either of these bases in the model. We further show that for any basis class B meeting certain technical conditions, any polynomial time identification algorithm for read-once formulas over B can be extended to a polynomial time identification algorithm for read-once formulas over the union of B and the arbitrary functions of constant fan-in. As a result, read-once formulas over the union of arbitrary symmetric and arbitrary constant fan-in gates are also exactly identifiable in polynomial time using membership and equivalence queries."
]
}
|
1203.0631
|
1825902323
|
A Boolean function is called read-once over a basis B if it can be expressed by a formula over B where no variable appears more than once. A checking test for a read-once function f over B depending on all its variables is a set of input vectors distinguishing f from all other read-once functions of the same variables. We show that every read-once function f over B has a checking test containing O(n^l) vectors, where n is the number of relevant variables of f and l is the largest arity of functions in B. For some functions, this bound cannot be improved by more than a constant factor. The employed technique involves reconstructing f from its l-variable projections and provides a stronger form of Kuznetsov's classic theorem on read-once representations.
|
Closely related to the notion of checking test complexity is the definition of teaching dimension introduced by Goldman and Kearns @cite_3 . A teaching sequence for a Boolean concept (a Boolean function @math ) in a known class @math is a sequence of labeled instances (pairs of the form @math ) consistent with only one function @math from @math . The teaching dimension of a class is the smallest number @math such that all concepts in the class have teaching sequences of length at most @math .
|
{
"cite_N": [
"@cite_3"
],
"mid": [
"2020764470"
],
"abstract": [
"While most theoretical work in machine learning has focused on the complexity of learning, recently there has been increasing interest in formally studying the complexity of teaching. In this paper we study the complexity of teaching by considering a variant of the on-line learning model in which a helpful teacher selects the instances. We measure the complexity of teaching a concept from a given concept class by a combinatorial measure we call the teaching dimension. Informally, the teaching dimension of a concept class is the minimum number of instances a teacher must reveal to uniquely identify any target concept chosen from the class."
]
}
|
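The teaching dimension defined in the record above can be computed exactly for tiny concept classes by brute force over instance subsets. An illustrative sketch (exponential time, toy classes only; the function names are mine):

```python
from itertools import combinations, product

def teaching_dim(concepts, instances):
    """Teaching dimension of a finite concept class: the smallest t such
    that every concept c has a teaching set of size <= t, i.e. a set of
    instances on which no other concept in the class agrees with c."""
    def min_teaching_set(c):
        others = [d for d in concepts if d != c]
        for t in range(len(instances) + 1):
            for subset in combinations(range(len(instances)), t):
                if all(any(c[i] != d[i] for i in subset) for d in others):
                    return t
        return len(instances)
    return max(min_teaching_set(c) for c in concepts)

# concept class: singletons over a 3-point domain, as truth tables
instances = [0, 1, 2]
singletons = [tuple(int(i == j) for i in instances) for j in instances]
# revealing the single positive point distinguishes a singleton from
# every other singleton, so one labeled instance suffices
assert teaching_dim(singletons, instances) == 1
```

By contrast, the class of all 2^3 concepts over this domain has teaching dimension 3: every label must be revealed.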
1203.0631
|
1825902323
|
A Boolean function is called read-once over a basis B if it can be expressed by a formula over B where no variable appears more than once. A checking test for a read-once function f over B depending on all its variables is a set of input vectors distinguishing f from all other read-once functions of the same variables. We show that every read-once function f over B has a checking test containing O(n^l) vectors, where n is the number of relevant variables of f and l is the largest arity of functions in B. For some functions, this bound cannot be improved by more than a constant factor. The employed technique involves reconstructing f from its l-variable projections and provides a stronger form of Kuznetsov's classic theorem on read-once representations.
|
Another appealing problem is that of obtaining bounds on the value of @math for individual read-once functions @math . The bound @math is obtained in @cite_10 and generalized in @cite_13 . In @cite_2 , it is shown that for a wide class of bases including @math , @math , there exist pairs of read-once functions @math such that @math is obtained from @math by substituting a constant for a variable and @math . This result shows that lower bounds on @math cannot generally be obtained by simply finding projections of @math that are already known to require a large number of vectors in their checking tests.
|
{
"cite_N": [
"@cite_13",
"@cite_10",
"@cite_2"
],
"mid": [
"2146687214",
"1975136501",
"1990119159"
],
"abstract": [
"We estimate the order of the length of a checking test for the disjunction of n variables, considered as a read-once function in an arbitrary basis.",
"Two classical problems are considered: recognizing the properties of a Boolean function given a column of its values and constructing a diagnostic test. The problems are investigated for nonrepeating functions in an arbitrary basis B. For the first problem, the decomposition method is applied to prove linear complexity of the corresponding sequential circuits; for the second problem we derive the order of the Shannon functions for a number of bases, in particular, for the basis of all functions of four variables.",
"The effect of an increase in minimum test length under substitution of constants for variables is described for the checking test problem for read-once functions. A family of bases is described, and sequences of functions that are read-once in these bases and have projections whose testing requires more vectors than the functions themselves are constructed."
]
}
|
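The checking-test notion in the read-once records above can be made concrete by brute force on a tiny instance. The sketch below (my own helper names; basis {AND, OR}, three variables) enumerates all read-once functions depending on exactly {x0, x1, x2} and greedily grows a set of input vectors distinguishing one of them from all the others:

```python
from itertools import product, combinations

def read_once_functions(vars_):
    """All read-once functions over basis {AND, OR} depending on the
    variables in vars_, as truth-table tuples over all 2^3 inputs."""
    n = 3  # total number of variables in this toy instance
    if len(vars_) == 1:
        v = vars_[0]
        return {tuple(x[v] for x in product([0, 1], repeat=n))}
    funcs = set()
    vs = list(vars_)
    # split the variable set into two nonempty disjoint parts (read-once)
    for r in range(1, len(vs)):
        for left in combinations(vs, r):
            if left[0] != vs[0]:
                continue  # count each unordered split only once
            right = tuple(v for v in vs if v not in left)
            for f in read_once_functions(left):
                for g in read_once_functions(right):
                    funcs.add(tuple(a & b for a, b in zip(f, g)))  # AND
                    funcs.add(tuple(a | b for a, b in zip(f, g)))  # OR
    return funcs

funcs = sorted(read_once_functions((0, 1, 2)))
inputs = list(product([0, 1], repeat=3))

def is_checking_test(f, test_idx):
    return all(any(f[i] != g[i] for i in test_idx)
               for g in funcs if g != f)

f = funcs[0]
# greedy: grow the test until f is distinguished from every other function
test = []
others = [g for g in funcs if g != f]
while others:
    i = max(range(len(inputs)), key=lambda i: sum(f[i] != g[i] for g in others))
    test.append(i)
    others = [g for g in others if g[i] == f[i]]
assert is_checking_test(f, test)
```

Over {AND, OR} there are exactly 8 read-once functions depending on all three variables (the two fully associative forms plus the six of shape x ∧ (y ∨ z) and x ∨ (y ∧ z)), so the brute force stays tiny.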
1202.5675
|
2949838659
|
We introduce the following notion of compressing an undirected graph G with edge-lengths and terminal vertices @math . A distance-preserving minor is a minor G' (of G) with possibly different edge-lengths, such that @math and the shortest-path distance between every pair of terminals is exactly the same in G and in G'. What is the smallest f*(k) such that every graph G with k=|R| terminals admits a distance-preserving minor G' with at most f*(k) vertices? Simple analysis shows that @math . Our main result proves that @math , significantly improving over the trivial @math . Our lower bound holds even for planar graphs G, in contrast to graphs G of constant treewidth, for which we prove that O(k) vertices suffice.
|
Coppersmith and Elkin @cite_11 studied a problem similar to ours, except that they seek subgraphs with few edges (rather than minors). Among other things, they prove that for every weighted graph @math and every set of @math terminals (sources), there exists a weighted subgraph @math , called a distance preserver, that preserves terminal distances exactly and has @math edges. They also show a nearly-matching lower bound on @math . Dor, Halperin and Zwick @cite_9 similarly asked for a graph with few edges, though not necessarily a subgraph or a minor, that preserves all distances. Woodruff @cite_5 combined their notion of emulators with Coppersmith and Elkin's preservers, and studied the size of arbitrary graphs preserving only distances between given sets of terminals (sources) in the given graph @math .
|
{
"cite_N": [
"@cite_5",
"@cite_9",
"@cite_11"
],
"mid": [
"1972920042",
"2156047991",
"2031536548"
],
"abstract": [
"An additive spanner of an unweighted undirected graph G with distortion d is a subgraph H such that for any two vertices u, v in G, we have delta_H(u,v) <= delta_G(u,v) + d. For every k = O(log n / log log n), we construct a graph G on n vertices for which any additive spanner of G with distortion 2k - 1 has Omega((1/k) n^{1 + 1/k}) edges. This matches the lower bound previously known only to hold under a 1963 conjecture of Erdős. We generalize our lower bound in a number of ways. First, we consider graph emulators introduced by Dor, Halperin, and Zwick (FOCS, 1996), where an emulator of an unweighted undirected graph G with distortion d is like an additive spanner except H may be an arbitrary weighted graph such that delta_G(u,v) <= delta_H(u,v) <= delta_G(u,v) + d. We show a lower bound of Omega((1/k^2) n^{1 + 1/k}) edges for distortion-(2k - 1) emulators. These are the first non-trivial bounds for k >= 3. Second, we parameterize our bounds in terms of the minimum degree of the graph. Namely, for minimum degree n^{1/k + c} for any c > 0, we prove a bound of Omega((1/k) n^{1 + 1/k - c(1 + 2/(k - 1))}) for additive spanners and Omega((1/k^2) n^{1 + 1/k - c(1 + 2/(k - 1))}) for emulators. For k = 2 these can be improved to Omega(n^{3/2 - c}). This partially answers a question of (SODA, 2005) for additive spanners. Finally, we continue the study of pair-wise and source-wise distance preservers defined by Coppersmith and Elkin (SODA, 2005) by considering their approximate variants and their relaxation to emulators. We prove the first lower bounds for such graphs.",
"Let G=(V,E) be an unweighted undirected graph on n vertices. A simple argument shows that computing all distances in G with an additive one-sided error of at most 1 is as hard as Boolean matrix multiplication. Building on recent work of [SIAM J. Comput., 28 (1999), pp. 1167--1181], we describe an @math -time algorithm APASP2 for computing all distances in G with an additive one-sided error of at most 2. Algorithm APASP2 is simple, easy to implement, and faster than the fastest known matrix-multiplication algorithm. Furthermore, for every even k>2, we describe an @math -time algorithm APASPk for computing all distances in G with an additive one-sided error of at most k. We also give an @math -time algorithm @math for producing stretch 3 estimated distances in an unweighted and undirected graph on n vertices. No constant stretch factor was previously achieved in @math time. We say that a weighted graph F=(V,E') k-emulates an unweighted graph G=(V,E) if for every @math we have @math . We show that every unweighted graph on n vertices has a 2-emulator with @math edges and a 4-emulator with @math edges. These results are asymptotically tight. Finally, we show that any weighted undirected graph on n vertices has a 3-spanner with @math edges and that such a 3-spanner can be built in @math time. We also describe an @math -time algorithm for estimating all distances in a weighted undirected graph on n vertices with a stretch factor of at most 3.",
"We introduce and study the notions of pairwise and sourcewise preservers. Given an undirected N-vertex graph G = (V,E) and a set P of pairs of vertices, let G' = (V,H), H ⊆ E, be called a pairwise preserver of G with respect to P if for every pair (u,w) in P, dist_G'(u,w) = dist_G(u,w). For a set S ⊆ V of sources, a pairwise preserver of G with respect to the set of all pairs P = (S choose 2) of sources is called a sourcewise preserver of G with respect to S. We prove that for every undirected possibly weighted N-vertex graph G and every set P of |P| = O(N^{1/2}) pairs of vertices of G, there exists a linear-size pairwise preserver of G with respect to P. Consequently, for every subset S ⊆ V of |S| = O(N^{1/4}) sources, there exists a linear-size sourcewise preserver of G with respect to S. On the negative side we show that neither of the two exponents (1/2 and 1/4) can be improved even when the attention is restricted to unweighted graphs. Our lower bounds involve constructions of dense convexly independent sets of vectors with small Euclidean norms. We believe that the link between the areas of discrete geometry and spanners that we establish is of independent interest and might be useful in the study of other problems in the area of low-distortion embeddings."
]
}
|
1202.5675
|
2949838659
|
We introduce the following notion of compressing an undirected graph G with edge-lengths and terminal vertices @math . A distance-preserving minor is a minor G' (of G) with possibly different edge-lengths, such that @math and the shortest-path distance between every pair of terminals is exactly the same in G and in G'. What is the smallest f*(k) such that every graph G with k=|R| terminals admits a distance-preserving minor G' with at most f*(k) vertices? Simple analysis shows that @math . Our main result proves that @math , significantly improving over the trivial @math . Our lower bound holds even for planar graphs G, in contrast to graphs G of constant treewidth, for which we prove that O(k) vertices suffice.
|
The relevant information (features) in a graph can also be maintained by a data structure that is not necessarily a graph. A notable example is Distance Oracles -- low-space data structures that can answer distance queries (often approximately) in constant time @cite_2 . These structures adhere to our main requirement of "compression" and are designed to answer queries very quickly. However, they might lose properties that are natural in graphs, such as the triangle inequality or the similarity of a minor to the given graph, which may be useful for further processing of the graph.
|
{
"cite_N": [
"@cite_2"
],
"mid": [
"2045446569"
],
"abstract": [
"Let G = (V,E) be an undirected weighted graph with |V| = n and |E| = m. Let k >= 1 be an integer. We show that G = (V,E) can be preprocessed in O(kmn^{1/k}) expected time, constructing a data structure of size O(kn^{1+1/k}), such that any subsequent distance query can be answered, approximately, in O(k) time. The approximate distance returned is of stretch at most 2k-1, that is, the quotient obtained by dividing the estimated distance by the actual distance lies between 1 and 2k-1. A 1963 girth conjecture of Erdős implies that Omega(n^{1+1/k}) space is needed in the worst case for any real stretch strictly smaller than 2k+1. The space requirement of our algorithm is, therefore, essentially optimal. The most impressive feature of our data structure is its constant query time, hence the name \"oracle\". Previously, data structures that used only O(n^{1+1/k}) space had a query time of Omega(n^{1/k}). Our algorithms are extremely simple and easy to implement efficiently. They also provide faster constructions of sparse spanners of weighted graphs, and improved tree covers and distance labelings of weighted or unweighted graphs."
]
}
|
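A natural baseline for the terminal-distance compression discussed above is the complete weighted graph on the terminals, with each edge length set to the shortest-path distance in G: it preserves terminal distances exactly with only k vertices, but it is generally not a minor of G, which is what the distance-preserving-minor records are after. A sketch (function names are mine):

```python
import heapq

def dijkstra(adj, src):
    """Shortest-path distances from src; adj maps node -> [(nbr, length)]."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def terminal_compression(adj, terminals):
    """Complete weighted graph on the terminals whose edge lengths are
    the shortest-path distances in G: trivially distance-preserving
    (by the triangle inequality of shortest paths), with k vertices."""
    out = {t: [] for t in terminals}
    for t in terminals:
        dist = dijkstra(adj, t)
        for s in terminals:
            if s != t:
                out[t].append((s, dist[s]))
    return out

# a 5-cycle with unit lengths, terminals {0, 2, 3}
adj = {i: [((i + 1) % 5, 1), ((i - 1) % 5, 1)] for i in range(5)}
H = terminal_compression(adj, [0, 2, 3])
dist_H = {(u, v): w for u in H for v, w in H[u]}
assert dist_H[(0, 2)] == 2 and dist_H[(0, 3)] == 2 and dist_H[(2, 3)] == 1
```

The point of the paper's f*(k) question is that a minor must reuse G's own vertices and contractions, so this trivial construction does not answer it.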
1202.4805
|
1530957817
|
A key challenge within the social network literature is the problem of network generation - that is, how can we create synthetic networks that match characteristics traditionally found in most real world networks? Important characteristics that are present in social networks include a power law degree distribution, small diameter and large amounts of clustering; however, most current network generators, such as the Chung Lu and Kronecker models, largely ignore the clustering present in a graph and choose to focus on preserving other network statistics, such as the power law distribution. Models such as the exponential random graph model have a transitivity parameter, but are computationally difficult to learn, making scaling to large real world networks intractable. In this work, we propose an extension to the Chung Lu random graph model, the Transitive Chung Lu (TCL) model, which incorporates the notion of a random transitive edge. That is, with some probability it will choose to connect to a node exactly two hops away, having been introduced to a 'friend of a friend'. In all other cases it will follow the standard Chung Lu model, selecting a 'random surfer' from anywhere in the graph according to the given invariant distribution. We prove TCL's expected degree distribution is equal to the degree distribution of the original graph, while being able to capture the clustering present in the network. The single parameter required by our model can be learned in seconds on graphs with millions of edges, while networks can be generated in time that is linear in the number of edges. We demonstrate the performance of TCL on four real-world social networks, including an email dataset with hundreds of thousands of nodes and millions of edges, showing TCL generates graphs that match the degree distribution, clustering coefficients and hop plots of the original networks.
|
Recently there has been a great deal of work focused on the development of generative models for small world and scale-free graphs (e.g., @cite_16 @cite_9 @cite_13 @cite_6 @cite_0 @cite_5 @cite_12 ). As an example, the Chung Lu model is able to generate a network which has a provable expected degree distribution equal to the degree distribution of the original graph. The CL model, like many, attempts to define a process which matches a subset of features observed in a network.
|
{
"cite_N": [
"@cite_9",
"@cite_6",
"@cite_0",
"@cite_5",
"@cite_16",
"@cite_13",
"@cite_12"
],
"mid": [
"2112090702",
"2115579680",
"2076844992",
"2112681514",
"",
"2008620264",
"2027377866"
],
"abstract": [
"Networks of coupled dynamical systems have been used to model biological oscillators, Josephson junction arrays, excitable media, neural networks, spatial games, genetic control networks and many other self-organizing systems. Ordinarily, the connection topology is assumed to be either completely regular or completely random. But many biological, technological and social networks lie somewhere between these two extremes. Here we explore simple models of networks that can be tuned through this middle ground: regular networks 'rewired' to introduce increasing amounts of disorder. We find that these systems can be highly clustered, like regular lattices, yet have small characteristic path lengths, like random graphs. We call them 'small-world' networks, by analogy with the small-world phenomenon (popularly known as six degrees of separation). The neural network of the worm Caenorhabditis elegans, the power grid of the western United States, and the collaboration graph of film actors are shown to be small-world networks. Models of dynamical systems with small-world coupling display enhanced signal-propagation speed, computational power, and synchronizability. In particular, infectious diseases spread more easily in small-world networks than in regular lattices.",
"The Web may be viewed as a directed graph each of whose vertices is a static HTML Web page, and each of whose edges corresponds to a hyperlink from one Web page to another. We propose and analyze random graph models inspired by a series of empirical observations on the Web. Our graph models differ from the traditional G_{n,p} models in two ways: 1. Independently chosen edges do not result in the statistics (degree distributions, clique multitudes) observed on the Web. Thus, edges in our model are statistically dependent on each other. 2. Our model introduces new vertices in the graph as time evolves. This captures the fact that the Web is changing with time. Our results are twofold: we show that graphs generated using our model exhibit the statistics observed on the Web graph, and additionally, that natural graph models proposed earlier do not exhibit them. This remains true even when these earlier models are generalized to account for the arrival of vertices over time. In particular, the sparse random graphs in our models exhibit properties that do not arise in far denser random graphs generated by Erdős–Rényi models.",
"Spanning nearly sixty years of research, statistical network analysis has passed through (at least) two generations of researchers and models. Beginning in the late 1930's, the first generation of research dealt with the distribution of various network statistics, under a variety of null models. The second generation, beginning in the 1970's and continuing into the 1980's, concerned models, usually for probabilities of relational ties among very small subsets of actors, in which various simple substantive tendencies were parameterized. Much of this research, most of which utilized log linear models, first appeared in applied statistics publications.",
"How can we generate realistic networks? In addition, how can we do so with a mathematically tractable model that allows for rigorous analysis of network properties? Real networks exhibit a long list of surprising properties: Heavy tails for the in- and out-degree distribution, heavy tails for the eigenvalues and eigenvectors, small diameters, and densification and shrinking diameters over time. Current network models and generators either fail to match several of the above properties, are complicated to analyze mathematically, or both. Here we propose a generative model for networks that is both mathematically tractable and can generate networks that have all the above mentioned structural properties. Our main idea here is to use a non-standard matrix operation, the Kronecker product, to generate graphs which we refer to as \"Kronecker graphs\". First, we show that Kronecker graphs naturally obey common network properties. In fact, we rigorously prove that they do so. We also provide empirical evidence showing that Kronecker graphs can effectively model the structure of real networks. We then present KRONFIT, a fast and scalable algorithm for fitting the Kronecker graph generation model to large real networks. A naive approach to fitting would take super-exponential time. In contrast, KRONFIT takes linear time, by exploiting the structure of Kronecker matrix multiplication and by using statistical simulation techniques. Experiments on a wide range of large real and synthetic networks show that KRONFIT finds accurate parameters that very well mimic the properties of target networks. In fact, using just four parameters we can accurately model several aspects of global network structure. Once fitted, the model parameters can be used to gain insights about the network structure, and the resulting synthetic graphs can be used for null-models, anonymization, extrapolations, and graph summarization.",
"",
"Systems as diverse as genetic networks or the World Wide Web are best described as networks with complex topology. A common property of many large networks is that the vertex connectivities follow a scale-free power-law distribution. This feature was found to be a consequence of two generic mechanisms: (i) networks expand continuously by the addition of new vertices, and (ii) new vertices attach preferentially to sites that are already well connected. A model based on these two ingredients reproduces the observed stationary scale-free distributions, which indicates that the development of large networks is governed by robust self-organizing phenomena that go beyond the particulars of the individual systems.",
"Abstract Random graph theory is used to examine the \"small-world phenomenon\"; any two strangers are connected through a short chain of mutual acquaintances. We will show that for certain families of random graphs with given expected degrees the average distance is almost surely of order log n / log d, where d is the weighted average of the sum of squares of the expected degrees. Of particular interest are power law random graphs in which the number of vertices of degree k is proportional to 1/k^β for some fixed exponent β. For the case of β > 3, we prove that the average distance of the power law graphs is almost surely of order log n / log d. However, many Internet, social, and citation networks are power law graphs with exponents in the range 2 < β < 3 for which the power law random graphs have average distance almost surely of order log log n, but have diameter of order log n (provided having some mild constraints for the average distance and maximum degree). In particular, these graphs contain a dense subgraph, which we call the core, having n^{c/log log n} vertices. Almost all vertices are within distance log log n of the core although there are vertices at distance log n from the core."
]
}
|
1202.4805
|
1530957817
|
A key challenge within the social network literature is the problem of network generation - that is, how can we create synthetic networks that match characteristics traditionally found in most real world networks? Important characteristics that are present in social networks include a power law degree distribution, small diameter and large amounts of clustering; however, most current network generators, such as the Chung Lu and Kronecker models, largely ignore the clustering present in a graph and choose to focus on preserving other network statistics, such as the power law distribution. Models such as the exponential random graph model have a transitivity parameter, but are computationally difficult to learn, making scaling to large real world networks intractable. In this work, we propose an extension to the Chung Lu random graph model, the Transitive Chung Lu (TCL) model, which incorporates the notion of a random transitive edge. That is, with some probability it will choose to connect to a node exactly two hops away, having been introduced to a 'friend of a friend'. In all other cases it will follow the standard Chung Lu model, selecting a 'random surfer' from anywhere in the graph according to the given invariant distribution. We prove TCL's expected degree distribution is equal to the degree distribution of the original graph, while being able to capture the clustering present in the network. The single parameter required by our model can be learned in seconds on graphs with millions of edges, while networks can be generated in time that is linear in the number of edges. We demonstrate the performance of TCL on four real-world social networks, including an email dataset with hundreds of thousands of nodes and millions of edges, showing TCL generates graphs that match the degree distribution, clustering coefficients and hop plots of the original networks.
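The transitive-edge rule described in this abstract can be sketched as follows. The function and parameter names are illustrative assumptions, not the authors' implementation, and the learning of the transitivity parameter is omitted:

```python
import random

def tcl_pick_target(adj, degree_dist, rho, u, rng):
    # Hypothetical sketch of one TCL edge decision: with probability rho,
    # connect u to a node exactly two hops away (a "friend of a friend");
    # otherwise fall back to the Chung Lu "random surfer", i.e. a node
    # drawn proportionally to its expected degree.
    if rng.random() < rho and adj.get(u):
        friend = rng.choice(sorted(adj[u]))          # one hop out
        if adj.get(friend):
            return rng.choice(sorted(adj[friend]))   # two hops out
    nodes, weights = zip(*sorted(degree_dist.items()))
    return rng.choices(nodes, weights=weights)[0]
```

With rho = 0 this reduces to the plain Chung Lu model; larger rho injects more triangle-closing edges, which is how TCL raises the clustering coefficient without changing the expected degree distribution.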
|
The importance of the clustering coefficient has been demonstrated by Watts and Strogatz @cite_9 . In particular, they show that small world networks (including social networks) are characterized by a short average path length and a large clustering coefficient. One recent algorithm ( @cite_14 ) matches these statistics by grouping nodes with similar degrees and generating Erdős–Rényi graphs for each group. The groups are then tied together. However, this algorithm requires a parameter that must be set manually. Existing models that can generate clustering in the network generally do not have a training algorithm.
|
{
"cite_N": [
"@cite_9",
"@cite_14"
],
"mid": [
"2112090702",
"1985339741"
],
"abstract": [
"Networks of coupled dynamical systems have been used to model biological oscillators, Josephson junction arrays, excitable media, neural networks, spatial games, genetic control networks and many other self-organizing systems. Ordinarily, the connection topology is assumed to be either completely regular or completely random. But many biological, technological and social networks lie somewhere between these two extremes. Here we explore simple models of networks that can be tuned through this middle ground: regular networks ‘rewired’ to introduce increasing amounts of disorder. We find that these systems can be highly clustered, like regular lattices, yet have small characteristic path lengths, like random graphs. We call them ‘small-world’ networks, by analogy with the small-world phenomenon (popularly known as six degrees of separation). The neural network of the worm Caenorhabditis elegans, the power grid of the western United States, and the collaboration graph of film actors are shown to be small-world networks. Models of dynamical systems with small-world coupling display enhanced signal-propagation speed, computational power, and synchronizability. In particular, infectious diseases spread more easily in small-world networks than in regular lattices.",
"Community structure plays a significant role in the analysis of social networks and similar graphs, yet this structure is little understood and not well captured by most models. We formally define a community to be a subgraph that is internally highly connected and has no deeper substructure. We use tools of combinatorics to show that any such community must contain a dense Erdős–Rényi (ER) subgraph. Based on mathematical arguments, we hypothesize that any graph with a heavy-tailed degree distribution and community structure must contain a scale-free collection of dense ER subgraphs. These theoretical observations corroborate well with empirical evidence. From this, we propose the Block Two-Level Erdős–Rényi (BTER) model, and demonstrate that it accurately captures the observable properties of many real-world social networks."
]
}
|
1202.4805
|
1530957817
|
A key challenge within the social network literature is the problem of network generation - that is, how can we create synthetic networks that match characteristics traditionally found in most real world networks? Important characteristics that are present in social networks include a power law degree distribution, small diameter and large amounts of clustering; however, most current network generators, such as the Chung Lu and Kronecker models, largely ignore the clustering present in a graph and choose to focus on preserving other network statistics, such as the power law distribution. Models such as the exponential random graph model have a transitivity parameter, but are computationally difficult to learn, making scaling to large real world networks intractable. In this work, we propose an extension to the Chung Lu random graph model, the Transitive Chung Lu (TCL) model, which incorporates the notion of a random transitive edge. That is, with some probability it will choose to connect to a node exactly two hops away, having been introduced to a 'friend of a friend'. In all other cases it will follow the standard Chung Lu model, selecting a 'random surfer' from anywhere in the graph according to the given invariant distribution. We prove TCL's expected degree distribution is equal to the degree distribution of the original graph, while being able to capture the clustering present in the network. The single parameter required by our model can be learned in seconds on graphs with millions of edges, while networks can be generated in time that is linear in the number of edges. We demonstrate the performance of TCL on four real-world social networks, including an email dataset with hundreds of thousands of nodes and millions of edges, showing TCL generates graphs that match the degree distribution, clustering coefficients and hop plots of the original networks.
|
One method that can model clustering and can learn the associated parameter is the Exponential Random Graph Model (ERGM) @cite_0 . ERGMs define a probability distribution over the set of possible graphs with a log-linear model that uses feature counts of local graph properties. However, these models are typically hard to train as each update of the Fisher scoring function takes @math . With real-world networks numbering in the hundreds of thousands if not millions of nodes, this makes ERGMs impossible to fit.
|
{
"cite_N": [
"@cite_0"
],
"mid": [
"2076844992"
],
"abstract": [
"Spanning nearly sixty years of research, statistical network analysis has passed through (at least) two generations of researchers and models. Beginning in the late 1930's, the first generation of research dealt with the distribution of various network statistics, under a variety of null models. The second generation, beginning in the 1970's and continuing into the 1980's, concerned models, usually for probabilities of relational ties among very small subsets of actors, in which various simple substantive tendencies were parameterized. Much of this research, most of which utilized log linear models, first appeared in applied statistics publications."
]
}
|
1202.4805
|
1530957817
|
A key challenge within the social network literature is the problem of network generation - that is, how can we create synthetic networks that match characteristics traditionally found in most real world networks? Important characteristics that are present in social networks include a power law degree distribution, small diameter and large amounts of clustering; however, most current network generators, such as the Chung Lu and Kronecker models, largely ignore the clustering present in a graph and choose to focus on preserving other network statistics, such as the power law distribution. Models such as the exponential random graph model have a transitivity parameter, but are computationally difficult to learn, making scaling to large real world networks intractable. In this work, we propose an extension to the Chung Lu random graph model, the Transitive Chung Lu (TCL) model, which incorporates the notion of a random transitive edge. That is, with some probability it will choose to connect to a node exactly two hops away, having been introduced to a 'friend of a friend'. In all other cases it will follow the standard Chung Lu model, selecting a 'random surfer' from anywhere in the graph according to the given invariant distribution. We prove TCL's expected degree distribution is equal to the degree distribution of the original graph, while being able to capture the clustering present in the network. The single parameter required by our model can be learned in seconds on graphs with millions of edges, while networks can be generated in time that is linear in the number of edges. We demonstrate the performance of TCL on four real-world social networks, including an email dataset with hundreds of thousands of nodes and millions of edges, showing TCL generates graphs that match the degree distribution, clustering coefficients and hop plots of the original networks.
|
Another method is the Kronecker product graph model (KPGM), a scalable algorithm for learning models of large-scale networks that empirically preserves a wide range of global properties of interest, such as degree distributions, and path-length distributions @cite_5 . Thanks to these characteristics, KPGM has been selected as a generation algorithm for the Graph 500 Supercomputer Benchmark @cite_15 .
|
{
"cite_N": [
"@cite_5",
"@cite_15"
],
"mid": [
"2112681514",
"2949321510"
],
"abstract": [
"How can we generate realistic networks? In addition, how can we do so with a mathematically tractable model that allows for rigorous analysis of network properties? Real networks exhibit a long list of surprising properties: Heavy tails for the in- and out-degree distribution, heavy tails for the eigenvalues and eigenvectors, small diameters, and densification and shrinking diameters over time. Current network models and generators either fail to match several of the above properties, are complicated to analyze mathematically, or both. Here we propose a generative model for networks that is both mathematically tractable and can generate networks that have all the above mentioned structural properties. Our main idea here is to use a non-standard matrix operation, the Kronecker product, to generate graphs which we refer to as \"Kronecker graphs\". First, we show that Kronecker graphs naturally obey common network properties. In fact, we rigorously prove that they do so. We also provide empirical evidence showing that Kronecker graphs can effectively model the structure of real networks. We then present KRONFIT, a fast and scalable algorithm for fitting the Kronecker graph generation model to large real networks. A naive approach to fitting would take super-exponential time. In contrast, KRONFIT takes linear time, by exploiting the structure of Kronecker matrix multiplication and by using statistical simulation techniques. Experiments on a wide range of large real and synthetic networks show that KRONFIT finds accurate parameters that very well mimic the properties of target networks. In fact, using just four parameters we can accurately model several aspects of global network structure. Once fitted, the model parameters can be used to gain insights about the network structure, and the resulting synthetic graphs can be used for null-models, anonymization, extrapolations, and graph summarization.",
"The analysis of massive graphs is now becoming a very important part of science and industrial research. This has led to the construction of a large variety of graph models, each with their own advantages. The Stochastic Kronecker Graph (SKG) model has been chosen by the Graph500 steering committee to create supercomputer benchmarks for graph algorithms. The major reasons for this are its easy parallelization and ability to mirror real data. Although SKG is easy to implement, there is little understanding of the properties and behavior of this model. We show that the parallel variant of the edge-configuration model given by Chung and Lu (referred to as CL) is notably similar to the SKG model. The graph properties of an SKG are extremely close to those of a CL graph generated with the appropriate parameters. Indeed, the final probability matrix used by SKG is almost identical to that of a CL model. This implies that the graph distribution represented by SKG is almost the same as that given by a CL model. We also show that when it comes to fitting real data, CL performs as well as SKG based on empirical studies of graph properties. CL has the added benefit of a trivially simple fitting procedure and exactly matching the degree distribution. Our results suggest that users of the SKG model should consider the CL model because of its similar properties, simpler structure, and ability to fit a wider range of degree distributions. At the very least, CL is a good control model to compare against."
]
}
|
1202.4805
|
1530957817
|
A key challenge within the social network literature is the problem of network generation - that is, how can we create synthetic networks that match characteristics traditionally found in most real world networks? Important characteristics that are present in social networks include a power law degree distribution, small diameter and large amounts of clustering; however, most current network generators, such as the Chung Lu and Kronecker models, largely ignore the clustering present in a graph and choose to focus on preserving other network statistics, such as the power law distribution. Models such as the exponential random graph model have a transitivity parameter, but are computationally difficult to learn, making scaling to large real world networks intractable. In this work, we propose an extension to the Chung Lu random graph model, the Transitive Chung Lu (TCL) model, which incorporates the notion of a random transitive edge. That is, with some probability it will choose to connect to a node exactly two hops away, having been introduced to a 'friend of a friend'. In all other cases it will follow the standard Chung Lu model, selecting a 'random surfer' from anywhere in the graph according to the given invariant distribution. We prove TCL's expected degree distribution is equal to the degree distribution of the original graph, while being able to capture the clustering present in the network. The single parameter required by our model can be learned in seconds on graphs with millions of edges, while networks can be generated in time that is linear in the number of edges. We demonstrate the performance of TCL on four real-world social networks, including an email dataset with hundreds of thousands of nodes and millions of edges, showing TCL generates graphs that match the degree distribution, clustering coefficients and hop plots of the original networks.
|
The KPGM starts with an initial square matrix @math of size @math , where each cell value is a probability. To generate a graph, the algorithm applies k Kronecker multiplications to grow the matrix to the desired size (obtaining @math with @math rows and columns). Each edge is then independently sampled using a Bernoulli distribution with parameter @math . A naive implementation of this algorithm runs in @math time, but improved algorithms can generate a network in @math , where M is the number of edges in the network @cite_5 . According to @cite_5 , the learning time is linear in the number of edges.
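As a concrete illustration of the generation step just described, here is a naive sketch that materializes the full probability matrix and draws each cell independently; the initiator values in the usage example are made up, and neither KPGM parameter fitting nor the faster edge-by-edge sampling variants are shown:

```python
import random

def kron(A, B):
    # Kronecker product of two square matrices given as lists of lists.
    n, m = len(A), len(B)
    return [[A[i // m][j // m] * B[i % m][j % m]
             for j in range(n * m)] for i in range(n * m)]

def kpgm_sample(P1, k, seed=0):
    # k-fold Kronecker power of the initiator matrix P1, then one
    # independent Bernoulli draw per cell -- the naive quadratic-time
    # scheme mentioned in the text.
    P = P1
    for _ in range(k - 1):
        P = kron(P, P1)
    rng = random.Random(seed)
    edges = [(i, j) for i in range(len(P)) for j in range(len(P))
             if rng.random() < P[i][j]]
    return P, edges

# Example with an invented 2x2 initiator: 3 Kronecker powers give an
# 8x8 probability matrix, from which edges are sampled.
P, edges = kpgm_sample([[0.9, 0.5], [0.5, 0.1]], 3)
```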
|
{
"cite_N": [
"@cite_5"
],
"mid": [
"2112681514"
],
"abstract": [
"How can we generate realistic networks? In addition, how can we do so with a mathematically tractable model that allows for rigorous analysis of network properties? Real networks exhibit a long list of surprising properties: Heavy tails for the in- and out-degree distribution, heavy tails for the eigenvalues and eigenvectors, small diameters, and densification and shrinking diameters over time. Current network models and generators either fail to match several of the above properties, are complicated to analyze mathematically, or both. Here we propose a generative model for networks that is both mathematically tractable and can generate networks that have all the above mentioned structural properties. Our main idea here is to use a non-standard matrix operation, the Kronecker product, to generate graphs which we refer to as \"Kronecker graphs\". First, we show that Kronecker graphs naturally obey common network properties. In fact, we rigorously prove that they do so. We also provide empirical evidence showing that Kronecker graphs can effectively model the structure of real networks. We then present KRONFIT, a fast and scalable algorithm for fitting the Kronecker graph generation model to large real networks. A naive approach to fitting would take super-exponential time. In contrast, KRONFIT takes linear time, by exploiting the structure of Kronecker matrix multiplication and by using statistical simulation techniques. Experiments on a wide range of large real and synthetic networks show that KRONFIT finds accurate parameters that very well mimic the properties of target networks. In fact, using just four parameters we can accurately model several aspects of global network structure. Once fitted, the model parameters can be used to gain insights about the network structure, and the resulting synthetic graphs can be used for null-models, anonymization, extrapolations, and graph summarization."
]
}
|
1202.5298
|
2951137439
|
We study the minmax optimization problem introduced in [22] for computing policies for batch mode reinforcement learning in a deterministic setting. First, we show that this problem is NP-hard. In the two-stage case, we provide two relaxation schemes. The first relaxation scheme works by dropping some constraints in order to obtain a problem that is solvable in polynomial time. The second relaxation scheme, based on a Lagrangian relaxation where all constraints are dualized, leads to a conic quadratic programming problem. We also theoretically prove and empirically illustrate that both relaxation schemes provide better results than those given in [22].
|
Several works have already been built upon @math paradigms for computing policies in an RL setting. In stochastic frameworks, @math approaches are often successful for deriving robust solutions with respect to uncertainties in the (parametric) representation of the probability distributions associated with the environment @cite_6 . In the context where several agents interact with each other in the same environment, @math approaches appear to be efficient strategies for designing policies that maximize one agent's reward given the worst adversarial behavior of the other agents @cite_1 @cite_2 . They have also received some attention for solving partially observable Markov decision processes @cite_17 @cite_10 .
|
{
"cite_N": [
"@cite_1",
"@cite_6",
"@cite_2",
"@cite_10",
"@cite_17"
],
"mid": [
"1542941925",
"2165622730",
"1572814100",
"2083104145",
"1978790532"
],
"abstract": [
"In the Markov decision process (MDP) formalization of reinforcement learning, a single adaptive agent interacts with an environment defined by a probabilistic transition function. In this solipsis-tic view, secondary agents can only be part of the environment and are therefore fixed in their behavior. The framework of Markov games allows us to widen this view to include multiple adaptive agents with interacting or competing goals. This paper considers a step in this direction in which exactly two agents with diametrically opposed goals share an environment. It describes a Q-learning-like algorithm for finding optimal policies and demonstrates its application to a simple two-player game in which the optimal policy is probabilistic.",
"Markov decision processes are an effective tool in modeling decision making in uncertain dynamic environments. Because the parameters of these models typically are estimated from data or learned from experience, it is not surprising that the actual performance of a chosen strategy often differs significantly from the designer's initial expectations due to unavoidable modeling ambiguity. In this paper, we present a set of percentile criteria that are conceptually natural and representative of the trade-off between optimistic and pessimistic views of the question. We study the use of these criteria under different forms of uncertainty for both the rewards and the transitions. Some forms are shown to be efficiently solvable and others highly intractable. In each case, we outline solution concepts that take parametric uncertainty into account in the process of decision making.",
"Game playing has always been considered an intellectual activity requiring a good level of intelligence. This paper focuses on Adversarial Tetris, a variation of the well-known Tetris game, introduced at the 3rd International Reinforcement Learning Competition in 2009. In Adversarial Tetris the mission of the player to complete as many lines as possible is actively hindered by an unknown adversary who selects the falling tetraminoes in ways that make the game harder for the player. In addition, there are boards of different sizes and learning ability is tested over a variety of boards and adversaries. This paper describes the design and implementation of an agent capable of learning to improve his strategy against any adversary and any board size. The agent employs MiniMax search enhanced with Alpha-Beta pruning for looking ahead within the game tree and a variation of the Least-Squares Temporal Difference Learning (LSTD) algorithm for learning an appropriate state evaluation function over a small set of features. The learned strategies exhibit good performance over a wide range of boards and adversaries.",
"Abstract Real-time heuristic search methods interleave planning and plan executions and plan only in the part of the domain around the current state of the agents. So far, real-time heuristic search methods have mostly been applied to deterministic planning tasks. In this article, we argue that real-time heuristic search methods can efficiently solve nondeterministic planning tasks. We introduce Min-Max Learning Real-Time A∗ (Min-Max LRTA∗), a real-time heuristic search method that generalizes Korf's LRTA∗ to nondeterministic domains, and apply it to robot-navigation tasks in mazes, where the robots know the maze but do not know their initial position and orientation (pose). These planning tasks can be modeled as planning tasks in nondeterministic domains whose states are sets of poses. We show that Min-Max LRTA∗ solves the robot-navigation tasks fast, converges quickly, and requires only a small amount of memory.",
"Abstract The partially observable Markov decision process (POMDP) model of environments was first explored in the engineering and operations research communities 40 years ago. More recently, the model has been embraced by researchers in artificial intelligence and machine learning, leading to a flurry of solution algorithms that can identify optimal or near-optimal behavior in many environments represented as POMDPs. The purpose of this article is to introduce the POMDP model to behavioral scientists who may wish to apply the framework to the problem of understanding normative behavior in experimental settings. The article includes concrete examples using a publicly-available POMDP solution package."
]
}
|
1202.5298
|
2951137439
|
We study the minmax optimization problem introduced in [22] for computing policies for batch mode reinforcement learning in a deterministic setting. First, we show that this problem is NP-hard. In the two-stage case, we provide two relaxation schemes. The first relaxation scheme works by dropping some constraints in order to obtain a problem that is solvable in polynomial time. The second relaxation scheme, based on a Lagrangian relaxation where all constraints are dualized, leads to a conic quadratic programming problem. We also theoretically prove and empirically illustrate that both relaxation schemes provide better results than those given in [22].
|
The @math approach towards generalization, originally introduced in @cite_20 , implicitly relies on a methodology for computing lower bounds on the worst possible return (considering any compatible environment) in a deterministic setting with a mostly unknown actual environment. In this respect, it is related to other approaches that aim at computing performance guarantees on the returns of inferred policies @cite_33 @cite_18 @cite_38 .
|
{
"cite_N": [
"@cite_38",
"@cite_18",
"@cite_33",
"@cite_20"
],
"mid": [
"2284456400",
"2164985671",
"2159951376",
"1503631637"
],
"abstract": [
"We present a framework for computing bounds for the return of a policy in finite-horizon, continuous-state Markov Decision Processes with bounded state transitions. The state transition bounds can be based on either prior knowledge alone, or on a combination of prior knowledge and data. Our framework uses a piecewise-constant representation of the return bounds and a backwards iteration process. We instantiate this framework for a previously investigated type of prior knowledge --- namely, Lipschitz continuity of the transition function. In this context, we show that the existing bounds of (2009, 2010) can be expressed as a particular instantiation of our framework, by bounding the immediate rewards using Lipschitz continuity and choosing a particular form for the regions in the piecewise-constant representation. We also show how different instantiations of our framework can improve upon their bounds.",
"Because many illnesses show heterogeneous response to treatment, there is increasing interest in individualizing treatment to patients [Arch. Gen. Psychiatry 66 (2009) 128―133]. An individualized treatment rule is a decision rule that recommends treatment according to patient characteristics. We consider the use of clinical trial data in the construction of an individualized treatment rule leading to highest mean response. This is a difficult computational problem because the objective function is the expectation of a weighted indicator function that is nonconcave in the parameters. Furthermore, there are frequently many pretreatment variables that may or may not be useful in constructing an optimal individualized treatment rule, yet cost and interpretability considerations imply that only a few variables should be used by the individualized treatment rule. To address these challenges, we consider estimation based on ℓ1-penalized least squares. This approach is justified via a finite sample upper bound on the difference between the mean response due to the estimated individualized treatment rule and the mean response due to the optimal individualized treatment rule.",
"We consider the bias and variance of value function estimation that are caused by using an empirical model instead of the true model. We analyze these bias and variance for Markov processes from a classical (frequentist) statistical point of view, and in a Bayesian setting. Using a second order approximation, we provide explicit expressions for the bias and variance in terms of the transition counts and the reward statistics. We present supporting experiments with artificial Markov chains and with a large transactional database provided by a mail-order catalog firm.",
"In this paper, we introduce a min max approach for addressing the generalization problem in Reinforcement Learning. The min max approach works by determining a sequence of actions that maximizes the worst return that could possibly be obtained considering any dynamics and reward function compatible with the sample of trajectories and some prior knowledge on the environment. We consider the particular case of deterministic Lipschitz continuous environments over continuous state spaces, finite action spaces, and a finite optimization horizon. We discuss the non-triviality of computing an exact solution of the min max problem even after reformulating it so as to avoid search in function spaces. For addressing this problem, we propose to replace, inside this min max problem, the search for the worst environment given a sequence of actions by an expression that lower bounds the worst return that can be obtained for a given sequence of actions. This lower bound has a tightness that depends on the sample sparsity. From there, we propose an algorithm of polynomial complexity that returns a sequence of actions leading to the maximization of this lower bound. We give a condition on the sample sparsity ensuring that, for a given initial state, the proposed algorithm produces an optimal sequence of actions in open-loop. Our experiments show that this algorithm can lead to more cautious policies than algorithms combining dynamic programming with function approximators."
]
}
|
1202.5230
|
1614522028
|
Graphs are used to model interactions in a variety of contexts, and there is a growing need to quickly assess the structure of a graph. Some of the most useful graph metrics, especially those measuring social cohesion, are based on triangles. Despite the importance of these triadic measures, associated algorithms can be extremely expensive. We propose a new method based on wedge sampling. This versatile technique allows for the fast and accurate approximation of all current variants of clustering coefficients and enables rapid uniform sampling of the triangles of a graph. Our methods come with provable and practical time-approximation tradeoffs for all computations. We provide extensive results that show our methods are orders of magnitude faster than the state-of-the-art, while providing nearly the accuracy of full enumeration. Our results will enable more wide-scale adoption of triadic measures for analysis of extremely large graphs, as demonstrated on several real-world examples.
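A minimal sketch of the wedge-sampling idea from this abstract (names are illustrative assumptions; the paper's estimators and error bounds are not reproduced): a wedge is a path u–v–w centred at v, so sample centres proportionally to d(v) choose 2, pick two random neighbours, and report the fraction of sampled wedges that are closed by an edge (u, w).

```python
import random

def clustering_by_wedge_sampling(adj, samples, seed=0):
    # Nodes of degree >= 2 are wedge centres; a centre v carries
    # C(d(v), 2) wedges, so draw centres with those weights.
    rng = random.Random(seed)
    centres = [v for v in adj if len(adj[v]) >= 2]
    weights = [len(adj[v]) * (len(adj[v]) - 1) // 2 for v in centres]
    closed = 0
    for _ in range(samples):
        v = rng.choices(centres, weights=weights)[0]
        u, w = rng.sample(sorted(adj[v]), 2)  # the two wedge endpoints
        if w in adj[u]:                       # wedge is closed: a triangle
            closed += 1
    # Fraction of closed wedges estimates the global clustering coefficient.
    return closed / samples
```

The cost depends only on the number of sampled wedges, not on the (possibly enormous) number of triangles, which is the tradeoff the abstract highlights against full enumeration.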
|
There has been significant work on enumeration of all triangles @cite_7 @cite_6 @cite_17 @cite_19 @cite_28 . Recent work by Cohen @cite_25 and Suri and Vassilvitskii @cite_33 gives parallel implementations of these algorithms. @cite_18 give a massively parallel algorithm for computing clustering coefficients. Enumeration algorithms, however, can be very expensive, since graphs even of moderate size (millions of vertices) can have an extremely large number of triangles (see e.g., prop ). Eigenvalue trace based methods have been used by Tsourakakis @cite_20 and Avron @cite_23 to compute estimates of the total and per-degree number of triangles. However, computing eigenvalues (even just a few of them) is a compute-intensive task and quickly becomes intractable on large graphs.
|
{
"cite_N": [
"@cite_18",
"@cite_33",
"@cite_7",
"@cite_28",
"@cite_6",
"@cite_19",
"@cite_23",
"@cite_25",
"@cite_20",
"@cite_17"
],
"mid": [
"2102322109",
"2103314753",
"2055245094",
"2012720017",
"1489509891",
"",
"2342372809",
"2019724001",
"2120595041",
"2016311778"
],
"abstract": [
"In this paper we study the problem of local triangle counting in large graphs. Namely, given a large graph G = (V, E) we want to estimate as accurately as possible the number of triangles incident to every node υ ∈ V in the graph. The problem of computing the global number of triangles in a graph has been considered before, but to our knowledge this is the first paper that addresses the problem of local triangle counting with a focus on the efficiency issues arising in massive graphs. The distribution of the local number of triangles and the related local clustering coefficient can be used in many interesting applications. For example, we show that the measures we compute can help to detect the presence of spamming activity in large-scale Web graphs, as well as to provide useful features to assess content quality in social networks. For computing the local number of triangles we propose two approximation algorithms, which are based on the idea of min-wise independent permutations ( 1998). Our algorithms operate in a semi-streaming fashion, using O(|V|) space in main memory and performing O(log |V|) sequential scans over the edges of the graph. The first algorithm we describe in this paper also uses O(|E|) space in external memory during computation, while the second algorithm uses only main memory. We present the theoretical analysis as well as experimental results in massive graphs demonstrating the practical efficiency of our approach.",
"Analyzing network associations with performance in three study populations, I find that secondhand brokerage--moving information between people to whom one is only connected indirectly--often has little or no value. Brokerage benefits are dramatically concentrated in the immediate network around a person. Why that is so, and conditions under which it is more or less so, are examined. The research implication is that brokerage can be measured with designs in which data are limited to an immediate network. The theory implication is that the social capital of brokerage is a local phenomenon mirrored in the Austrian market metaphor's emphasis on tacit, local knowledge.",
"In this paper we introduce a new simple strategy into edge-searching of a graph, which is useful to the various subgraph listing problems. Applying the strategy, we obtain the following four algorithms. The first one lists all the triangles in a graph G in @math time, where m is the number of edges of G and @math the arboricity of G. The second finds all the quadrangles in @math time. Since @math is at most three for a planar graph G, both run in linear time for a planar graph. The third lists all the complete subgraphs @math of order l in @math time. The fourth lists all the cliques in @math time per clique. All the algorithms require linear space. We also establish an upper bound on @math for a graph @math , where n is the number of vertices in G.",
"Triangle listing is one of the fundamental algorithmic problems whose solution has numerous applications especially in the analysis of complex networks, such as the computation of clustering coefficient, transitivity, triangular connectivity, etc. Existing algorithms for triangle listing are mainly in-memory algorithms, whose performance cannot scale with the massive volume of today's fast growing networks. When the input graph cannot fit into main memory, triangle listing requires random disk accesses that can incur prohibitively large I O cost. Some streaming and sampling algorithms have been proposed but these are approximation algorithms. We propose an I O-efficient algorithm for triangle listing. Our algorithm is exact and avoids random disk access. Our results show that our algorithm is scalable and outperforms the state-of-the-art local triangle estimation algorithm.",
"In the past, the fundamental graph problem of triangle counting and listing has been studied intensively from a theoretical point of view. Recently, triangle counting has also become a widely used tool in network analysis. Due to the very large size of networks like the Internet, WWW or social networks, the efficiency of algorithms for triangle counting and listing is an important issue. The main intention of this work is to evaluate the practicability of triangle counting and listing in very large graphs with various degree distributions. We give a surprisingly simple enhancement of a well known algorithm that performs best, and makes triangle listing and counting in huge networks feasible. This paper is a condensed presentation of [SW05].",
"",
"Triangle counting is an important problem in graph mining with several real-world applications. Interesting metrics, such as the clustering coefficient and the transitivity ratio, involve computing the number of triangles. Furthermore, several interesting graph mining applications rely on computing the number of triangles in a large-scale graph. However, exact triangle counting is expensive and memory consuming, and current approximation algorithms are unsatisfactory and not practical for very large-scale graphs. In this paper we present a new highly-parallel randomized algorithm for approximating the number of triangles in an undirected graph. Our algorithm uses a well-known relation between the number of triangles and the trace of the cubed adjacency matrix. A Monte-Carlo simulation is used to estimate this quantity. Each sample requires O(|E|) time and O(ε^{-2} log(1/δ) ρ(G)^2) samples are required to guarantee an (ε, δ)-approximation, where ρ(G) is a measure of the triangle sparsity of G (ρ(G) is not necessarily small). Our algorithm requires only O(|V|) space in order to work efficiently. We present experiments that demonstrate that in practice usually only O(log^2 |V|) samples are required to get good approximations for graphs frequently encountered in data-mining tasks, and that our algorithm is competitive with state-of-the-art approximate triangle counting methods both in terms of accuracy and in terms of running-time. The use of Monte-Carlo simulation supports parallelization well: our algorithm is embarrassingly parallel with a critical path of only O(|E|), achievable on as few as O(log^2 |V|) processors.",
"As the size of graphs for analysis continues to grow, methods of graph processing that scale well have become increasingly important. One way to handle large datasets is to disperse them across an array of networked computers, each of which implements simple sorting and accumulating, or MapReduce, operations. This cloud computing approach offers many attractive features. If decomposing useful graph operations in terms of MapReduce cycles is possible, it provides incentive for seriously considering cloud computing. Moreover, it offers a way to handle a large graph on a single machine that can't hold the entire graph as well as enables streaming graph processing. This article examines this possibility.",
"How can we quickly find the number of triangles in a large graph, without actually counting them? Triangles are important for real world social networks, lying at the heart of the clustering coefficient and of the transitivity ratio. However, straight-forward and even approximate counting algorithms can be slow, trying to execute or approximate the equivalent of a 3-way database join. In this paper, we provide two algorithms, the EigenTriangle for counting the total number of triangles in a graph, and the EigenTriangleLocal algorithm that gives the count of triangles that contain a desired node. Additional contributions include the following: (a) We show that both algorithms achieve excellent accuracy, with up to ≈1000× faster execution time, on several, real graphs and (b) we discover two new power laws (degree-triangle and triangle-participation laws) with surprising properties.",
"Finding, counting and or listing triangles (three vertices with three edges) in massive graphs are natural fundamental problems, which have recently received much attention because of their importance in complex network analysis. Here we provide a detailed survey of proposed main-memory solutions to these problems, in a unified way. We note that previous authors have paid surprisingly little attention to space complexity of main-memory solutions, despite its both fundamental and practical interest. We therefore detail space complexities of known algorithms and discuss their implications. We also present new algorithms which are time optimal for triangle listing and beats previous algorithms concerning space needs. They have the additional advantage of performing better on power-law graphs, which we also detail. We finally show with an experimental study that these two algorithms perform very well in practice, allowing us to handle cases which were previously out of reach."
]
}
|
1202.5230
|
1614522028
|
Graphs are used to model interactions in a variety of contexts, and there is a growing need to quickly assess the structure of a graph. Some of the most useful graph metrics, especially those measuring social cohesion, are based on triangles. Despite the importance of these triadic measures, associated algorithms can be extremely expensive. We propose a new method based on wedge sampling. This versatile technique allows for the fast and accurate approximation of all current variants of clustering coefficients and enables rapid uniform sampling of the triangles of a graph. Our methods come with provable and practical time-approximation tradeoffs for all computations. We provide extensive results that show our methods are orders of magnitude faster than the state-of-the-art, while providing nearly the accuracy of full enumeration. Our results will enable more wide-scale adoption of triadic measures for analysis of extremely large graphs, as demonstrated on several real-world examples.
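The wedge-sampling idea above can be sketched in a few lines (an illustrative reconstruction under stated assumptions, not the authors' implementation): a wedge is a path of length two, the closed-wedge fraction is estimated by uniform sampling, and the triangle total follows because each triangle closes exactly three wedges.

```python
import random


def wedge_sampling_triangles(adj, samples=10000, seed=0):
    """Estimate the number of triangles by uniform wedge sampling:
    triangles ≈ (fraction of sampled wedges that are closed)
                * (total number of wedges) / 3.

    adj: dict mapping each vertex to the set of its neighbours.
    """
    rng = random.Random(seed)
    centres = [v for v in adj if len(adj[v]) >= 2]
    # a vertex of degree d is the centre of C(d, 2) wedges
    weights = [len(adj[v]) * (len(adj[v]) - 1) // 2 for v in centres]
    total_wedges = sum(weights)
    closed = 0
    for _ in range(samples):
        # uniform random wedge: draw a centre proportionally to its
        # wedge count, then two distinct neighbours of that centre
        v = rng.choices(centres, weights=weights)[0]
        a, b = rng.sample(sorted(adj[v]), 2)
        if b in adj[a]:  # the wedge a-v-b is closed by the edge (a, b)
            closed += 1
    return closed / samples * total_wedges / 3


# In K5 every wedge is closed, so the estimate is exact: 10 triangles.
k5 = {u: {v for v in range(5) if v != u} for u in range(5)}
```

The per-sample cost is a single adjacency lookup, which is what gives the time-approximation tradeoffs claimed in the abstract their practical force.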
|
Most relevant to our work are sampling mechanisms. @cite_9 started the use of sparsification methods, the most important of which is Doulion @cite_3 . This method sparsifies the graph by keeping each edge with probability @math ; counts the triangles in the sparsified graph; and multiplies this count by @math to estimate the number of triangles in the original graph. Various theoretical analyses of this algorithm (and its variants) have been proposed @cite_1 @cite_5 @cite_12 . One of the main benefits of Doulion is that it reduces a large graph to a smaller one that can be loaded into memory. However, its estimate can suffer from high variance @cite_2 . Alternative sampling mechanisms have been proposed for streaming and semi-streaming algorithms @cite_24 @cite_31 @cite_8 @cite_30 .
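A minimal sketch of the Doulion-style estimator described above (illustrative only; the helper names and graph representation are assumptions, not the reference implementation). Each triangle survives edge sparsification with probability p^3, hence the 1/p^3 scaling:

```python
import random


def count_triangles(adj):
    """Exact triangle count via common-neighbour intersection."""
    return sum(
        1
        for u in adj
        for v in adj[u] if v > u
        for w in adj[u] & adj[v] if w > v
    )


def doulion_estimate(edges, p, seed=0):
    """Keep each edge independently with probability p, count the
    triangles that survive, and scale the count by 1/p**3."""
    rng = random.Random(seed)
    adj = {}
    for u, v in edges:
        if rng.random() < p:
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)
    return count_triangles(adj) / p ** 3


# K6 has C(6, 3) = 20 triangles; with p = 1 the estimate is exact.
k6_edges = [(u, v) for u in range(6) for v in range(u + 1, 6)]
```

The estimator is unbiased, but for small p or triangle-sparse graphs few triangles survive sparsification, which is precisely the high-variance behaviour noted above.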
|
{
"cite_N": [
"@cite_30",
"@cite_8",
"@cite_9",
"@cite_1",
"@cite_3",
"@cite_24",
"@cite_2",
"@cite_5",
"@cite_31",
"@cite_12"
],
"mid": [
"",
"",
"2112090702",
"1542957328",
"2158432527",
"2002576896",
"89448491",
"2050137450",
"",
"2002205566"
],
"abstract": [
"",
"",
"Networks of coupled dynamical systems have been used to model biological oscillators1,2,3,4, Josephson junction arrays5,6, excitable media7, neural networks8,9,10, spatial games11, genetic control networks12 and many other self-organizing systems. Ordinarily, the connection topology is assumed to be either completely regular or completely random. But many biological, technological and social networks lie somewhere between these two extremes. Here we explore simple models of networks that can be tuned through this middle ground: regular networks ‘rewired’ to introduce increasing amounts of disorder. We find that these systems can be highly clustered, like regular lattices, yet have small characteristic path lengths, like random graphs. We call them ‘small-world’ networks, by analogy with the small-world phenomenon13,14 (popularly known as six degrees of separation15). The neural network of the worm Caenorhabditis elegans, the power grid of the western United States, and the collaboration graph of film actors are shown to be small-world networks. Models of dynamical systems with small-world coupling display enhanced signal-propagation speed, computational power, and synchronizability. In particular, infectious diseases spread more easily in small-world networks than in regular lattices.",
"In this paper we present an efficient triangle counting algorithm which can be adapted to the semistreaming model [12]. The key idea of our algorithm is to combine the sampling algorithm of [31,32] and the partitioning of the set of vertices into a high degree and a low degree subset respectively as in [1], treating each set appropriately. We obtain a running time O(m + m^{3/2} Δ log n / (t ε^2)) and an ε-approximation (multiplicative error), where n is the number of vertices, m the number of edges and Δ the maximum number of triangles an edge is contained in. Furthermore, we show how this algorithm can be adapted to the semistreaming model with space usage O(m^{1/2} log n + m^{3/2} Δ log n / (t ε^2)) and a constant number of passes (three) over the graph stream. We apply our methods in various networks with several millions of edges and we obtain excellent results. Finally, we propose a random projection based method for triangle counting and provide a sufficient condition to obtain an estimate with low variance.",
"Counting the number of triangles in a graph is a beautiful algorithmic problem which has gained importance over the last years due to its significant role in complex network analysis. Metrics frequently computed such as the clustering coefficient and the transitivity ratio involve the execution of a triangle counting algorithm. Furthermore, several interesting graph mining applications rely on computing the number of triangles in the graph of interest. In this paper, we focus on the problem of counting triangles in a graph. We propose a practical method, from which all triangle counting algorithms can potentially benefit. Using a straightforward triangle counting algorithm as a black box, we performed 166 experiments on real-world networks and on synthetic datasets as well, where we show that our method works with high accuracy, typically more than 99%, and gives significant speedups, resulting in even ≈ 130 times faster performance.",
"We introduce reductions in the streaming model as a tool in the design of streaming algorithms. We develop the concept of list-efficient streaming algorithms that are essential to the design of efficient streaming algorithms through reductions.Our results include a suite of list-efficient streaming algorithms for basic statistical primitives. Using the reduction paradigm along with these tools, we design streaming algorithms for approximately counting the number of triangles in a graph presented as a stream.A specific highlight of our work is the first algorithm for the number of distinct elements in a data stream that achieves arbitrary approximation factors. (Independently, Trevisan [Tre01] has solved this problem via a different approach; our algorithm has the advantage of being list-efficient.)",
"The problem of counting the number of triangles in a graph has gained importance in the last few years due to its importance in many data mining applications. Recently, Tsourakakis et al. proposed DOULION, which is based on a simple sampling idea but works very well on many of the important graphs. In this preliminary report, we show that DOULION may not be very correct on special cases of graphs and argue that it may not fulfill the main purpose of the triangle counting problem for real-world graphs. We then present improvements on DOULION and show that it works better, much better in some cases, than DOULION.",
"Modern search engines answer keyword-based queries extremely efficiently. The impressive speed is due to clever inverted index structures, caching, a domain-independent knowledge of strings, and thousands of machines. Several research efforts have attempted to generalize keyword search to keytree and keygraph searching, because trees and graphs have many applications in next-generation database systems. This paper surveys both algorithms and applications, giving some emphasis to our own work.",
"",
"In this note we introduce a new randomized algorithm for counting triangles in graphs. We show that under mild conditions, the estimate of our algorithm is strongly concentrated around the true number of triangles. Specifically, let G be a graph with n vertices, t triangles and let Δ be the maximum number of triangles an edge of G is contained in. Our randomized algorithm colors the vertices of G with N = 1/p colors uniformly at random, counts monochromatic triangles, i.e., triangles whose vertices have the same color, and scales that count appropriately. We show that if p ≥ max(Δ log n / t, log n / t) then for any constant ε > 0 our unbiased estimate T is concentrated around its expectation, i.e., Pr[|T − E[T]| ≥ εE[T]] = o(1). Finally, our algorithm is amenable to being parallelized. We present a simple MapReduce implementation of our algorithm."
]
}
|
1202.4144
|
2066630725
|
The KE inference system is a tableau method developed by Marco Mondadori which was presented as an improvement, in the computational efficiency sense, over Analytic Tableaux. In the literature, there is no description of a theorem prover based on the KE method for the C1 paraconsistent logic. Paraconsistent logics have several applications, such as in robot control and medicine. These applications could benefit from the existence of such a prover. We present a sound and complete KE system for C1, an informal specification of a strategy for the C1 prover as well as problem families that can be used to evaluate provers for C1. The C1KE system and the strategy described in this paper will be used to implement a KE-based prover for C1, which will be useful for those who study and apply paraconsistent logics.
|
Another tableau system appears in @cite_19 . It was obtained by using a general method for constructing tableau systems @cite_18 . Although this system has a PB branching rule, a feature of KE systems, it is not a KE system: to be a KE system it should have only one branching rule, but it has 8 branching rules. Just like the system in @cite_20 , it is based on AT. However, it does not have rules that lead to infinite loops. We do not know of any implementation of this method.
|
{
"cite_N": [
"@cite_19",
"@cite_18",
"@cite_20"
],
"mid": [
"1481398220",
"2148502788",
"1536962450"
],
"abstract": [
"According to the presupposition of classical consistency, contradictions have an explosive character: once they are present in a theory, everything goes, and no sensible reasoning can then take place. A logic is paraconsistent if it rejects such a presupposition and accepts instead that some inconsistent yet non-trivial theories make perfect sense. Logics of Formal Inconsistency, LFIs, form a class of particularly expressive paraconsistent logics in which the meta-theoretical notion of consistency can be internalized at the object-language level. As a consequence, LFIs are able to recapture consistent reasoning by the addition of appropriate consistency assumptions. Thus, for example, while classical rules such as disjunctive syllogism (from A and (not-A)-or-B, infer B) are bound to fail in a paraconsistent logic (since A and (not-A) could both be true for some A, independently of B), they can be recovered by an LFI if the set of premises is enlarged by the presumption that we are reasoning in a consistent environment (in this case, by the addition of (consistent-A) as an additional hypothesis of the rule). The present monograph introduces the LFIs and presents several illustrations of these logics and of their properties, showing that such logics constitute in fact the majority of the paraconsistent systems in the literature. Several ways of effecting the recapture of consistent reasoning within such inconsistent systems are also illustrated. In each case, interpretations in terms of many-valued, possible-translations, or modal semantics are provided, and the problems related to providing algebraic counterparts for such logics are examined. An abstract formal approach is proposed for all the related definitions, and an extensive investigation is carried out into the logical principles and the positive and negative properties of negation.",
"The Polish logician Roman Suszko has extensively pleaded in the 1970s for a restatement of the notion of many-valuedness. According to him, as he would often repeat, “there are but two logical values, true and false.” As a matter of fact, a result by Wojcicki-Lindenbaum shows that any tarskian logic has a many-valued semantics, and results by Suszko-da Costa-Scott show that any many-valued semantics can be reduced to a two-valued one. So, why should one even consider using logics with more than two values? Because, we argue, one has to decide how to deal with bivalence and settle down the trade-off between logical 2-valuedness and truth-functionality, from a pragmatical standpoint.",
""
]
}
|
1202.4144
|
2066630725
|
The KE inference system is a tableau method developed by Marco Mondadori which was presented as an improvement, in the computational efficiency sense, over Analytic Tableaux. In the literature, there is no description of a theorem prover based on the KE method for the C1 paraconsistent logic. Paraconsistent logics have several applications, such as in robot control and medicine. These applications could benefit from the existence of such a prover. We present a sound and complete KE system for C1, an informal specification of a strategy for the C1 prover as well as problem families that can be used to evaluate provers for C1. The C1KE system and the strategy described in this paper will be used to implement a KE-based prover for C1, which will be useful for those who study and apply paraconsistent logics.
|
In @cite_7 , tableau systems for several logics of the @math hierarchy were presented. The tableau system presented there is also based on AT. While in the previous systems @math was applied whenever necessary to generate the branches of the tableau, this system has specific rules to directly deal with all operators, including @math . However, as it is based on the analytic tableau method, it also has too many (six) branching rules. We also do not know of any implementation of this method.
|
{
"cite_N": [
"@cite_7"
],
"mid": [
"2057199660"
],
"abstract": [
"In this paper we present a new hierarchy of analytic tableaux systems TNDCn, 1 ≤ n < ω, as primitive operators, differently to what has been done in the literature, where these operators are usually defined operators. We prove a version of the Cut Rule for the TNDCn, 1 ≤ n < ω, and also prove that these systems are logically equivalent to the corresponding systems Cn, 1 ≤ n < ω. The systems TNDCn constitute completely automated theorem-proving systems for the systems of da Costa's hierarchy Cn, 1 ≤ n < ω."
]
}
|
1202.4144
|
2066630725
|
The KE inference system is a tableau method developed by Marco Mondadori which was presented as an improvement, in the computational efficiency sense, over Analytic Tableaux. In the literature, there is no description of a theorem prover based on the KE method for the C1 paraconsistent logic. Paraconsistent logics have several applications, such as in robot control and medicine. These applications could benefit from the existence of such a prover. We present a sound and complete KE system for C1, an informal specification of a strategy for the C1 prover as well as problem families that can be used to evaluate provers for C1. The C1KE system and the strategy described in this paper will be used to implement a KE-based prover for C1, which will be useful for those who study and apply paraconsistent logics.
|
In @cite_19 , another tableau system (obtained via a uniform procedure) was presented. It is based on analytic tableaux and has too many branching rules.
|
{
"cite_N": [
"@cite_19"
],
"mid": [
"1481398220"
],
"abstract": [
"According to the presupposition of classical consistency, contradictions have an explosive character: once they are present in a theory, everything goes, and no sensible reasoning can then take place. A logic is paraconsistent if it rejects such a presupposition and accepts instead that some inconsistent yet non-trivial theories make perfect sense. Logics of Formal Inconsistency, LFIs, form a class of particularly expressive paraconsistent logics in which the meta-theoretical notion of consistency can be internalized at the object-language level. As a consequence, LFIs are able to recapture consistent reasoning by the addition of appropriate consistency assumptions. Thus, for example, while classical rules such as disjunctive syllogism (from A and (not-A)-or-B, infer B) are bound to fail in a paraconsistent logic (since A and (not-A) could both be true for some A, independently of B), they can be recovered by an LFI if the set of premises is enlarged by the presumption that we are reasoning in a consistent environment (in this case, by the addition of (consistent-A) as an additional hypothesis of the rule). The present monograph introduces the LFIs and presents several illustrations of these logics and of their properties, showing that such logics constitute in fact the majority of the paraconsistent systems in the literature. Several ways of effecting the recapture of consistent reasoning within such inconsistent systems are also illustrated. In each case, interpretations in terms of many-valued, possible-translations, or modal semantics are provided, and the problems related to providing algebraic counterparts for such logics are examined. An abstract formal approach is proposed for all the related definitions, and an extensive investigation is carried out into the logical principles and the positive and negative properties of negation."
]
}
|
1202.4425
|
2162652552
|
A relay channel with orthogonal components in which the destination is affected by an interference signal that is non-causally available only at the source is studied. The interference signal has structure in that it is produced by another transmitter communicating with its own destination. Moreover, the interferer is not willing to adjust its communication strategy to minimize the interference. Knowledge of the interferer's signal may be acquired by the source, for instance, by exploiting HARQ retransmissions on the interferer's link. The source can then utilize the relay not only for communicating its own message, but also for cooperative interference mitigation at the destination by informing the relay about the interference signal. Proposed transmission strategies are based on partial decode-and-forward (PDF) relaying and leverage the interference structure. Achievable schemes are derived for discrete memoryless models, Gaussian and Ricean fading channels. Furthermore, optimal strategies are identified in some special cases. Finally, numerical results bring insight into the advantages of utilizing the interference structure at the source, relay or destination.
|
Extensions to the multiuser case were carried out by Gel'fand and Pinsker in @cite_7 and by Kim in @cite_8 @cite_24 . In particular, in @cite_8 @cite_24 it is proved that for MACs, multi-user versions of GP and DPC, referred to as multi-user GP (MU-GP) and multi-user DPC (MU-DPC) respectively, achieve optimal performance. In @cite_11 , Somekh-Baruch considered a memoryless two-user MAC with the state available only to one of the encoders. The capacity region is shown to be achieved by generalized GP (GGP) and generalized DPC (GDPC). The scenario studied in this paper, but with an i.i.d. state, is investigated in @cite_15 @cite_4 for Discrete Memoryless (DM) and Gaussian relay channels with an in-band relay, and in @cite_21 @cite_3 for DM and Gaussian relay channels with an out-of-band relay, where lower and upper bounds on the capacity are derived.
|
{
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_8",
"@cite_21",
"@cite_3",
"@cite_24",
"@cite_15",
"@cite_11"
],
"mid": [
"2070085078",
"",
"2127154945",
"2166002010",
"2102370203",
"",
"2090210622",
"1970816336"
],
"abstract": [
"We consider a state-dependent three-terminal full-duplex relay channel with the channel states noncausally available at only the source, that is, neither at the relay nor at the destination. This model has application to cooperation over certain wireless channels with asymmetric cognition capabilities and cognitive interference relay channels. We establish lower bounds on the channel capacity for both discrete memoryless (DM) and Gaussian cases. For the DM case, the coding scheme for the lower bound uses techniques of rate-splitting at the source, decode-and-forward (DF) relaying, and a Gel'fand-Pinsker-like binning scheme. In this coding scheme, the relay decodes only partially the information sent by the source. Due to the rate-splitting, this lower bound is better than the one obtained by assuming that the relay decodes all the information from the source, that is, full-DF. For the Gaussian case, we consider channel models in which each of the relay node and the destination node experiences on its link an additive Gaussian outside interference. We first focus on the case in which the links to the relay and to the destination are corrupted by the same interference; and then we focus on the case of independent interferences. We also discuss a model with correlated interferences. For each of the first two models, we establish a lower bound on the channel capacity. The coding schemes for the lower bounds use techniques of dirty paper coding or carbon copying onto dirty paper, interference reduction at the source and decode-and-forward relaying. The results reveal that, by opposition to carbon copying onto dirty paper and its root Costa's initial dirty paper coding (DPC), it may be beneficial in our setup that the informed source uses a part of its power to partially cancel the effect of the interference so that the uninformed relay benefits from this cancellation, and so the source benefits in turn.",
"",
"Writing on dirty paper, a variation of the standard additive white Gaussian noise (AWGN) channel with the channel output is considered. The Gaussian noise noncausally known at the transmitter does not affect the capacity of the AWGN channel. This is known as WDP property. The WDP property for three Gaussian multiple user channels with known capacity region is established. The achievable rate regions of corresponding discrete channels is proved. Although those regions may be suboptimal in general, they turn out to be optimal for these Gaussian channels. Alternative proofs can be obtained by successive uses of Costa's WDP coding scheme, from which the WDP property can be established for more general noise and state distributions.",
"We study the capacity of a class of state-controlled relay channels with orthogonal channels from the source to the relay and from the source and relay to the destination. The channel states are assumed to be known, non-causally, to only the source. This model is useful for relaying in the context of cognition and certain interference-aware networks. For the discrete memoryless case, we establish lower bounds on the channel capacity. For the memoryless Gaussian case, we establish lower and upper bounds on the channel capacity. The upper bound is strictly better than the cut-set upper bound, and is tight for certain special cases.",
"We consider a relay channel in the presence of interference which is non-causally available only at the source. The interference signal may have structure, for example it could come from another source communicating with its own destination. However, the external interferer is not willing to adjust its communication strategy to minimize the interference and is considered to be fixed. Two approaches are possible to mitigate the interference: exploiting the structure or treating the interference as unstructured. Using these approaches, we establish bounds for the Gaussian relay channel with orthogonal components and discuss to the importance of exploiting the interference structure in multi-terminal scenarios.",
"",
"We consider a three-terminal state-dependent relay channel with the channel state available non-causally at only the source. Such a model may be of interest for node cooperation in the framework of cognition, i.e., collaborative signal transmission involving cognitive and non-cognitive radios. We study the capacity of this communication model. One principal problem in this setup is caused by the relay's not knowing the channel state. In the discrete memoryless (DM) case, we establish lower bounds on channel capacity. For the Gaussian case, we derive lower and upper bounds on the channel capacity. The upper bound is strictly better than the cut-set upper bound. We show that one of the developed lower bounds comes close to the upper bound, asymptotically, for certain ranges of rates.",
"We generalize the Gel'fand-Pinsker model to encompass the setup of a memoryless multiple-access channel (MAC). According to this setup, only one of the encoders knows the state of the channel (noncausally), which is also unknown to the receiver. Two independent messages are transmitted: a common message and a message transmitted by the informed encoder. We find explicit characterizations of the capacity region with both noncausal and causal state information. Further, we study the noise-free binary case, and we also apply the general formula to the Gaussian case with noncausal channel state information, under an individual power constraint as well as a sum power constraint. In this case, the capacity region is achievable by a generalized writing-on-dirty-paper scheme."
]
}
|
1202.4425
|
2162652552
|
A relay channel with orthogonal components in which the destination is affected by an interference signal that is non-causally available only at the source is studied. The interference signal has structure in that it is produced by another transmitter communicating with its own destination. Moreover, the interferer is not willing to adjust its communication strategy to minimize the interference. Knowledge of the interferer's signal may be acquired by the source, for instance, by exploiting HARQ retransmissions on the interferer's link. The source can then utilize the relay not only for communicating its own message, but also for cooperative interference mitigation at the destination by informing the relay about the interference signal. Proposed transmission strategies are based on partial decode-and-forward (PDF) relaying and leverage the interference structure. Achievable schemes are derived for discrete memoryless models, Gaussian and Ricean fading channels. Furthermore, optimal strategies are identified in some special cases. Finally, numerical results bring insight into the advantages of utilizing the interference structure at the source, relay or destination.
|
With a single dominating interferer, the interference structure can be utilized. This was recognized in @cite_18 , where a scenario in which a transmitter-receiver pair communicates in the presence of a single interferer is studied. It is shown therein that using GP coding, and hence treating the interference as if it were unstructured, is generally suboptimal, and that joint decoding at the destination can be beneficial @cite_29 . This aspect is further studied in @cite_28 for a MAC with structured interference available at one encoder, in @cite_3 for a Gaussian relay channel with an out-of-band relay, and in @cite_27 for a cognitive Z-interference channel, where extensions of the techniques proposed in @cite_18 are investigated.
|
{
"cite_N": [
"@cite_18",
"@cite_28",
"@cite_29",
"@cite_3",
"@cite_27"
],
"mid": [
"2540574196",
"2064119878",
"2014103254",
"2102370203",
"2962927763"
],
"abstract": [
"Motivated by cognitive radio applications, we consider mitigating the effect of interference by exploiting known properties about its signal structure. Specifically, we analyze communication between a source and destination with an interferer that induces random variations in the source-destination channel. The interferer transmits a sequence chosen uniformly from a randomly generated codebook, which has an i.i.d. structure, or a superposition structure. It is assumed that both the encoder and decoder know the interferer's codebook. We first provide a definition of capacity for these settings. When the encoder knows the interferer's message noncausally, it can use Gel'fand-Pinsker (GP) encoding to precode against interference. Alternatively, it can encode by taking into account that the interference is a codeword, to enable the decoder to decode both messages. It is demonstrated by an example that the latter can outperform GP encoding. Two upper bounds to the performance of this channel are then presented. Next, a more realistic scenario is considered in which the interference is learned at the cognitive encoder causally through a noisy channel. It is shown that for the case of i.i.d. generated interference, this information has no value. In contrast, when the interference is a codeword from an i.i.d. generated codebook, this fact can be exploited to obtain higher rates between the cognitive pair.",
"Consider an additive Gaussian noise channel affected by an additive interference sequence, taken from a given codebook, which is known non-causally at the transmitter (e.g., via prior decoding). It is known that in this case optimal performance is attained by Dirty Paper Coding, which treats the interference signal as unstructured. In other words, for this example, the knowledge of the specific interferer's codebook at the decoder is not useful in terms of capacity. In this paper, two variations of this basic scenario are presented in which treating interference as unstructured is instead generally suboptimal. In the first case, a second encoder of the source message is present in the system that is not aware of the interferer's sequence, and source and interference messages are uncorrelated; in the second case, the sources encoded by the informed transmitter and interferer are correlated (and an uninformed encoder may or may not be present). Results are given in terms of conditions for achievability for both discrete and Gaussian models of the scenarios discussed above, and corroborated by numerical results. Optimal strategies are also identified in special cases. The conclusions herein point to the importance of exploiting the interference structure in multiterminal and source-channel coding scenarios.",
"We consider one-sided interference channel with two independent source-destination pairs, and a cognitive relay with only a link to the destination that observes interference. We first assume the relay non-causally obtains the signal messages (message cognitive) whereas the sources do not cooperate, and provide a general achievable rate region for the Gaussian case. We further analyze a more realistic set-up where instead of source messages, the relay knows the transmitted signals of both sources and is unaware of the codebooks. We investigate the achievable region under this signal cognitive assumption and further obtain a converse, thereby the capacity region under strong relay-interference conditions. Our results suggest that while relaying is usually utilized as an effective technique for boosting received signal quality, in the presence of interference, an equally important task of the relay is to manage the interference.",
"We consider a relay channel in the presence of interference which is non-causally available only at the source. The interference signal may have structure, for example it could come from another source communicating with its own destination. However, the external interferer is not willing to adjust its communication strategy to minimize the interference and is considered to be fixed. Two approaches are possible to mitigate the interference: exploiting the structure or treating the interference as unstructured. Using these approaches, we establish bounds for the Gaussian relay channel with orthogonal components and discuss the importance of exploiting the interference structure in multi-terminal scenarios.",
"We study the discrete memoryless Z-interference channel where the transmitter of the pair that suffers from interference is cognitive. We first provide an outer bound on the capacity region of this channel. We then show that, when the channel of the transmitter-receiver pair that does not experience interference is deterministic and invertible, our proposed outer bound matches the best known inner bound. The obtained results imply that in the considered channel, superposition encoding at the noncognitive transmitter as well as Gel'fand-Pinsker encoding at the cognitive transmitter is needed in order to minimize the impact of interference. As a byproduct of the obtained capacity region, we obtain the capacity under the generalized Gel'fand-Pinsker setting where a transmitter-receiver pair communicates in the presence of interference noncausally known at the encoder."
]
}
|
1202.3533
|
2096394614
|
We present examples of agent-based and stochastic models of competition and business processes in economics and finance. We start from as simple as possible models, which have microscopic, agent-based, versions and macroscopic treatment in behavior. Microscopic and macroscopic versions of herding model proposed by Kirman and Bass diffusion of new products are considered in this contribution as two basic ideas. Further we demonstrate that general herding behavior can be considered as a background of nonlinear stochastic model of financial fluctuations.
|
In recent decades there have been many attempts to create an agent-based model of the financial markets, yet no model so far is both realistic and tractable enough to be considered ideal @cite_29 . One of the best examples of a realistic model is the so-called Lux and Marchesi model @cite_20 , which is heavily based on behavioral economics ideas, mathematically formulated as utility functions for the agents in the market; thus it is considered very reasonable and realistic @cite_29 . Yet this model has too many parameters and too complex an agent interaction mechanism to be analytically tractable. Another example of a very complex agent-based model is Bornholdt's spin model @cite_17 @cite_7 , which is based on a certain interpretation of the well-known Ising model (for details on the original model see any handbook on statistical physics, e.g., @cite_1 ).
|
{
"cite_N": [
"@cite_7",
"@cite_29",
"@cite_1",
"@cite_20",
"@cite_17"
],
"mid": [
"",
"1524135913",
"1526354953",
"1537415400",
"2005611838"
],
"abstract": [
"",
"We present an overview of some representative Agent-Based Models in Economics. We discuss why and how agent-based models represent an important step in order to explain the dynamics and the statistical properties of financial markets beyond the Classical Theory of Economics. We perform a schematic analysis of several models with respect to some specific key categories such as agents' strategies, price evolution, number of agents, etc. In the conclusive part of this review we address some open questions and future perspectives and highlight the conceptual importance of some usually neglected topics, such as non-stationarity and the self-organization of financial markets.",
"1. What is Statistical Mechanics? 2. Random walks and emergent properties 3. Temperature and equilibrium 4. Phase-space dynamics and ergodicity 5. Entropy 6. Free Energies 7. Quantum statistical mechanics 8. Calculation and computation 9. Order parameters, broken symmetry, and topology 10. Correlations, response, and dissipation 11. Abrupt phase transitions 12. Continuous phase transitions APPENDIX: FOURIER METHODS",
"This paper reports statistical analyses performed on simulated data from a stochastic multi-agent model of speculative behaviour in a financial market. The price dynamics resulting from this artificial market process exhibits the same type of scaling laws as do empirical data from stock markets and foreign exchange markets: (i) one observes scaling in the probability distribution of relative price changes with a Pareto exponent around 2.6, (ii) volatility shows significant long-range correlation with a self-similarity parameter H around 0.85. This happens although we assume that news about the intrinsic or fundamental value of the asset follows a white noise process and, hence, incorporation of news about fundamental factors is insufficient to explain either of the characteristics (i) or (ii). As a consequence, in our model, the main stylised facts of financial data originate from the working of the market itself and one need not resort to scaling in unobservable extraneous signals as an explanation of the source of scaling in financial prices. The emergence of power laws can be explained by the existence of a critical state which is repeatedly approached in the course of the system's development.",
"A simple spin model is studied, motivated by the dynamics of traders in a market, where expectation bubbles and crashes occur. The dynamics is governed by interactions, which are frustrated across different scales: while ferromagnetic couplings connect each spin to its local neighborhood, an additional coupling relates each spin to the global magnetization. This new coupling is allowed to be anti-ferromagnetic. The resulting frustration causes a metastable dynamics with intermittency and phases of chaotic dynamics. The model reproduces main observations of real economic markets as power-law distributed returns and clustered volatility."
]
}
|
1202.3533
|
2096394614
|
We present examples of agent-based and stochastic models of competition and business processes in economics and finance. We start from as simple as possible models, which have microscopic, agent-based, versions and macroscopic treatment in behavior. Microscopic and macroscopic versions of herding model proposed by Kirman and Bass diffusion of new products are considered in this contribution as two basic ideas. Further we demonstrate that general herding behavior can be considered as a background of nonlinear stochastic model of financial fluctuations.
|
Some might argue that agent-based models need not be analytically tractable and in fact are best suited to model phenomena that are too complex to be described analytically @cite_87 . But recent developments show that many groups attempt to build a bridge between microscopic and macroscopic models. Possibly one of the earliest attempts started from the not very realistic, nor tractable, ``El Farol bar problem'' @cite_16 . This simple model quickly became known as the Minority Game @cite_3 and over a few years received an analytic treatment @cite_60 . Another prominent simple agent-based model was created by Kirman @cite_8 ; it gained broader attention only very recently @cite_38 @cite_85 @cite_2 @cite_57 . In @cite_75 we gave this model an extended analytical treatment and showed that it coincides with some prominent macroscopic, namely stochastic, models of the financial markets (see Section of this work for more details). Another interesting development followed the aforementioned Bornholdt spin model, which has recently received an analytical treatment via the mean-field formalism @cite_4 .
|
{
"cite_N": [
"@cite_38",
"@cite_4",
"@cite_60",
"@cite_8",
"@cite_87",
"@cite_85",
"@cite_3",
"@cite_57",
"@cite_2",
"@cite_16",
"@cite_75"
],
"mid": [
"1974765950",
"2068054027",
"2015663875",
"2098201978",
"2150704630",
"2151287978",
"2091653681",
"",
"2097397187",
"1575989700",
"2192169618"
],
"abstract": [
"The behavioral origins of the stylized facts of financial returns have been addressed in a growing body of agent-based models of financial markets. While the traditional efficient market viewpoint explains all statistical properties of returns by similar features of the news arrival process, the more recent behavioral finance models explain them as imprints of universal patterns of interaction in these markets. In this paper we contribute to this literature by introducing a very simple agent-based model in which the ubiquitous stylized facts (fat tails, volatility clustering) are emergent properties of the interaction among traders. The simplicity of the model allows us to estimate the underlying parameters, since it is possible to derive a closed form solution for the distribution of returns. We show that the tail shape characterizing the fatness of the unconditional distribution of returns can be directly derived from some structural variables that govern the traders' interactions, namely the herding propensity and the autonomous switching tendency.",
"We analyze a kinetic Ising model with suppressed bulk noise which is a prominent representative of the generalized voter model phase transition. On the one hand we discuss the model in the context of social systems, and opinion formation in the presence of a tunable social temperature. On the other hand we characterize the abrupt phase transition. The system shows non-equilibrium dynamics in the presence of absorbing states. We slightly change the system to get a stationary state model variant exhibiting the same kind of phase transition. Using a Fokker-Planck description and comparing to mean field calculations, we investigate the phase transition, finite size effects and the effect of the absorbing states resulting in a dynamic slowing down.",
"We study analytically a simple game theoretical model of heterogeneous interacting agents. We show that the stationary state of the system is described by the ground state of a disordered spin model which is exactly solvable within the simple replica symmetric ansatz. Such a stationary state differs from the Nash equilibrium where each agent maximizes her own utility. The latter turns out to be characterized by a replica symmetry broken structure. Numerical results fully agree with our analytical findings.",
"This paper offers an explanation of behavior that puzzled entomologists and economists. Ants, faced with two identical food sources, were observed to concentrate more on one of these, but after a period they would turn their attention to the other. The same phenomenon has been observed in humans choosing between restaurants. After discussing the nature of foraging and recruitment behavior in ants, a simple model of stochastic recruitment is suggested. This explains the \"herding\" and \"epidemics\" described in the literature on financial markets as corresponding to the equilibrium distribution of a stochastic process rather than to switching between multiple equilibria.",
"Agent-based modeling is a powerful simulation modeling technique that has seen a number of applications in the last few years, including applications to real-world business problems. After the basic principles of agent-based simulation are briefly introduced, its four areas of application are discussed by using real-world applications: flow simulation, organizational simulation, market simulation, and diffusion simulation. For each category, one or several business applications are described and analyzed.",
"A growing body of recent literature allows for heterogenous trading strategies and limited rationality of agents in behavioral models of financial markets. More and more, this literature has been concerned with the explanation of some of the stylized facts of financial markets. It now seems that some previously mysterious time-series characteristics like fat tails of returns and temporal dependence of volatility can be observed in many of these models as macroscopic patterns resulting from the interaction among different groups of speculative traders. However, most of the available evidence stems from simulation studies of relatively complicated models which do not allow for analytical solutions. In this paper, this line of research is supplemented by analytical solutions of a simple variant of the seminal herding model introduced by Kirman [1993]. Embedding the herding framework into a simple equilibrium asset pricing model, we are able to derive closed-form solutions for the time-variation of higher moments as well as related quantities of interest enabling us to spell out under what circumstances the model gives rise to realistic behavior of the resulting time series",
"A binary game is introduced and analysed. N players have to choose one of the two sides independently and those on the minority side win. Players use a finite set of ad hoc strategies to make their decision, based on the past record. The analysing power is limited and can adapt when necessary. Interesting cooperation and competition patterns of the society seem to arise and to be responsive to the payoff function.",
"",
"We introduce a minimal Agent Based Model for financial markets to understand the nature and Self-Organization of the Stylized Facts. The model is minimal in the sense that we try to identify the essential ingredients to reproduce the main most important deviations of price time series from a Random Walk behavior. We focus on four essential ingredients: fundamentalist agents which tend to stabilize the market; chartist agents which induce destabilization; analysis of price behavior for the two strategies; herding behavior which governs the possibility of changing strategy. Bubbles and crashes correspond to situations dominated by chartists, while fundamentalists provide a long time stability (on average). The Stylized Facts are shown to correspond to an intermittent behavior which occurs only for a finite value of the number of agents N. Therefore they correspond to finite size effect which, however, can occur at different time scales. We propose a new mechanism for the Self-Organization of this state which is linked to the existence of a threshold for the agents to be active or not active. The feedback between price fluctuations and number of active agents represent a crucial element for this state of Self-Organized-Intermittency. The model can be easily generalized to consider more realistic variants.",
"",
"We extend Kirman’s model by introducing variable event time scale. The proposed flexible time scale is equivalent to the variable trading activity observed in financial markets. Stochastic version of the extended Kirman’s agent based model is compared to the non-linear stochastic models of long-range memory in financial markets. The agent based model providing matching macroscopic description serves as a microscopic reasoning of the earlier proposed stochastic model exhibiting power law statistics."
]
}
|
1202.3533
|
2096394614
|
We present examples of agent-based and stochastic models of competition and business processes in economics and finance. We start from as simple as possible models, which have microscopic, agent-based, versions and macroscopic treatment in behavior. Microscopic and macroscopic versions of herding model proposed by Kirman and Bass diffusion of new products are considered in this contribution as two basic ideas. Further we demonstrate that general herding behavior can be considered as a background of nonlinear stochastic model of financial fluctuations.
|
Our work in the modeling of complex social and economic systems began with applications of nonlinear stochastic differential equations (abbr. SDE) seeking to reproduce the statistics of financial market data. The proposed class of equations exhibits power-law statistics evidently very similar to those observed in the empirical data. As all of this work (for a broad review see @cite_12 ) relied on macroscopic phenomenological reasoning, we are now motivated to find a microscopic justification for the proposed equations. Developing macroscopic treatments for well-established agent-based models appears to be the most consistent approach, as moving in the opposite direction seems to be a very complex and ambiguous task. Thus we decided to select simple agent-based models that admit an expected macroscopic description. In this contribution we present a few examples of agent-based modeling, based on Kirman's model, in business and finance, showing that these examples have useful and informative macroscopic treatments.
|
{
"cite_N": [
"@cite_12"
],
"mid": [
"1559099513"
],
"abstract": [
"Volatility clustering, evaluated through slowly decaying auto-correlations, Hurst effect or 1 f noise for absolute returns, is a characteristic property of most financial assets return time series (1999). Statistical analysis alone is not able to provide a definite answer for the presence or absence of long-range dependence phenomenon in stock returns or volatility, unless economic mechanisms are proposed to understand the origin of such phenomenon Cont (2005); (1999). Whether results of statistical analysis correspond to long-range dependence is a difficult question and subject to an ongoing statistical debate Cont (2005). Extensive empirical analysis of the financial market data, supporting the idea that the long-range volatility correlations arise from trading activity, provides valuable background for further development of the long-ranged memory stochastic models (2003); (2001). The power-law behavior of the auto-regressive conditional duration process Sato (2004) based on the random multiplicative process and its special case the self-modulation process Takayasu (2003), exhibiting 1 f fluctuations, supported the idea of stochastic modeling with a power-law probability density function (PDF) and long-range memory. Thus the agent based economic models Kirman & Teyssiere (2002); Lux & Marchesi (2000) as well as the stochastic models Borland (2004); (2008; 2010); Queiros (2007) exhibiting long-range dependence phenomenon in volatility or trading volume are of great interest and remain an active topic of research. Properties of stochastic multiplicative point processes have been investigated analytically and numerically and the formula for the power spectrum has been derived Gontis & Kaulakys (2004). In the more recent papers (2006); Kaulakys A Ruseckas & Kaulakys (2010) the general form of the multiplicative stochastic differential equation (SDE) was derived in agreement with the model earlier proposed in Gontis & Kaulakys (2004). 
Since Gontis & Kaulakys (2004) a model of trading activity, based on a SDE driven Poisson-like process, was presented (2008) and in the most recent paper (2010) we proposed a double stochastic model, whose return time series yield two power-law statistics, i.e., the PDF and the power spectral density (PSD) of absolute return, mimicking the empirical data for the one-minute trading return in the NYSE. In this chapter we present theoretical arguments and empirical evidence for the non-linear double stochastic model of return in financial markets. With empirical data from NYSE and Vilnius Stock Exchange (VSE) demonstrating universal scaling of return statistical properties,"
]
}
|
1202.3533
|
2096394614
|
We present examples of agent-based and stochastic models of competition and business processes in economics and finance. We start from as simple as possible models, which have microscopic, agent-based, versions and macroscopic treatment in behavior. Microscopic and macroscopic versions of herding model proposed by Kirman and Bass diffusion of new products are considered in this contribution as two basic ideas. Further we demonstrate that general herding behavior can be considered as a background of nonlinear stochastic model of financial fluctuations.
|
Another interesting problem tackled in this work is related to the dynamics of intermittent behavior. This kind of behavior is observed in many different complex systems, ranging from geology (e.g., earthquakes @cite_23 ) and astronomy (e.g., sunspots @cite_40 ) to biology (e.g., neuron activity @cite_83 ) and finance @cite_68 . A great review of the universality of bursty behavior is given in @cite_53 and by Kleinberg @cite_62 . In @cite_53 bursting behavior is considered as a point process with a threshold mechanism. In this contribution we analyze a class of nonlinear SDEs exhibiting power-law statistics and bursting behavior, which was derived from the multiplicative point process @cite_65 @cite_33 @cite_82 with applications to the modeling of trading activity in financial markets @cite_74 @cite_66 . Via the hitting-time formalism @cite_42 @cite_37 @cite_77 , this provides a very general approach to modeling the bursty behavior of trading activity and absolute return in the financial markets @cite_6 .
|
{
"cite_N": [
"@cite_37",
"@cite_62",
"@cite_33",
"@cite_53",
"@cite_42",
"@cite_65",
"@cite_6",
"@cite_77",
"@cite_40",
"@cite_83",
"@cite_23",
"@cite_74",
"@cite_68",
"@cite_66",
"@cite_82"
],
"mid": [
"1484033701",
"1521478692",
"",
"2089803230",
"1588690085",
"2169558534",
"2234032764",
"",
"",
"2088503649",
"2037292155",
"1965313010",
"2100011707",
"2000265426",
"1985146307"
],
"abstract": [
"",
"A fundamental problem in text data mining is to extract meaningful structure from document streams that arrive continuously over time. E-mail and news articles are two natural examples of such streams, each characterized by topics that appear, grow in intensity for a period of time, and then fade away. The published literature in a particular research field can be seen to exhibit similar phenomena over a much longer time scale. Underlying much of the text mining work in this area is the following intuitive premise—that the appearance of a topic in a document stream is signaled by a “burst of activity,” with certain features rising sharply in frequency as the topic emerges. The goal of the present work is to develop a formal approach for modeling such “bursts,” in such a way that they can be robustly and efficiently identified, and can provide an organizational framework for analyzing the underlying content. The approach is based on modeling the stream using an infinite-state automaton, in which bursts appear naturally as state transitionss it can be viewed as drawing an analogy with models from queueing theory for bursty network traffic. The resulting algorithms are highly efficient, and yield a nested representation of the set of bursts that imposes a hierarchical structure on the overall stream. Experiments with e-mail and research paper archives suggest that the resulting structures have a natural meaning in terms of the content that gave rise to them.",
"",
"Inhomogeneous temporal processes, like those appearing in human communications, neuron spike trains, and seismic signals, consist of high-activity bursty intervals alternating with long low-activity periods. In recent studies such bursty behavior has been characterized by a fat-tailed inter-event time distribution, while temporal correlations were measured by the autocorrelation function. However, these characteristic functions are not capable to fully characterize temporally correlated heterogenous behavior. Here we show that the distribution of the number of events in a bursty period serves as a good indicator of the dependencies, leading to the universal observation of power-law distribution for a broad class of phenomena. We find that the correlations in these quite different systems can be commonly interpreted by memory effects and described by a simple phenomenological model, which displays temporal behavior qualitatively similar to that in real systems.",
"Stochastic processes of common use in mathematical finance are presented throughout this book, which consists of eleven chapters, interlacing on the one hand financial concepts and instruments, such as arbitrage opportunities, admissible strategies, contingent claims, option pricing, default risk, ruin, and on the other hand, Brownian motion, diffusion processes, Levy processes, together with the basic properties of these processes. The first half of the book is devoted to continuous path processes whereas the second half deals with discontinuous processes. Only basic knowledge of probability theory is assumed; the book is organized so that the mathematical facts pertaining to a given financial question are gathered close to the study of that question.",
"Signals consisting of a sequence of pulses show that inherent origin of the 1 f noise is a Brownian fluctuation of the average interevent time between subsequent pulses of the pulse sequence. In this paper, we generalize the model of interevent time to reproduce a variety of self-affine time series exhibiting power spectral density S(f) scaling as a power of the frequency f. Furthermore, we analyze the relation between the power-law correlations and the origin of the power-law probability distribution of the signal intensity. We introduce a stochastic multiplicative model for the time intervals between point events and analyze the statistical properties of the signal analytically and numerically. Such model system exhibits power-law spectral density S(f)∼1 fβ for various values of β, including β=12, 1 and 32. Explicit expressions for the power spectra in the low-frequency limit and for the distribution density of the interevent time are obtained. The counting statistics of the events is analyzed analytically and numerically, as well. The specific interest of our analysis is related with the financial markets, where long-range correlations of price fluctuations largely depend on the number of transactions. We analyze the spectral density and counting statistics of the number of transactions. The model reproduces spectral properties of the real markets and explains the mechanism of power-law distribution of trading activity. The study provides evidence that the statistical properties of the financial markets are enclosed in the statistics of the time interval between trades. A multiplicative point process serves as a consistent model generating this statistics.",
"",
"",
"",
"To assess sympathetic variability in chronic heart failure (CHF), we evaluated a distribution of inter-spike intervals (ISIs) in renal sympathetic nerve activity (RSNA) in salt-sensitive hypertension-induced CHF (DSSH-CHF) rats. Dahl salt-sensitive rats were fed an 8% NaCl diet for 9 weeks to induce salt-sensitive hypertension-induced CHF. ISIs in RSNA were obtained from chronically instrumented conscious rats, and counts (frequency) and ranks of ISIs in RSNA were plotted with a histogram. We found that ISIs in RSNA followed a power-law distribution in rats, and the power-law distribution of ISIs for RSNA in DSSH-CHF rats was significantly different from that in normal rats. These results indicated that sympathetic variability may be significantly different between salt-sensitive hypertension-induced CHF and healthy individuals, which suggests that sympathetic variability may be used to predict abnormality of the sympathetic regulatory system.",
"Analyzing diverse seismic catalogs, we have determined that the probability densities of the earthquake recurrence times for different spatial areas and magnitude ranges can be described by a unique universal distribution if the time is rescaled with the rate of seismic occurrence, which therefore fully governs seismicity. The shape of the distribution shows the existence of clustering beyond the duration of aftershock bursts, and scaling reveals the self-similarity of the clustering structure in the space-time-magnitude domain. This holds from worldwide to local scales, for quite different tectonic environments and for all the magnitude ranges considered.",
"Earlier we proposed the stochastic point process model, which reproduces a variety of self-affine time series exhibiting power spectral density S(f) scaling as a power of the frequency f, and derived a stochastic differential equation with the same long-range memory properties. Here we present a stochastic differential equation as a dynamical model of the observed memory in the financial time series. The continuous stochastic process reproduces the statistical properties of the trading activity and serves as a background model for modeling waiting time, return and volatility. Empirically observed statistical properties -- exponents of the power-law probability distributions and power spectral density of the long-range memory financial variables -- are reproduced with the same values of a few model parameters.",
"We present a set of stylized empirical facts emerging from the statistical analysis of price variations in various types of financial markets. We first discuss some general issues common to all statistical studies of financial time series. Various statistical properties of asset returns are then described: distributional properties, tail properties and extreme fluctuations, pathwise regularity, linear and nonlinear dependence of returns in time and across stocks. Our description emphasizes properties common to a wide variety of markets and instruments. We then show how these statistical properties invalidate many of the common statistical approaches used to study financial data sets and examine some of the statistical problems encountered in each case.",
"We propose the point process model as the Poissonian-like stochastic sequence with slowly diffusing mean rate and adjust the parameters of the model to the empirical data of trading activity for 26 stocks traded on NYSE. The proposed scaled stochastic differential equation provides the universal description of the trading activities with the same parameters applicable for all stocks.",
"We provide evidence that for some values of the parameters a simple agent-based model, describing herding behavior, yields signals with 1/f power spectral density. We derive a non-linear stochastic differential equation for the ratio of number of agents and show that it has the form proposed earlier for modeling of 1/f noise with different exponents β. The non-linear terms in the transition probabilities, quantifying the herding behavior, are crucial to the appearance of 1/f noise. Thus, the herding dynamics can be seen as a microscopic explanation of the proposed non-linear stochastic differential equations generating signals with 1/f spectrum. We also consider the possible feedback of macroscopic state on microscopic transition probabilities, strengthening the non-linearity of equations and providing more opportunities in the modeling of processes exhibiting power-law statistics. Copyright © EPLA, 2011"
]
}
|
1202.3461
|
95171454
|
Sharing real-time aggregate statistics of private data is of great value to the public to perform data mining for understanding important phenomena, such as Influenza outbreaks and traffic congestion. However, releasing time-series data with standard differential privacy mechanism has limited utility due to high correlation between data values. We propose FAST, a novel framework to release real-time aggregate statistics under differential privacy based on filtering and adaptive sampling. To minimize the overall privacy cost, FAST adaptively samples long time-series according to the detected data dynamics. To improve the accuracy of data release per time stamp, FAST predicts data values at non-sampling points and corrects noisy observations at sampling points. Our experiments with real-world as well as synthetic data sets confirm that FAST improves the accuracy of released aggregates even under small privacy cost and can be used to enable a wide range of monitoring applications.
|
Time series data is pervasively encountered in the fields of engineering, science, sociology, and economics. Various techniques @cite_20 , such as ARIMA modeling, exponential smoothing, ARAR, and Holt-Winters methods, have been studied for time-series forecasting. @cite_38 studied the trade-offs between time-series compressibility and perturbation. They proposed two algorithms based on Fast Fourier Transform (FFT) and Discrete Wavelet Transform (DWT) respectively to perturb time-series frequencies. However, the additive noise they propose does not guarantee differential privacy, meaning it does not protect sensitive information from adversaries with strong background knowledge. Rastogi and Nath @cite_14 proposed a Discrete Fourier Transform (DFT) based algorithm which implements differential privacy by perturbing the discrete Fourier coefficients, but this algorithm cannot produce real-time private releases in a streaming environment. The recent works @cite_13 @cite_17 on continuous data streams define privacy so as to protect an event, i.e. one user's presence at a particular time point, rather than the presence of that user across the entire series. If one user contributes to the aggregate at time points @math , @math , and @math , event-level privacy hides the user's presence at only one of the three time points, leaving the other two open to attack.
|
{
"cite_N": [
"@cite_38",
"@cite_14",
"@cite_13",
"@cite_20",
"@cite_17"
],
"mid": [
"2143147232",
"2104803737",
"2012992615",
"2798056406",
""
],
"abstract": [
"In this paper we study the trade-offs between time series compressibility and partial information hiding and their fundamental implications on how we should introduce uncertainty about individual values by perturbing them. More specifically, if the perturbation does not have the same compressibility properties as the original data, then it can be detected and filtered out, reducing uncertainty. Thus, by making the perturbation \"similar\" to the original data, we can both preserve the structure of the data better, while simultaneously making breaches harder. However, as data become more compressible, a fraction of the uncertainty can be removed if true values are leaked, revealing how they were perturbed. We formalize these notions, study the above trade-offs on real data and develop practical schemes which strike a good balance and can also be extended for on-the-fly data hiding in a streaming environment.",
"We propose the first differentially private aggregation algorithm for distributed time-series data that offers good practical utility without any trusted server. This addresses two important challenges in participatory data-mining applications where (i) individual users collect temporally correlated time-series data (such as location traces, web history, personal health data), and (ii) an untrusted third-party aggregator wishes to run aggregate queries on the data. To ensure differential privacy for time-series data despite the presence of temporal correlation, we propose the Fourier Perturbation Algorithm (FPAk). Standard differential privacy techniques perform poorly for time-series data. To answer n queries, such techniques can result in a noise of Θ(n) to each query answer, making the answers practically useless if n is large. Our FPAk algorithm perturbs the Discrete Fourier Transform of the query answers. For answering n queries, FPAk improves the expected error from Θ(n) to roughly Θ(k) where k is the number of Fourier coefficients that can (approximately) reconstruct all the n query answers. Our experiments show that k ≪ n. To deal with the absence of a trusted central server, we propose the Distributed Laplace Perturbation Algorithm (DLPA) to add noise in a distributed way in order to guarantee differential privacy. To the best of our knowledge, DLPA is the first distributed differentially private algorithm that can scale with a large number of users: DLPA outperforms the only other distributed solution for differential privacy proposed so far, by reducing the computational load per user from O(U) to O(1) where U is the number of users.",
"We ask the question: how can Web sites and data aggregators continually release updated statistics, and meanwhile preserve each individual user’s privacy? Suppose we are given a stream of 0’s and 1’s. We propose a differentially private continual counter that outputs at every time step the approximate number of 1’s seen thus far. Our counter construction has error that is only poly-log in the number of time steps. We can extend the basic counter construction to allow Web sites to continually give top-k and hot items suggestions while preserving users’ privacy.",
"Preface 1 INTRODUCTION 1.1 Examples of Time Series 1.2 Objectives of Time Series Analysis 1.3 Some Simple Time Series Models 1.3.3 A General Approach to Time Series Modelling 1.4 Stationary Models and the Autocorrelation Function 1.4.1 The Sample Autocorrelation Function 1.4.2 A Model for the Lake Huron Data 1.5 Estimation and Elimination of Trend and Seasonal Components 1.5.1 Estimation and Elimination of Trend in the Absence of Seasonality 1.5.2 Estimation and Elimination of Both Trend and Seasonality 1.6 Testing the Estimated Noise Sequence 1.7 Problems 2 STATIONARY PROCESSES 2.1 Basic Properties 2.2 Linear Processes 2.3 Introduction to ARMA Processes 2.4 Properties of the Sample Mean and Autocorrelation Function 2.4.2 Estimation of @math and @math 2.5 Forecasting Stationary Time Series 2.5.3 Prediction of a Stationary Process in Terms of Infinitely Many Past Values 2.6 The Wold Decomposition 2.7 Problems 3 ARMA MODELS 3.1 ARMA( @math ) Processes 3.2 The ACF and PACF of an ARMA @math Process 3.2.1 Calculation of the ACVF 3.2.2 The Autocorrelation Function 3.2.3 The Partial Autocorrelation Function 3.3 Forecasting ARMA Processes 3.4 Problems 4 SPECTRAL ANALYSIS 4.1 Spectral Densities 4.2 The Periodogram 4.3 Time-Invariant Linear Filters 4.4 The Spectral Density of an ARMA Process 4.5 Problems 5 MODELLING AND PREDICTION WITH ARMA PROCESSES 5.1 Preliminary Estimation 5.1.1 Yule-Walker Estimation 5.1.3 The Innovations Algorithm 5.1.4 The Hannan-Rissanen Algorithm 5.2 Maximum Likelihood Estimation 5.3 Diagnostic Checking 5.3.1 The Graph of @math 5.3.2 The Sample ACF of the Residuals",
""
]
}
|
1202.3987
|
2951313964
|
Malware spread among websites and between websites and clients is an increasing problem. Search engines play an important role in directing users to websites and are a natural control point for intervening, using mechanisms such as blacklisting. The paper presents a simple Markov model of malware spread through large populations of websites and studies the effect of two interventions that might be deployed by a search provider: blacklisting infected web pages by removing them from search results entirely and a generalization of blacklisting, called depreferencing, in which a website's ranking is decreased by a fixed percentage each time period the site remains infected. We analyze and study the trade-offs between infection exposure and traffic loss due to false positives (the cost to a website that is incorrectly blacklisted) for different interventions. As expected, we find that interventions are most effective when websites are slow to remove infections. Surprisingly, we also find that low infection or recovery rates can increase traffic loss due to false positives. Our analysis also shows that heavy-tailed distributions of website popularity, as documented in many studies, lead to high sample variance of all measured outcomes. These results imply that it will be difficult to determine empirically whether certain website interventions are effective, and suggest that theoretical models such as the one described in this paper have an important role to play in improving web security.
|
There are many approaches to combating web-based malware, including the use of virtual machines or kernel extensions to check for suspicious changes to the operating system @cite_12 @cite_23 @cite_4 @cite_1 , emulating browsers to detect malicious JavaScript @cite_5 @cite_17 , and detecting campaigns that promote compromised sites to the top of search results @cite_9 . No technique is completely effective at disrupting web-based malware, according to a study of Google's data over more than four years @cite_22 . In our view, one limiting factor is the choice of conservative approaches that minimize false positives at the expense of speedy detection. For example, @cite_4 choose to minimize false positives in a system that allows explicit trade-offs between false and true positives.
|
{
"cite_N": [
"@cite_4",
"@cite_22",
"@cite_9",
"@cite_1",
"@cite_23",
"@cite_5",
"@cite_12",
"@cite_17"
],
"mid": [
"2095610745",
"",
"1843351773",
"2051498836",
"2111165162",
"1970867218",
"",
"58852127"
],
"abstract": [
"As the web continues to play an ever increasing role in information exchange, so too is it becoming the prevailing platform for infecting vulnerable hosts. In this paper, we provide a detailed study of the pervasiveness of so-called drive-by downloads on the Internet. Drive-by downloads are caused by URLs that attempt to exploit their visitors and cause malware to be installed and run automatically. Over a period of 10 months we processed billions of URLs, and our results show that a non-trivial amount, of over 3 million malicious URLs, initiate drive-by downloads. An even more troubling finding is that approximately 1.3% of the incoming search queries to Google's search engine returned at least one URL labeled as malicious in the results page. We also explore several aspects of the drive-by downloads problem. Specifically, we study the relationship between the user browsing habits and exposure to malware, the techniques used to lure the user into the malware distribution networks, and the different properties of these networks.",
"",
"We perform an in-depth study of SEO attacks that spread malware by poisoning search results for popular queries. Such attacks, although recent, appear to be both widespread and effective. They compromise legitimate Web sites and generate a large number of fake pages targeting trendy keywords. We first dissect one example attack that affects over 5,000 Web domains and attracts over 81,000 user visits. Further, we develop deSEO, a system that automatically detects these attacks. Using large datasets with hundreds of billions of URLs, deSEO successfully identifies multiple malicious SEO campaigns. In particular, applying the URL signatures derived from deSEO, we find 36% of sampled searches to Google and Bing contain at least one malicious link in the top results at the time of our experiment.",
"Web-based surreptitious malware infections (i.e., drive-by downloads) have become the primary method used to deliver malicious software onto computers across the Internet. To address this threat, we present a browser-independent operating system kernel extension designed to eliminate drive-by malware installations. The BLADE (Block All Drive-by download Exploits) system asserts that all executable files delivered through browser downloads must result from explicit user consent and transparently redirects every unconsented browser download into a nonexecutable secure zone of disk. BLADE thwarts the ability of browser-based exploits to surreptitiously download and execute malicious content by remapping to the file system only those browser downloads to which a programmatically inferred user-consent is correlated. BLADE provides its protection without explicit knowledge of any exploits and is thus resilient against code obfuscation and zero-day threats that directly contribute to the pervasiveness of today's drive-by malware. We present the design of our BLADE prototype implementation for the Microsoft Windows platform, and report results from an extensive empirical evaluation of its effectiveness on popular browsers. Our evaluation includes multiple versions of IE and Firefox, against 1,934 active malicious URLs, representing a broad spectrum of web-based exploits now plaguing the Internet. BLADE successfully blocked all drive-by malware install attempts with zero false positives and a 3% worst-case performance cost.",
"Internet attacks that use malicious web sites to install malware programs by exploiting browser vulnerabilities are a serious emerging threat. In response, we have developed an automated web patrol system to automatically identify and monitor these malicious sites. We describe the design and implementation of the Strider HoneyMonkey Exploit Detection System, which consists of a pipeline of “monkey programs” running possibly vulnerable browsers on virtual machines with different patch levels and patrolling the Web to seek out and classify web sites that exploit browser vulnerabilities. Within the first month of utilizing this system, we identified 752 unique URLs hosted on 288 web sites that could successfully exploit unpatched Windows XP machines. The system automatically constructed topology graphs based on traffic redirection to capture the relationship between the exploit sites. This allowed us to identify several major players who are responsible for a large number of exploit pages. By monitoring these 752 exploit-URLs on a daily basis, we discovered a malicious web site that was performing zero-day exploits of the unpatched javaprxy.dll vulnerability and was operating behind 25 exploit-URLs. It was confirmed as the first “in-the-wild” zero-day exploit of this vulnerability that was reported to the Microsoft Security Response Center. Additionally, by scanning the most popular one million URLs as classified by a search engine, we found over seven hundred exploit-URLs, many of which serve popular content related to celebrities, song lyrics, wallpapers, video game cheats, and wrestling.",
"JavaScript is a browser scripting language that allows developers to create sophisticated client-side interfaces for web applications. However, JavaScript code is also used to carry out attacks against the user's browser and its extensions. These attacks usually result in the download of additional malware that takes complete control of the victim's platform, and are, therefore, called \"drive-by downloads.\" Unfortunately, the dynamic nature of the JavaScript language and its tight integration with the browser make it difficult to detect and block malicious JavaScript code. This paper presents a novel approach to the detection and analysis of malicious JavaScript code. Our approach combines anomaly detection with emulation to automatically identify malicious JavaScript code and to support its analysis. We developed a system that uses a number of features and machine-learning techniques to establish the characteristics of normal JavaScript code. Then, during detection, the system is able to identify anomalous JavaScript code by emulating its behavior and comparing it to the established profiles. In addition to identifying malicious code, the system is able to support the analysis of obfuscated code and to generate detection signatures for signature-based systems. The system has been made publicly available and has been used by thousands of analysts.",
"",
"JavaScript malware-based attacks account for a large fraction of successful mass-scale exploitation happening today. Attackers like JavaScript-based attacks because they can be mounted against an unsuspecting user visiting a seemingly innocent web page. While several techniques for addressing these types of exploits have been proposed, in-browser adoption has been slow, in part because of the performance overhead these methods incur. In this paper, we propose ZOZZLE, a low-overhead solution for detecting and preventing JavaScript malware that is fast enough to be deployed in the browser. Our approach uses Bayesian classification of hierarchical features of the JavaScript abstract syntax tree to identify syntax elements that are highly predictive of malware. Our experimental evaluation shows that ZOZZLE is able to detect JavaScript malware through mostly static code analysis effectively. ZOZZLE has an extremely low false positive rate of 0.0003%, which is less than one in a quarter million. Despite this high accuracy, the ZOZZLE classifier is fast, with a throughput of over one megabyte of JavaScript code per second."
]
}
|
1202.3987
|
2951313964
|
Malware spread among websites and between websites and clients is an increasing problem. Search engines play an important role in directing users to websites and are a natural control point for intervening, using mechanisms such as blacklisting. The paper presents a simple Markov model of malware spread through large populations of websites and studies the effect of two interventions that might be deployed by a search provider: blacklisting infected web pages by removing them from search results entirely and a generalization of blacklisting, called depreferencing, in which a website's ranking is decreased by a fixed percentage each time period the site remains infected. We analyze and study the trade-offs between infection exposure and traffic loss due to false positives (the cost to a website that is incorrectly blacklisted) for different interventions. As expected, we find that interventions are most effective when websites are slow to remove infections. Surprisingly, we also find that low infection or recovery rates can increase traffic loss due to false positives. Our analysis also shows that heavy-tailed distributions of website popularity, as documented in many studies, lead to high sample variance of all measured outcomes. These results imply that it will be difficult to determine empirically whether certain website interventions are effective, and suggest that theoretical models such as the one described in this paper have an important role to play in improving web security.
|
Several studies have focused on alternative intervention strategies, which could potentially be generalized using our depreferencing method. For example, @cite_19 modeled responses available to ISPs. Other researchers have identified suitable intervention strategies based on empirical research, which might also be amenable to depreferencing. For example, @cite_15 found that criminals relied on just three payment processors to collect money from victims, which led the authors to recommend targeting the payment processors as a low-cost intervention. Similarly, @cite_26 empirically measured the effectiveness of pressuring registrars to suspend spam-advertising domain names. In a related intervention, Google has successfully pushed ad-filled sites down the results by changes to its search-ranking algorithm @cite_25 , suggesting that a similar effort to depreference malware-infected sites is technically feasible.
|
{
"cite_N": [
"@cite_19",
"@cite_15",
"@cite_26",
"@cite_25"
],
"mid": [
"2952141813",
"2121252355",
"",
"1984936212"
],
"abstract": [
"An emerging consensus among policy makers is that interventions undertaken by Internet Service Providers are the best way to counter the rising incidence of malware. However, assessing the suitability of countermeasures at this scale is hard. In this paper, we use an agent-based model, called ASIM, to investigate the impact of policy interventions at the Autonomous System level of the Internet. For instance, we find that coordinated intervention by the 0.2% biggest ASes is more effective than uncoordinated efforts adopted by 30% of all ASes. Furthermore, countermeasures that block malicious transit traffic appear more effective than ones that block outgoing traffic. The model allows us to quantify and compare positive externalities created by different countermeasures. Our results give an initial indication of the types and levels of intervention that are most cost-effective at large scale.",
"Spam-based advertising is a business. While it has engendered both widespread antipathy and a multi-billion dollar anti-spam industry, it continues to exist because it fuels a profitable enterprise. We lack, however, a solid understanding of this enterprise's full structure, and thus most anti-spam interventions focus on only one facet of the overall spam value chain (e.g., spam filtering, URL blacklisting, site takedown). In this paper we present a holistic analysis that quantifies the full set of resources employed to monetize spam email -- including naming, hosting, payment and fulfillment -- using extensive measurements of three months of diverse spam data, broad crawling of naming and hosting infrastructures, and over 100 purchases from spam-advertised sites. We relate these resources to the organizations who administer them and then use this data to characterize the relative prospects for defensive interventions at each link in the spam value chain. In particular, we provide the first strong evidence of payment bottlenecks in the spam value chain: 95% of spam-advertised pharmaceutical, replica and software products are monetized using merchant services from just a handful of banks.",
"",
"Online service providers are engaged in constant conflict with miscreants who try to siphon a portion of legitimate traffic to make illicit profits. We study the abuse of \"trending\" search terms, in which miscreants place links to malware-distributing or ad-filled web sites in web search and Twitter results, by collecting and analyzing measurements over nine months from multiple sources. We devise heuristics to identify ad-filled sites, report on the prevalence of malware and ad-filled sites in trending-term search results, and measure the success in blocking such content. We uncover collusion across offending domains using network analysis, and use regression analysis to conclude that both malware and ad-filled sites thrive on less popular, and less profitable trending terms. We build an economic model informed by our measurements and conclude that ad-filled sites and malware distribution may be economic substitutes. Finally, because our measurement interval spans February 2011, when Google announced changes to its ranking algorithm to root out low-quality sites, we can assess the impact of search-engine intervention on the profits miscreants can achieve."
]
}
|
1202.3683
|
2199481630
|
Infrastructure-as-a-Service (IaaS) providers need to offer richer services to be competitive while optimizing their resource usage to keep costs down. Richer service offerings include new resource request models involving bandwidth guarantees between virtual machines (VMs). Thus we consider the following problem: given a VM request graph (where nodes are VMs and edges represent virtual network connectivity between the VMs) and a real data center topology, find an allocation of VMs to servers that satisfies the bandwidth guarantees for every virtual network edge---which maps to a path in the physical network---and minimizes congestion of the network. Previous work has shown that for arbitrary networks and requests, finding the optimal embedding satisfying bandwidth requests is @math -hard. However, in most data center architectures, the routing protocols employed are based on a spanning tree of the physical network. In this paper, we prove that the problem remains @math -hard even when the physical network topology is restricted to be a tree, and the request graph topology is also restricted. We also present a dynamic programming algorithm for computing the optimal embedding in a tree network which runs in time @math , where @math is the number of nodes in the physical topology and @math is the size of the request graph, which is well suited for practical requests which have small @math . Such requests form a large class of web-service and enterprise workloads. Also, if we restrict the requests topology to a clique (all VMs connected to a virtual switch with uniform bandwidth requirements), we show that the dynamic programming algorithm can be modified to output the minimum congestion embedding in time @math .
|
Previous work has shown that the problem of embedding virtual request graphs in arbitrary physical networks is @math -hard @cite_22 @cite_8 . A number of heuristic approaches have been proposed including mapping VMs to nodes in the network greedily and mapping the flows between VMs to paths in the network via shortest paths and multi-commodity flow algorithms @cite_21 @cite_23 . However these approaches do not offer provable guarantees and may lead to congested networks in some circumstances. The authors of @cite_22 assume network support for path-splitting @cite_15 in order to use a multi-commodity flow based approach for mapping VMs and flows between them to the physical network, but this approach is not scalable beyond networks containing hundreds of servers @cite_8 .
|
{
"cite_N": [
"@cite_22",
"@cite_8",
"@cite_21",
"@cite_23",
"@cite_15"
],
"mid": [
"2161965229",
"2097882016",
"2151268583",
"",
"2114298221"
],
"abstract": [
"Recently network virtualization has been proposed as a promising way to overcome the current ossification of the Internet by allowing multiple heterogeneous virtual networks (VNs) to coexist on a shared infrastructure. A major challenge in this respect is the VN embedding problem that deals with efficient mapping of virtual nodes and virtual links onto the substrate network resources. Since this problem is known to be NP-hard, previous research focused on designing heuristic-based algorithms which had clear separation between the node mapping and the link mapping phases. This paper proposes VN embedding algorithms with better coordination between the two phases. We formulate the VN embedding problem as a mixed integer program through substrate network augmentation. We then relax the integer constraints to obtain a linear program, and devise two VN embedding algorithms D-ViNE and R-ViNE using deterministic and randomized rounding techniques, respectively. Simulation experiments show that the proposed algorithms increase the acceptance ratio and the revenue while decreasing the cost incurred by the substrate network in the long run.",
"In this paper, we propose virtual data center (VDC) as the unit of resource allocation for multiple tenants in the cloud. VDCs are more desirable than physical data centers because the resources allocated to VDCs can be rapidly adjusted as tenants' needs change. To enable the VDC abstraction, we design a data center network virtualization architecture called SecondNet. SecondNet achieves scalability by distributing all the virtual-to-physical mapping, routing, and bandwidth reservation state in server hypervisors. Its port-switching based source routing (PSSR) further makes SecondNet applicable to arbitrary network topologies using commodity servers and switches. SecondNet introduces a centralized VDC allocation algorithm for bandwidth guaranteed virtual to physical mapping. Simulations demonstrate that our VDC allocation achieves high network utilization and low time complexity. Our implementation and experiments show that we can build SecondNet on top of various network topologies, and SecondNet provides bandwidth guarantee and elasticity, as designed.",
"The routing infrastructure of the Internet has become resistant to fundamental changes and the use of overlay networks has been proposed to provide additional flexibility and control. One of the most prominent configurable components of an overlay network is its topology, which can be dynamically reconfigured to accommodate communication requirements that vary over time. In this paper, we study the problem of determining dynamic topology reconfiguration for service overlay networks with dynamic communication requirement, and the ideal goal is to find the optimal reconfiguration policies that can minimize the potential overall cost of using an overlay. We start by observing the properties of the optimal reconfiguration policies through studies on small systems and find structures in the optimal reconfiguration policies. Based on these observations, we propose heuristic methods for constructing different flavors of reconfiguration policies, i.e., never-change policy, always-change policy and cluster-based policies, to mimic and approximate the optimal ones. Our experiments show that our policy construction methods are applicable to large systems and generate policies with good performance. Our work does not only provide solutions to practical overlay topology design problems, but also provides theoretical evidence for the advantage of overlay network due to its configurability.",
"",
"Network virtualization is a powerful way to run multiple architectures or experiments simultaneously on a shared infrastructure. However, making efficient use of the underlying resources requires effective techniques for virtual network embedding--mapping each virtual network to specific nodes and links in the substrate network. Since the general embedding problem is computationally intractable, past research restricted the problem space to allow efficient solutions, or focused on designing heuristic algorithms. In this paper, we advocate a different approach: rethinking the design of the substrate network to enable simpler embedding algorithms and more efficient use of resources, without restricting the problem space. In particular, we simplify virtual link embedding by: i) allowing the substrate network to split a virtual link over multiple substrate paths and ii) employing path migration to periodically re-optimize the utilization of the substrate network. We also explore node-mapping algorithms that are customized to common classes of virtual-network topologies. Our simulation experiments show that path splitting, path migration,and customized embedding algorithms enable a substrate network to satisfy a much larger mix of virtual networks"
]
}
|
1202.3683
|
2199481630
|
Infrastructure-as-a-Service (IaaS) providers need to offer richer services to be competitive while optimizing their resource usage to keep costs down. Richer service offerings include new resource request models involving bandwidth guarantees between virtual machines (VMs). Thus we consider the following problem: given a VM request graph (where nodes are VMs and edges represent virtual network connectivity between the VMs) and a real data center topology, find an allocation of VMs to servers that satisfies the bandwidth guarantees for every virtual network edge---which maps to a path in the physical network---and minimizes congestion of the network. Previous work has shown that for arbitrary networks and requests, finding the optimal embedding satisfying bandwidth requests is @math -hard. However, in most data center architectures, the routing protocols employed are based on a spanning tree of the physical network. In this paper, we prove that the problem remains @math -hard even when the physical network topology is restricted to be a tree, and the request graph topology is also restricted. We also present a dynamic programming algorithm for computing the optimal embedding in a tree network which runs in time @math , where @math is the number of nodes in the physical topology and @math is the size of the request graph, which is well suited for practical requests which have small @math . Such requests form a large class of web-service and enterprise workloads. Also, if we restrict the requests topology to a clique (all VMs connected to a virtual switch with uniform bandwidth requirements), we show that the dynamic programming algorithm can be modified to output the minimum congestion embedding in time @math .
|
Guo et al. @cite_8 proposed a new architectural framework, SecondNet, for embedding virtualization requests with bandwidth guarantees. This framework considers requests with bandwidth guarantees @math between every pair of VMs @math . It provides rigorous application performance guarantees and hence is suitable for enterprise workloads, but the paper also establishes the hardness of finding such embeddings in arbitrary networks. Our results employ the SecondNet framework but restrict attention to tree networks.
|
{
"cite_N": [
"@cite_8"
],
"mid": [
"2097882016"
],
"abstract": [
"In this paper, we propose virtual data center (VDC) as the unit of resource allocation for multiple tenants in the cloud. VDCs are more desirable than physical data centers because the resources allocated to VDCs can be rapidly adjusted as tenants' needs change. To enable the VDC abstraction, we design a data center network virtualization architecture called SecondNet. SecondNet achieves scalability by distributing all the virtual-to-physical mapping, routing, and bandwidth reservation state in server hypervisors. Its port-switching based source routing (PSSR) further makes SecondNet applicable to arbitrary network topologies using commodity servers and switches. SecondNet introduces a centralized VDC allocation algorithm for bandwidth guaranteed virtual to physical mapping. Simulations demonstrate that our VDC allocation achieves high network utilization and low time complexity. Our implementation and experiments show that we can build SecondNet on top of various network topologies, and SecondNet provides bandwidth guarantee and elasticity, as designed."
]
}
|
1202.3993
|
1484936125
|
There are few studies that look closely at how the topology of the Internet evolves over time; most focus on snapshots taken at a particular point in time. In this paper, we investigate the evolution of the topology of the Autonomous Systems graph of the Internet, examining how eight commonly-used topological measures change from January 2002 to January 2010. We find that the distributions of most of the measures remain unchanged, except for average path length and clustering coefficient. The average path length has slowly and steadily increased since 2005 and the average clustering coefficient has steadily declined. We hypothesize that these changes are due to changes in peering policies as the Internet evolves. We also investigate a surprising feature, namely that the maximum degree has changed little, an aspect that cannot be captured without modeling link deletion. Our results suggest that evaluating models of the Internet graph by comparing steady-state generated topologies to snapshots of the real data is reasonable for many measures. However, accurately matching time-variant properties is more difficult, as we demonstrate by evaluating ten well-known models against the 2010 data.
|
There are few studies that look at changes in topological characteristics beyond the number of nodes and edges. Most of those that do focus on inter-AS relationships, for example, @cite_38 study the changes in customer-provider relationships and find that the number of providers is increasing over time. Another approach was taken by @cite_16 , in which they investigate the changing relationship over time between stub ASes and transit providers. They find that the net growth of rewirings for transit providers levels off at the end of 2005, around the same time our results show subtle changes in the Internet. @cite_25 look more closely at the evolution of peering relationships, and find that over time large content providers are relying less on Tier-1 ISPs, and more on peering with lower tiers. This finding is supported by @cite_20 , who report a rapid increase in the traffic flow over peer links over time, resulting in a less hierarchical Internet topology. These observations could potentially explain some of our results, as we discuss in sec:discussion .
|
{
"cite_N": [
"@cite_38",
"@cite_16",
"@cite_25",
"@cite_20"
],
"mid": [
"2168896238",
"2164066487",
"2155353872",
""
],
"abstract": [
"Internet connectivity at the AS level, defined in terms of pairwise logical peering relationships, is constantly evolving. This evolution is largely a response to economic, political, and technological changes that impact the way ASs conduct their business. We present a new framework for modeling this evolutionary process by identifying a set of criteria that ASs consider either in establishing a new peering relationship or in reassessing an existing relationship. The proposed framework is intended to capture key elements in the decision processes underlying the formation of these relationships. We present two decision processes that are executed by an AS, depending on its role in a given peering decision, as a customer or a peer of another AS. When acting as a peer, a key feature of the AS’s corresponding decision model is its reliance on realistic inter-AS traffic demands. To reflect the enormous heterogeneity among customer or peer ASs, our decision models are flexible enough to accommodate a wide range of AS-specific objectives. We demonstrate the potential of this new framework by considering different decision models in various realistic “what if” experiment scenarios. We implement these decision models to generate and study the evolution of the resulting AS graphs over time, and compare them against observed historical evolutionary features of the Internet at the AS level.",
"Characterizing the evolution of Internet topology is important to our understanding of the Internet architecture and its interplay with technical, economic and social forces. A major challenge in obtaining empirical data on topology evolution is to identify real topology changes from the observed topology changes, since the latter can be due to either topology changes or transient routing dynamics. In this paper, we formulate the topology liveness problem and propose a solution based on the analysis of BGP data. We find that the impact of transient routing dynamics on topology observation decreases exponentially over time, and that the real topology dynamics consist of a constant-rate birth process and a constant-rate death process. Our model enables us to infer real topology changes from observation data with a given confidence level. We demonstrate the usefulness of the model by applying it to three applications: providing more accurate views of the topology, evaluating theoretical evolution models, and empirically characterizing the trends of topology evolution. We find that customer networks and provider networks have distinct evolution trends, which can provide an important input to the design of future Internet routing architecture.",
"We show that the Internet topology at the autonomous system (AS) level has a rich-club phenomenon. The rich nodes, which are a small number of nodes with large numbers of links, are very well connected to each other. The rich-club is a core tier that we measured using the rich-club connectivity and the node-node link distribution. We obtained this core tier without any heuristic assumption between the ASs. The rich-club phenomenon is a simple qualitative way to differentiate between power law topologies and provides a criterion for new network models. To show this, we compared the measured rich-club of the AS graph with networks obtained using the Baraba spl acute si-Albert (BA) scale-free network model, the Fitness BA model and the Inet-3.0 model.",
""
]
}
|
1202.3993
|
1484936125
|
There are few studies that look closely at how the topology of the Internet evolves over time; most focus on snapshots taken at a particular point in time. In this paper, we investigate the evolution of the topology of the Autonomous Systems graph of the Internet, examining how eight commonly-used topological measures change from January 2002 to January 2010. We find that the distributions of most of the measures remain unchanged, except for average path length and clustering coefficient. The average path length has slowly and steadily increased since 2005 and the average clustering coefficient has steadily declined. We hypothesize that these changes are due to changes in peering policies as the Internet evolves. We also investigate a surprising feature, namely that the maximum degree has changed little, an aspect that cannot be captured without modeling link deletion. Our results suggest that evaluating models of the Internet graph by comparing steady-state generated topologies to snapshots of the real data is reasonable for many measures. However, accurately matching time-variant properties is more difficult, as we demonstrate by evaluating ten well-known models against the 2010 data.
|
In addition to studying business relationships, Dhamdhere et al. @cite_11 reported on changes in average degree and average path length over time. Their results on path length agree with ours, although their study included only data up to 2007, so the trends are less clear. Because the degree distribution does not change, it is likely that the shift they see in average degree is a result of a steadily increasing sample size. Another study @cite_26 used spectral analysis to investigate clustering on the AS graph, and to study coreness and changing path diversity. This analysis, however, covers short time spans (at most two and a half years), and only considers data before 2004.
|
{
"cite_N": [
"@cite_26",
"@cite_11"
],
"mid": [
"2573526173",
"2144089718"
],
"abstract": [
"In this paper we investigate to what extent the information provided by routing tables about the graph of the Autonomous Systems (ASes) can be used to understand dynamic phenomena occurring in the network. First, we classify the time scales at which such an analysis can be performed and, consequently, the kinds of phenomena that could be anticipated. Second, we improve cutting-edge technologies used to analyze the structure of the network, most notably spectral methods for graph clustering, in order to be able to analyze a whole sequence of consecutive snapshots that capture the temporal evolution of the network. Finally, we use such tools to analyze the data collected by the Oregon RouteViews project [20] during the last few years. We confirm stable properties of the AS graph, find major trends and notice that events occurring on a smaller time-frame, like worm-attacks, misconfigurations, outages, DDoS attacks, etc. seem to have a very diverse degree of impact on the AS graph structure, which suggests that these techniques could be used to distinguish some of them.",
"Our goal is to understand the evolution of the Autonomous System (AS) ecosystem over the last decade. Instead of focusing on abstract topological properties, we classify ASes into a number of \"species\" depending on their function and business type. Further, we consider the semantics of inter-AS links, in terms of customer-provider versus peering relations. We find that the available historic datasets from RouteViews and RIPE are not sufficient to infer the evolution of peering links, and so we restrict our focus to customer-provider links. Our findings highlight some important trends in the evolution of the Internet over the last decade, and hint at what the Internet is heading towards. After an exponential increase phase until 2001, the Internet now grows linearly in terms of both ASes and inter-AS links. The growth is mostly due to enterprise networks and content access providers at the periphery of the Internet. The average path length remains almost constant mostly due to the increasing multihoming degree of transit and content access providers. In recent years, enterprise networks prefer to connect to small transit providers, while content access providers connect equally to both large and small transit providers. The AS species differ significantly from each other with respect to their rewiring activity; content access providers are the most active. A few large transit providers act as \"attractors\" or \"repellers\" of customers. For many providers, strong attractiveness precedes strong repulsiveness by 3-9 months. Finally, in terms of regional growth, we find that the AS ecosystem is now larger and more dynamic in Europe than in North America."
]
}
|
1202.3993
|
1484936125
|
There are few studies that look closely at how the topology of the Internet evolves over time; most focus on snapshots taken at a particular point in time. In this paper, we investigate the evolution of the topology of the Autonomous Systems graph of the Internet, examining how eight commonly-used topological measures change from January 2002 to January 2010. We find that the distributions of most of the measures remain unchanged, except for average path length and clustering coefficient. The average path length has slowly and steadily increased since 2005 and the average clustering coefficient has steadily declined. We hypothesize that these changes are due to changes in peering policies as the Internet evolves. We also investigate a surprising feature, namely that the maximum degree has changed little, an aspect that cannot be captured without modeling link deletion. Our results suggest that evaluating models of the Internet graph by comparing steady-state generated topologies to snapshots of the real data is reasonable for many measures. However, accurately matching time-variant properties is more difficult, as we demonstrate by evaluating ten well-known models against the 2010 data.
|
The work of @cite_4 is perhaps closest to ours. They study changes in several topological measures over the time period from 2001 to 2006. Because of this time period, their results do not capture the trends we report post-2005. However, the changes they document agree with what we observed in the earlier period: they find that the assortativity and k-cores are stable over time and that, from 2004/2005 onwards, the k-max value changes little. Further, they find the average clustering coefficient starts declining around 2005, and the average path length starts increasing gradually.
|
{
"cite_N": [
"@cite_4"
],
"mid": [
"2101888944"
],
"abstract": [
"In this paper, we empirically study the evolution of large scale Internet topology at the autonomous system (AS) level. The network size grows in an exponential form, obeying the famous Moore's law. We theoretically predict that the size of the AS-level Internet will double every 5.32 years. We apply the k-core decomposition method on the real Internet, and find that the size of a k-core with larger k is nearly stable over time. In addition, the maximal coreness is very stable after 2003. In contrast to the predictions of most previous models, the maximal degree of the Internet is also relatively stable versus time. We use the edge-exchange operation to obtain the randomized networks with the same degree sequence. A systematical comparison is drawn, indicating that the real Internet is more loosely connected, and both the full Internet and the nucleus are more disassortative than their randomized versions."
]
}
|
1202.3504
|
1879918240
|
In this paper, we demonstrate the possibility of predicting people’s hometowns by using their geotagged photos posted on Flickr website. We employ Kruskal’s algorithm to cluster photos taken by a user and predict the user’s hometown. Our results prove that using social profiles of photographers allows researchers to predict the locations of their taken photos with higher accuracies. This in return can improve the previous methods which were purely based on visual features of photos [1].
|
have already studied Flickr photos from an image processing point of view. They have proposed an image recognition algorithm which tries to predict the location of a photo by looking at the photo's visual features @cite_3 . have enhanced Hayes' work by not only considering visual features of photos but also by taking into account the truncated Levy flight models of human mobility @cite_5 . Kleinberg @cite_0 and Nowell @cite_4 have independently shown that the probability of friendship for a given pair of users such as @math drops as the geographical distance between them (e.g. @math ) increases. Motivated by their work, have proposed an algorithm for predicting a person's hometown by only having information about their friends' hometowns @cite_1 .
|
{
"cite_N": [
"@cite_4",
"@cite_1",
"@cite_3",
"@cite_0",
"@cite_5"
],
"mid": [
"2162450625",
"2168346693",
"2103163130",
"2128678576",
"2537480791"
],
"abstract": [
"We live in a “small world,” where two arbitrary people are likely connected by a short chain of intermediate friends. With scant information about a target individual, people can successively forward a message along such a chain. Experimental studies have verified this property in real social networks, and theoretical models have been advanced to explain it. However, existing theoretical models have not been shown to capture behavior in real-world social networks. Here, we introduce a richer model relating geography and social-network friendship, in which the probability of befriending a particular person is inversely proportional to the number of closer people. In a large social network, we show that one-third of the friendships are independent of geography and the remainder exhibit the proposed relationship. Further, we prove analytically that short chains can be discovered in every network exhibiting the relationship.",
"Geography and social relationships are inextricably intertwined; the people we interact with on a daily basis almost always live near us. As people spend more time online, data regarding these two dimensions -- geography and social relationships -- are becoming increasingly precise, allowing us to build reliable models to describe their interaction. These models have important implications in the design of location-based services, security intrusion detection, and social media supporting local communities. Using user-supplied address data and the network of associations between members of the Facebook social network, we can directly observe and measure the relationship between geography and friendship. Using these measurements, we introduce an algorithm that predicts the location of an individual from a sparse set of located users with performance that exceeds IP-based geolocation. This algorithm is efficient and scalable, and could be run on a network containing hundreds of millions of users.",
"Estimating geographic information from an image is an excellent, difficult high-level computer vision problem whose time has come. The emergence of vast amounts of geographically-calibrated image data is a great reason for computer vision to start looking globally - on the scale of the entire planet! In this paper, we propose a simple algorithm for estimating a distribution over geographic locations from a single image using a purely data-driven scene matching approach. For this task, we leverage a dataset of over 6 million GPS-tagged images from the Internet. We represent the estimated image location as a probability distribution over the Earthpsilas surface. We quantitatively evaluate our approach in several geolocation tasks and demonstrate encouraging performance (up to 30 times better than chance). We show that geolocation estimates can provide the basis for numerous other image understanding tasks such as population density estimation, land cover estimation or urban rural classification.",
"Long a matter of folklore, the small-world phenomenon'''' --the principle that we are all linked by short chains of acquaintances --was inaugurated as an area of experimental study in the social sciences through the pioneering work of Stanley Milgram in the 1960''s. This work was among the first to make the phenomenon quantitative, allowing people to speak of the six degrees of separation'''' between any two people in the United States. Since then, a number of network models have been proposed as frameworks in which to study the problem analytically. One of the most refined of these models was formulated in recent work of Watts and Strogatz; their framework provided compelling evidence that the small-world phenomenon is pervasive in a range of networks arising in nature and technology, and a fundamental ingredient in the evolution of the World Wide Web. But existing models are insufficient to explain the striking algorithmic component of Milgram''s original findings: that individuals using local information are collectively very effective at actually constructing short paths between two points in a social network. Although recently proposed network models are rich in short paths, we prove that no decentralized algorithm, operating with local information only, can construct short paths in these networks with non-negligible probability. We then define an infinite family of network models that naturally generalizes the Watts-Strogatz model, and show that for one of these models, there is a decentralized algorithm capable of finding short paths with high probability. More generally, we provide a strong characterization of this family of network models, showing that there is in fact a unique model within the family for which decentralized algorithms are effective.",
"This paper presents a method for estimating geographic location for sequences of time-stamped photographs. A prior distribution over travel describes the likelihood of traveling from one location to another during a given time interval. This distribution is based on a training database of 6 million photographs from Flickr.com. An image likelihood for each location is defined by matching a test photograph against the training database. Inferring location for images in a test sequence is then performed using the Forward-Backward algorithm, and the model can be adapted to individual users as well. Using temporal constraints allows our method to geolocate images without recognizable landmarks, and images with no geographic cues whatsoever. This method achieves a substantial performance improvement over the best-available baseline, and geolocates some users' images with near-perfect accuracy."
]
}
|
1202.2287
|
2150089998
|
Is there any Cartesian-closed category of continuous domains that would be closed under Jones and Plotkin's probabilistic powerdomain construction? This is a major open problem in the area of denotational semantics of probabilistic higher-order languages. We relax the question, and look for quasi-continuous dcpos instead. We introduce a natural class of such quasi-continuous dcpos, the omega-QRB-domains. We show that they form a category omega-QRB with pleasing properties: omega-QRB is closed under the probabilistic powerdomain functor, under finite products, under taking bilimits of expanding sequences, under retracts, and even under so-called quasi-retracts. But... omega-QRB is not Cartesian closed. We conclude by showing that the QRB domains are just one half of an FS-domain, merely lacking control.
|
Instead of solving the Jung-Tix problem, one may try to circumvent it. One of the most successful such attempts led to the discovery of qcb-spaces @cite_16 and to compactly generated countably-based monotone convergence spaces @cite_15 , as Cartesian-closed categories of topological spaces where a reasonable amount of semantics can be done. This provides exciting new perspectives. The category of qcb-spaces accommodates two probabilistic powerdomains @cite_17 . The observationally induced one is essentially @math (with the weak topology), but differs from the one obtained as a free algebra.
|
{
"cite_N": [
"@cite_15",
"@cite_16",
"@cite_17"
],
"mid": [
"2149175915",
"2077809881",
"1529108038"
],
"abstract": [
"We propose compactly generated monotone convergence spaces as a well-behaved topological generalisation of directed-complete partial orders (dcpos). The category of such spaces enjoys the usual properties of categories of ‘predomains’ in denotational semantics. Moreover, such properties are retained if one restricts to spaces with a countable pseudobase in the sense of E. Michael, a fact that permits connections to be made with computability theory, realizability semantics and recent work on the closure properties of topological quotients of countably based spaces (qcb spaces). We compare the standard domain-theoretic constructions of products and function spaces on dcpos with their compactly generated counterparts, showing that these agree in important cases, though not in general.",
"We motivate and define a category of topological domains, whose objects are certain topological spaces, generalising the usual @w-continuous dcppos of domain theory. Our category supports all the standard constructions of domain theory, including the solution of recursive domain equations. It also supports the construction of free algebras for (in)equational theories, can be used as the basis for a theory of computability, and provides a model of parametric polymorphism.",
"We present two probabilistic powerdomain constructions in topological domain theory. The first is given by a free ”convex space” construction, fitting into the theory of modelling computational effects via free algebras for equational theories, as proposed by Plotkin and Power. The second is given by an observationally induced approach, following Schroder and Simpson. We show the two constructions coincide when restricted to ω-continuous dcppos, in which case they yield the space of (continuous) probability valuations equipped with the Scott topology. Thus either construction generalises the classical domain-theoretic probabilistic powerdomain. On more general spaces, the constructions differ, and the second seems preferable. Indeed, for countably-based spaces, we characterise the observationally induced powerdomain as the space of probability valuations with weak topology. However, we show that such a characterisation does not extend to non countablybased spaces."
]
}
|
1202.2571
|
1532235227
|
Mobile devices integrating wireless short-range communication technologies make possible new applications for spontaneous communication, interaction and collaboration. An interesting approach is to use collaboration to facilitate communication when mobile devices are not able to establish direct communication paths. Opportunistic networks, formed when mobile devices communicate with each other while users are in close proximity, can help applications still exchange data in such cases. In opportunistic networks routes are built dynamically, as each mobile device acts according to the store-carry-and-forward paradigm. Thus, contacts between mobile devices are seen as opportunities to move data towards the destination. In such networks data dissemination is done using forwarding and is usually based on a publish/subscribe model. Opportunistic data dissemination also raises questions concerning user privacy and incentives. Such problems are addressed differently by various opportunistic data dissemination techniques. In this work we analyze existing relevant work in the area of data dissemination in opportunistic networks. We present the categories of a proposed taxonomy that capture the capabilities of data dissemination techniques used in such networks. Moreover, we survey relevant data dissemination techniques and analyze them using the proposed taxonomy. In this work we analyze different collaboration-based communication solutions, emphasizing their capabilities to opportunistically disseminate data. We present the advantages and disadvantages of the analyzed solutions. Furthermore, we propose the categories of a taxonomy that captures the capabilities of data dissemination techniques used in opportunistic networks. Using the categories of the proposed taxonomy, we also present a critical analysis of four opportunistic data dissemination solutions.
To our knowledge, a classification of data dissemination techniques has never been previously proposed.
|
The authors of @cite_9 previously proposed a taxonomy for analyzing forwarding techniques. It separates them between algorithms without infrastructure (designed for completely flat ad-hoc networks) and algorithms with infrastructure (in which the ad-hoc networks exploit some form of infrastructure to opportunistically forward messages). Algorithms without infrastructure can be further divided into algorithms based on dissemination (like Epidemic, MV and Network Coding), that are forms of controlled flooding, and algorithms based on context (like CAR and MobySpace), that use knowledge of the context that nodes are operating in to identify the best next hop at each forwarding step. Algorithms that exploit a form of infrastructure can also be divided into fixed infrastructure and mobile infrastructure algorithms. These algorithms have special nodes that are more powerful than the normal nodes. In case of fixed infrastructure algorithms (like Infostations and SWIM), special nodes are located at specific geographical points, whereas special nodes proposed in mobile infrastructure algorithms (like Ferries and DataMULEs) move around in the network randomly or follow predefined paths.
|
{
"cite_N": [
"@cite_9"
],
"mid": [
"2032094375"
],
"abstract": [
"Opportunistic networks are one of the most interesting evolutions of MANETs. In opportunistic networks, mobile nodes are enabled to communicate with each other even if a route connecting them never exists. Furthermore, nodes are not supposed to possess or acquire any knowledge about the network topology, which (instead) is necessary in traditional MANET routing protocols. Routes are built dynamically, while messages are en route between the sender and the destination(s), and any possible node can opportunistically be used as next hop, provided it is likely to bring the message closer to the final destination. These requirements make opportunistic networks a challenging and promising research field. In this article we survey the most interesting case studies related to opportunistic networking and discuss and organize a taxonomy for the main routing and forwarding approaches in this challenging environment. We finally envision further possible scenarios to make opportunistic networks part of the next-generation Internet"
]
}
|
1202.2571
|
1532235227
|
Mobile devices integrating wireless short-range communication technologies make possible new applications for spontaneous communication, interaction and collaboration. An interesting approach is to use collaboration to facilitate communication when mobile devices are not able to establish direct communication paths. Opportunistic networks, formed when mobile devices communicate with each other while users are in close proximity, can help applications still exchange data in such cases. In opportunistic networks routes are built dynamically, as each mobile device acts according to the store-carry-and-forward paradigm. Thus, contacts between mobile devices are seen as opportunities to move data towards the destination. In such networks data dissemination is done using forwarding and is usually based on a publish/subscribe model. Opportunistic data dissemination also raises questions concerning user privacy and incentives. Such problems are addressed differently by various opportunistic data dissemination techniques. In this paper we analyze existing relevant work in the area of data dissemination in opportunistic networks. We present the categories of a proposed taxonomy that capture the capabilities of data dissemination techniques used in such networks. Moreover, we survey relevant data dissemination techniques and analyze them using the proposed taxonomy. In this paper we analyze existing work in the area of data dissemination in opportunistic networks. We analyze different collaboration-based communication solutions, emphasizing their capabilities to opportunistically disseminate data. We present the advantages and disadvantages of the analyzed solutions. Furthermore, we propose the categories of a taxonomy that captures the capabilities of data dissemination techniques used in opportunistic networks. Using the categories of the proposed taxonomy, we also present a critical analysis of four opportunistic data dissemination solutions.
To our knowledge, a classification of data dissemination techniques has never been previously proposed.
|
An alternative taxonomy, presented in @cite_8 , separates the forwarding methods according to their knowledge about context. Accordingly, there are three types of dissemination approaches: context-oblivious, partially context-aware and fully context-aware. The context-oblivious protocols do not exploit any contextual information about the behavior of users. The partially context-aware protocols exploit context information, but assume a specific model for this context. When the environment matches the assumed context, they perform very well, but their operation may not be correct if the environment is different from the assumption. Fully context-aware protocols learn and exploit the context around them and, while they may not be as efficient as partially context-aware protocols, they are much more adaptive. Some of the most popular forwarding algorithms nowadays are Bubble Rap, Propicman and HIBOp.
|
{
"cite_N": [
"@cite_8"
],
"mid": [
"1658235865"
],
"abstract": [
"The opportunistic networking idea stems from the critical review of the research field on Mobile Ad hoc Networks (MANET). After more than ten years of research in the MANET field, this promising technology still has not massively entered the mass market. One of the main reasons of this is nowadays seen in the lack of a practical approach to the design of infrastructure-less multi-hop ad hoc networks [186, 185]. One of the main approaches of conventional MANET research is to design protocols that mask the features of mobile networks via the routing (and transport) layer, so as to expose to higher layers an Internet-like network abstraction. Wireless networks’ peculiarities, such as mobility of users, disconnection of nodes, network partitions, links’ instability, are seen—as in the legacy Internet—as exceptions. This often results in the design of MANET network stacks that are significantly complex and unstable [107]."
]
}
|
1202.2571
|
1532235227
|
Mobile devices integrating wireless short-range communication technologies make possible new applications for spontaneous communication, interaction and collaboration. An interesting approach is to use collaboration to facilitate communication when mobile devices are not able to establish direct communication paths. Opportunistic networks, formed when mobile devices communicate with each other while users are in close proximity, can help applications still exchange data in such cases. In opportunistic networks routes are built dynamically, as each mobile device acts according to the store-carry-and-forward paradigm. Thus, contacts between mobile devices are seen as opportunities to move data towards the destination. In such networks data dissemination is done using forwarding and is usually based on a publish/subscribe model. Opportunistic data dissemination also raises questions concerning user privacy and incentives. Such problems are addressed differently by various opportunistic data dissemination techniques. In this paper we analyze existing relevant work in the area of data dissemination in opportunistic networks. We present the categories of a proposed taxonomy that capture the capabilities of data dissemination techniques used in such networks. Moreover, we survey relevant data dissemination techniques and analyze them using the proposed taxonomy. In this paper we analyze existing work in the area of data dissemination in opportunistic networks. We analyze different collaboration-based communication solutions, emphasizing their capabilities to opportunistically disseminate data. We present the advantages and disadvantages of the analyzed solutions. Furthermore, we propose the categories of a taxonomy that captures the capabilities of data dissemination techniques used in opportunistic networks. Using the categories of the proposed taxonomy, we also present a critical analysis of four opportunistic data dissemination solutions.
To our knowledge, a classification of data dissemination techniques has never been previously proposed.
|
A thorough analysis of opportunistic networking is presented in @cite_2 . The authors present details regarding the architecture of Haggle and give various solutions to forwarding and data dissemination techniques. Also, security is discussed in terms of opportunistic networking, along with applications such as mobile social networking, sharing of user-generated content, pervasive sensing or pervasive healthcare.
|
{
"cite_N": [
"@cite_2"
],
"mid": [
"2158185739"
],
"abstract": [
"Personal computing devices, such as smart-phones and PDAs, are commonplace, bundle several wireless network interfaces, can support compute intensive tasks, and are equipped with powerful means to produce multimedia content. Thus, they provide the resources for what we envision as a human pervasive network: a network formed by user devices, suitable to convey to users rich multimedia content and services according to their interests and needs. Similar to opportunistic networks, where the communication is built on connectivity opportunities, we envisage a network above these resources that joins together features of traditional pervasive networks and opportunistic networks fostering a new computing paradigm: opportunistic computing. In this article we discuss the evolution from opportunistic networking to opportunistic computing; we survey key recent achievements in opportunistic networking, and describe the main concepts and challenges of opportunistic computing. We finally envision further possible scenarios and functionalities to make opportunistic computing a key player in the next-generation Internet."
]
}
|
1202.2571
|
1532235227
|
Mobile devices integrating wireless short-range communication technologies make possible new applications for spontaneous communication, interaction and collaboration. An interesting approach is to use collaboration to facilitate communication when mobile devices are not able to establish direct communication paths. Opportunistic networks, formed when mobile devices communicate with each other while users are in close proximity, can help applications still exchange data in such cases. In opportunistic networks routes are built dynamically, as each mobile device acts according to the store-carry-and-forward paradigm. Thus, contacts between mobile devices are seen as opportunities to move data towards the destination. In such networks data dissemination is done using forwarding and is usually based on a publish/subscribe model. Opportunistic data dissemination also raises questions concerning user privacy and incentives. Such problems are addressed differently by various opportunistic data dissemination techniques. In this paper we analyze existing relevant work in the area of data dissemination in opportunistic networks. We present the categories of a proposed taxonomy that capture the capabilities of data dissemination techniques used in such networks. Moreover, we survey relevant data dissemination techniques and analyze them using the proposed taxonomy. In this paper we analyze existing work in the area of data dissemination in opportunistic networks. We analyze different collaboration-based communication solutions, emphasizing their capabilities to opportunistically disseminate data. We present the advantages and disadvantages of the analyzed solutions. Furthermore, we propose the categories of a taxonomy that captures the capabilities of data dissemination techniques used in opportunistic networks. Using the categories of the proposed taxonomy, we also present a critical analysis of four opportunistic data dissemination solutions.
To our knowledge, a classification of data dissemination techniques has never been previously proposed.
|
Several papers exclusively treat the problem of data dissemination in opportunistic networks. The Epidemic approach is presented in @cite_5 . In @cite_10 , a dissemination technique based on publish/subscribe communication and communities is described, while @cite_11 and @cite_6 propose a wireless ad hoc podcasting system based on opportunistic networks. A multicast distribution method is presented in @cite_3 , while ContentPlace, a system that exploits dynamically learned information about users' social relationships to decide where to place data objects in order to optimize content availability, is presented in @cite_0 . These methods are further analyzed in Section 4. To compare them, we apply the categories of the proposed taxonomy. The resulting analysis highlights their advantages and disadvantages and further differentiates between their capabilities.
|
{
"cite_N": [
"@cite_3",
"@cite_6",
"@cite_0",
"@cite_5",
"@cite_10",
"@cite_11"
],
"mid": [
"2111269948",
"2045295122",
"2112263843",
"1572481965",
"1997405607",
""
],
"abstract": [
"This paper presents a publish subscribe-based multicast distribution infrastructure for DTN-based opportunistic networking environments. The distribution approach is designed to combine an effective distribution of content to interested nodes in the presence of resource constraints, mobility and unstable connectivity. By considering local resource constraints such as limited storage space and limited available bandwidth at opportunistic contacts as well as knowledge about interest for content in the network environment, nodes make local decisions about resource utilization and DTN bundle prioritization. Without further coordination, this approach uses the overall available network resources more effectively compared to other approaches such as epidemic forwarding. We have implemented this approach and have performed a series of measurements in mobile opportunistic networking scenarios under different configurations.",
"Podcasting has become a very popular and successful Internet service in a short time. This success illustrates the interest for participatory broadcasting, in its actual form however, podcasting is only available with fixed infrastructure support to retrieve publicized episodes. We aim at releasing this limitation and present herein our podcasting system architecture together with a prototype implementation based on opportunistic wireless networking that allows us to extend podcasting to ad hoc domains.",
"In this paper we present and evaluate ContentPlace, a data dissemination system for opportunistic networks, i.e., mobile networks in which stable simultaneous multi-hop paths between communication endpoints cannot be provided. We consider a scenario in which users both produce and consume data objects. ContentPlace takes care of moving and replicating data objects in the network such that interested users receive them despite possible long disconnections, partitions, etc. Thanks to ContentPlace, data producers and consumers are completely decoupled, and might be never connected to the network at the same point in time. The key feature of ContentPlace is learning and exploiting information about the social behaviour of the users to drive the data dissemination process. This allows ContentPlace to be more efficient both in terms of data delivery and in terms of resource usage with respect to reference alternative solutions. The performance of ContentPlace is thoroughly investigated both through simulation and analytical models.",
"Mobile ad hoc routing protocols allow nodes with wireless adaptors to communicate with one another without any pre-existing network infrastructure. Existing ad hoc routing protocols, while robust to rapidly changing network topology, assume the presence of a connected path from source to destination. Given power limitations, the advent of short-range wireless networks, and the wide physical conditions over which ad hoc networks must be deployed, in some scenarios it is likely that this assumption is invalid. In this work, we develop techniques to deliver messages in the case where there is never a connected path from source to destination or when a network partition exists at the time a message is originated. To this end, we introduce Epidemic Routing, where random pair-wise exchanges of messages among mobile hosts ensure eventual message delivery. The goals of Epidemic Routing are to: i) maximize message delivery rate, ii) minimize message latency, and iii) minimize the total resources consumed in message delivery. Through an implementation in the Monarch simulator, we show that Epidemic Routing achieves eventual delivery of 100 of messages with reasonable aggregate resource consumption in a number of interesting scenarios.",
"The emergence of Delay Tolerant Networks (DTNs) has culminated in a new generation of wireless networking. We focus on a type of human-to-human communication in DTNs, where human behaviour exhibits the characteristics of networks by forming a community. We show the characteristics of such networks from extensive study of real-world human connectivity traces. We exploit distributed community detection from the trace and propose a Socio-Aware Overlay over detected communities for publish subscribe communication. Centrality nodes have the best visibility to the other nodes in the network. We create an overlay with such centrality nodes from communities. Distributed community detection operates when nodes (i.e. devices) are in contact by gossipping, and subscription propagation is performed along with this operation. We validate our message dissemination algorithms for publish subscribe with connectivity traces.",
""
]
}
|
1202.2888
|
2267148633
|
Maintaining high quality content is one of the foremost objectives of any web-based collaborative service that depends on a large number of users. In such systems, it is nearly impossible for automated scripts to judge semantics as it is to expect all editors to review the content. This catalyzes the need for trust-based mechanisms to ensure quality of an article immediately after an edit. In this paper, we build on previous work and develop a framework based on the ‘web of trust’ concept to calculate satisfaction scores for all users without the need for perusing the article. We derive some bounds for systems based on our mechanism and show that the optimization problem of selecting the best users to review an article is NP-Hard. Extensive simulations validate our model and results, and show that trust-based mechanisms are essential to improve efficiency in any online collaborative editing platform.
|
With respect to quality of collaborative content, the recently proposed WikiTrust @cite_3 uses author reputation and number of edits to measure the trustworthiness of each word in a wiki article, and detect vandalism. Automated evaluations based on global reputation and article semantics are however beyond the scope of this paper and we stick to using reviews from a small subset of the user base to calculate satisfaction scores for the rest. As far as we are aware, this is the first study applying the web of trust model for collaborative work.
|
{
"cite_N": [
"@cite_3"
],
"mid": [
"2129553551"
],
"abstract": [
"The Wikipedia is a collaborative encyclopedia: anyone can contribute to its articles simply by clicking on an \"edit\" button. The open nature of the Wikipedia has been key to its success, but has also created a challenge: how can readers develop an informed opinion on its reliability? We propose a system that computes quantitative values of trust for the text in Wikipedia articles; these trust values provide an indication of text reliability. The system uses as input the revision history of each article, as well as information about the reputation of the contributing authors, as provided by a reputation system. The trust of a word in an article is computed on the basis of the reputation of the original author of the word, as well as the reputation of all authors who edited text near the word. The algorithm computes word trust values that vary smoothly across the text; the trust values can be visualized using varying text-background colors. The algorithm ensures that all changes to an article's text are reflected in the trust values, preventing surreptitious content changes. We have implemented the proposed system, and we have used it to compute and display the trust of the text of thousands of articles of the English Wikipedia. To validate our trust-computation algorithms, we show that text labeled as low-trust has a significantly higher probability of being edited in the future than text labeled as high-trust."
]
}
|
1202.2525
|
2951895902
|
We study the problem of sampling a random signal with sparse support in frequency domain. Shannon famously considered a scheme that instantaneously samples the signal at equispaced times. He proved that the signal can be reconstructed as long as the sampling rate exceeds twice the bandwidth (Nyquist rate). Candès, Romberg and Tao introduced a scheme that acquires instantaneous samples of the signal at random times. They proved that the signal can be uniquely and efficiently reconstructed, provided the sampling rate exceeds the frequency support of the signal, times logarithmic factors. In this paper we consider a probabilistic model for the signal, and a sampling scheme inspired by the idea of spatial coupling in coding theory. Namely, we propose to acquire non-instantaneous samples at random times. Mathematically, this is implemented by acquiring a small random subset of Gabor coefficients. We show empirically that this scheme achieves correct reconstruction as soon as the sampling rate exceeds the frequency support of the signal, thus reaching the information theoretic limit.
|
The sampling scheme developed here is inspired by the idea of spatial coupling, which recently proved successful in coding theory @cite_16 @cite_10 @cite_17 @cite_5 and was introduced to compressed sensing by Kudekar and Pfister @cite_15 . The basic idea, in this context, is to use suitable band diagonal sensing matrices. @cite_0 showed that, using the appropriate message passing reconstruction algorithm and 'spatially-coupled' sensing matrices, a random @math -sparse signal @math can be recovered from @math measurements. This is a surprising result, given that standard compressed sensing methods achieve successful recovery from @math measurements.
|
{
"cite_N": [
"@cite_0",
"@cite_5",
"@cite_15",
"@cite_16",
"@cite_10",
"@cite_17"
],
"mid": [
"2073868986",
"2951478830",
"2952440034",
"1991528082",
"2135479785",
"2172679141"
],
"abstract": [
"Compressed sensing is triggering a major evolution in signal acquisition. It consists in sampling a sparse signal at low rate and later using computational power for its exact reconstruction, so that only the necessary information is measured. Currently used reconstruction techniques are, however, limited to acquisition rates larger than the true density of the signal. We design a new procedure which is able to reconstruct exactly the signal with a number of measurements that approaches the theoretical limit in the limit of large systems. It is based on the joint use of three essential ingredients: a probabilistic approach to signal reconstruction, a message-passing algorithm adapted from belief propagation, and a careful design of the measurement matrix inspired from the theory of crystal nucleation. The performance of this new algorithm is analyzed by statistical physics methods. The obtained improvement is confirmed by numerical studies of several cases.",
"We investigate spatially coupled code ensembles. For transmission over the binary erasure channel, it was recently shown that spatial coupling increases the belief propagation threshold of the ensemble to essentially the maximum a-priori threshold of the underlying component ensemble. This explains why convolutional LDPC ensembles, originally introduced by Felstrom and Zigangirov, perform so well over this channel. We show that the equivalent result holds true for transmission over general binary-input memoryless output-symmetric channels. More precisely, given a desired error probability and a gap to capacity, we can construct a spatially coupled ensemble which fulfills these constraints universally on this class of channels under belief propagation decoding. In fact, most codes in that ensemble have that property. The quantifier universal refers to the single ensemble code which is good for all channels but we assume that the channel is known at the receiver. The key technical result is a proof that under belief propagation decoding spatially coupled ensembles achieve essentially the area threshold of the underlying uncoupled ensemble. We conclude by discussing some interesting open problems.",
"Recently, it was observed that spatially-coupled LDPC code ensembles approach the Shannon capacity for a class of binary-input memoryless symmetric (BMS) channels. The fundamental reason for this was attributed to a \"threshold saturation\" phenomena derived by Kudekar, Richardson and Urbanke. In particular, it was shown that the belief propagation (BP) threshold of the spatially coupled codes is equal to the maximum a posteriori (MAP) decoding threshold of the underlying constituent codes. In this sense, the BP threshold is saturated to its maximum value. Moreover, it has been empirically observed that the same phenomena also occurs when transmitting over more general classes of BMS channels. In this paper, we show that the effect of spatial coupling is not restricted to the realm of channel coding. The effect of coupling also manifests itself in compressed sensing. Specifically, we show that spatially-coupled measurement matrices have an improved sparsity to sampling threshold for reconstruction algorithms based on verification decoding. For BP-based reconstruction algorithms, this phenomenon is also tested empirically via simulation. At the block lengths accessible via simulation, the effect is quite small and it seems that spatial coupling is not providing the gains one might expect. Based on the threshold analysis, however, we believe this warrants further study.",
"We present a class of convolutional codes defined by a low-density parity-check matrix and an iterative algorithm for decoding these codes. The performance of this decoding is close to the performance of turbo decoding. Our simulation shows that for the rate R=1 2 binary codes, the performance is substantially better than for ordinary convolutional codes with the same decoding complexity per information bit. As an example, we constructed convolutional codes with memory M=1025, 2049, and 4097 showing that we are about 1 dB from the capacity limit at a bit-error rate (BER) of 10 sup -5 and a decoding complexity of the same magnitude as a Viterbi decoder for codes having memory M=10.",
"A method is developed for representing any communication system geometrically. Messages and the corresponding signals are points in two \"function spaces,\" and the modulation process is a mapping of one space into the other. Using this representation, a number of results in communication theory are deduced concerning expansion and compression of bandwidth and the threshold effect. Formulas are found for the maximum rate of transmission of binary digits over a system when the signal is perturbed by various types of noise. Some of the properties of \"ideal\" systems which transmit at this maximum rate are discussed The equivalent number of binary digits per second for certain information sources is calculated.",
"Convolutional low-density parity-check (LDPC) ensembles, introduced by Felstrom and Zigangirov, have excellent thresholds and these thresholds are rapidly increasing functions of the average degree. Several variations on the basic theme have been proposed to date, all of which share the good performance characteristics of convolutional LDPC ensembles. We describe the fundamental mechanism that explains why “convolutional-like” or “spatially coupled” codes perform so well. In essence, the spatial coupling of individual codes increases the belief-propagation (BP) threshold of the new ensemble to its maximum possible value, namely the maximum a posteriori (MAP) threshold of the underlying ensemble. For this reason, we call this phenomenon “threshold saturation.” This gives an entirely new way of approaching capacity. One significant advantage of this construction is that one can create capacity-approaching ensembles with an error correcting radius that is increasing in the blocklength. Although we prove the “threshold saturation” only for a specific ensemble and for the binary erasure channel (BEC), empirically the phenomenon occurs for a wide class of ensembles and channels. More generally, we conjecture that for a large range of graphical systems a similar saturation of the “dynamical” threshold occurs once individual components are coupled sufficiently strongly. This might give rise to improved algorithms and new techniques for analysis."
]
}
|
1202.2525
|
2951895902
|
We study the problem of sampling a random signal with sparse support in frequency domain. Shannon famously considered a scheme that instantaneously samples the signal at equispaced times. He proved that the signal can be reconstructed as long as the sampling rate exceeds twice the bandwidth (Nyquist rate). Candès, Romberg and Tao introduced a scheme that acquires instantaneous samples of the signal at random times. They proved that the signal can be uniquely and efficiently reconstructed, provided the sampling rate exceeds the frequency support of the signal, times logarithmic factors. In this paper we consider a probabilistic model for the signal, and a sampling scheme inspired by the idea of spatial coupling in coding theory. Namely, we propose to acquire non-instantaneous samples at random times. Mathematically, this is implemented by acquiring a small random subset of Gabor coefficients. We show empirically that this scheme achieves correct reconstruction as soon as the sampling rate exceeds the frequency support of the signal, thus reaching the information theoretic limit.
|
The results of @cite_0 were based on statistical mechanics methods and numerical simulations. A rigorous proof was provided in @cite_11 using approximate message passing (AMP) algorithms @cite_4 and the analysis tools provided by state evolution @cite_4 @cite_12 . Indeed, @cite_11 proved a more general result. Consider a sequence of signals @math indexed by the problem dimensions @math , and such that the empirical law of the entries of @math , @math , converges weakly to a limit @math with bounded second moment. Then, spatially-coupled sensing matrices under AMP reconstruction achieve (with high probability) robust recovery of @math , as long as the number of measurements is @math . Here @math is the (upper) Rényi information dimension of the probability distribution @math . This quantity first appeared in connection with compressed sensing in the work of Wu and Verdú @cite_2 . Taking an information-theoretic viewpoint, Wu and Verdú proved that the Rényi information dimension is the fundamental limit for analog compression.
|
{
"cite_N": [
"@cite_4",
"@cite_0",
"@cite_2",
"@cite_12",
"@cite_11"
],
"mid": [
"2082029531",
"2073868986",
"1986051087",
"2610971674",
"2571527823"
],
"abstract": [
"Compressed sensing aims to undersample certain high-dimensional signals yet accurately reconstruct them by exploiting signal characteristics. Accurate reconstruction is possible when the object to be recovered is sufficiently sparse in a known basis. Currently, the best known sparsity–undersampling tradeoff is achieved when reconstructing by convex optimization, which is expensive in important large-scale applications. Fast iterative thresholding algorithms have been intensively studied as alternatives to convex optimization for large-scale problems. Unfortunately known fast algorithms offer substantially worse sparsity–undersampling tradeoffs than convex optimization. We introduce a simple costless modification to iterative thresholding making the sparsity–undersampling tradeoff of the new algorithms equivalent to that of the corresponding convex optimization procedures. The new iterative-thresholding algorithms are inspired by belief propagation in graphical models. Our empirical measurements of the sparsity–undersampling tradeoff for the new algorithms agree with theoretical calculations. We show that a state evolution formalism correctly derives the true sparsity–undersampling tradeoff. There is a surprising agreement between earlier calculations based on random convex polytopes and this apparently very different theoretical formalism.",
"Compressed sensing is triggering a major evolution in signal acquisition. It consists in sampling a sparse signal at low rate and later using computational power for its exact reconstruction, so that only the necessary information is measured. Currently used reconstruction techniques are, however, limited to acquisition rates larger than the true density of the signal. We design a new procedure which is able to reconstruct exactly the signal with a number of measurements that approaches the theoretical limit in the limit of large systems. It is based on the joint use of three essential ingredients: a probabilistic approach to signal reconstruction, a message-passing algorithm adapted from belief propagation, and a careful design of the measurement matrix inspired from the theory of crystal nucleation. The performance of this new algorithm is analyzed by statistical physics methods. The obtained improvement is confirmed by numerical studies of several cases.",
"In Shannon theory, lossless source coding deals with the optimal compression of discrete sources. Compressed sensing is a lossless coding strategy for analog sources by means of multiplication by real-valued matrices. In this paper we study almost lossless analog compression for analog memoryless sources in an information-theoretic framework, in which the compressor or decompressor is constrained by various regularity conditions, in particular linearity of the compressor and Lipschitz continuity of the decompressor. The fundamental limit is shown to be the information dimension proposed by Rényi in 1959.",
"“Approximate message passing” (AMP) algorithms have proved to be effective in reconstructing sparse signals from a small number of incoherent linear measurements. Extensive numerical experiments further showed that their dynamics is accurately tracked by a simple one-dimensional iteration termed state evolution. In this paper, we provide rigorous foundation to state evolution. We prove that indeed it holds asymptotically in the large system limit for sensing matrices with independent and identically distributed Gaussian entries. While our focus is on message passing algorithms for compressed sensing, the analysis extends beyond this setting, to a general class of algorithms on dense graphs. In this context, state evolution plays the role that density evolution has for sparse graphs. The proof technique is fundamentally different from the standard approach to density evolution, in that it copes with a large number of short cycles in the underlying factor graph. It relies instead on a conditioning technique recently developed by Erwin Bolthausen in the context of spin glass theory.",
"We study the compressed sensing reconstruction problem for a broad class of random, band-diagonal sensing matrices. This construction is inspired by the idea of spatial coupling in coding theory. As demonstrated heuristically and numerically by Krzakala [30], message passing algorithms can effectively solve the reconstruction problem for spatially coupled measurements with undersampling rates close to the fraction of nonzero coordinates. We use an approximate message passing (AMP) algorithm and analyze it through the state evolution method. We give a rigorous proof that this approach is successful as soon as the undersampling rate δ exceeds the (upper) Renyi information dimension of the signal, d(pX). More precisely, for a sequence of signals of diverging dimension n whose empirical distribution converges to pX, reconstruction is with high probability successful from d(pX) n+o(n) measurements taken according to a band diagonal matrix. For sparse signals, i.e., sequences of dimension n and k(n) nonzero entries, this implies reconstruction from k(n)+o(n) measurements. For “discrete” signals, i.e., signals whose coordinates take a fixed finite set of values, this implies reconstruction from o(n) measurements. The result is robust with respect to noise, does not apply uniquely to random signals, but requires the knowledge of the empirical distribution of the signal pX."
]
}
|
1202.2770
|
1931611617
|
The problem of neural network association is to retrieve a previously memorized pattern from its noisy version using a network of neurons. An ideal neural network should include three components simultaneously: a learning algorithm, a large pattern retrieval capacity and resilience against noise. Prior works in this area usually improve one or two aspects at the cost of the third. Our work takes a step forward in closing this gap. More specifically, we show that by forcing natural constraints on the set of learning patterns, we can drastically improve the retrieval capacity of our neural network. Moreover, we devise a learning algorithm whose role is to learn those patterns satisfying the above mentioned constraints. Finally we show that our neural network can cope with a fair amount of noise.
|
Designing a neural network capable of learning a set of patterns and recalling them later in the presence of noise has been an active topic of research for the past three decades. Inspired by the Hebbian learning rule @cite_12 , Hopfield in his seminal work @cite_0 introduced the Hopfield network: an auto-associative neural mechanism of size @math with binary-state neurons, in which patterns are assumed to be binary vectors of length @math . The capacity of a Hopfield network under a vanishing bit error probability was later shown to be @math by @cite_9 . Later on, it was proved that the capacity of Hopfield networks under a vanishing block error probability requirement is @math @cite_19 . Similar results were obtained for sparse regular neural networks in @cite_14 . It is also known that the capacity of neural associative memories could be enhanced if the patterns are sparse, in the sense that at any time instant many of the neurons are silent @cite_15 . However, even these schemes fail when required to correct a fair amount of erroneous bits, as their information retrieval is no better than that of normal networks.
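The classical setup described in this passage can be sketched in a few lines: outer-product (Hebbian) learning plus sign updates. Sizes, seeds, and noise levels below are illustrative assumptions, not values from any cited work:

```python
import numpy as np

def hebbian_weights(patterns):
    """Outer-product (Hebbian) rule: W = (1/n) * sum_mu x_mu x_mu^T, zero diagonal.

    `patterns` is an (m, n) array of +/-1 patterns, one per row."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def hopfield_recall(W, probe, steps=50):
    """Iterate synchronous sign updates until a fixed point or the step budget."""
    s = probe.copy()
    for _ in range(steps):
        s_next = np.where(W @ s >= 0, 1, -1)
        if np.array_equal(s_next, s):
            break
        s = s_next
    return s
```

With the number of stored patterns well below the classical capacity threshold, a probe with a few flipped bits falls back to the stored pattern.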
|
{
"cite_N": [
"@cite_14",
"@cite_9",
"@cite_0",
"@cite_19",
"@cite_15",
"@cite_12"
],
"mid": [
"2158780150",
"2080792322",
"2128084896",
"1984431842",
"2133671888",
"22297218"
],
"abstract": [
"The authors investigate how good connectivity properties translate into good error-correcting behavior in sparse networks of threshold elements. They determine how the eigenvalues of the interconnection graph (which in turn reflect connectivity properties) relate to the quantities: number of items stored, amount of error-correction, radius of attraction, and rate of convergence in an associative memory model consisting of a sparse network of threshold elements or neurons.",
"The Hopfield model for a neural network is studied in the limit when the number @math of stored patterns increases with the size @math of the network, as @math . It is shown that, despite its spin-glass features, the model exhibits associative memory for @math , @math . This is a result of the existence at low temperature of @math dynamically stable degenerate states, each of which is almost fully correlated with one of the patterns. These states become ground states at @math . The phase diagram of this rich spin-glass is described.",
"Computational properties of use to biological organisms or to the construction of computers can emerge as collective properties of systems having a large number of simple equivalent components (or neurons). The physical meaning of content-addressable memory is described by an appropriate phase space flow of the state of a system. A model of such a system is given, based on aspects of neurobiology but readily adapted to integrated circuits. The collective properties of this model produce a content-addressable memory which correctly yields an entire memory from any subpart of sufficient size. The algorithm for the time evolution of the state of the system is based on asynchronous parallel processing. Additional emergent collective properties include some capacity for generalization, familiarity recognition, categorization, error correction, and time sequence retention. The collective properties are only weakly sensitive to details of the modeling or the failure of individual devices.",
"Techniques from coding theory are applied to study rigorously the capacity of the Hopfield associative memory. Such a memory stores n-tuples of ±1's. The components change depending on a hard-limited version of linear functions of all other components. With symmetric connections between components, a stable state is ultimately reached. By building up the connection matrix as a sum-of-outer products of m fundamental memories, one hopes to be able to recover a certain one of the m memories by using an initial n-tuple probe vector less than a Hamming distance n/2 away from the fundamental memory. If m fundamental memories are chosen at random, the maximum asymptotic value of m in order that most of the m original memories are exactly recoverable is n/(2 log n). With the added restriction that every one of the m fundamental memories be recoverable exactly, m can be no more than n/(4 log n) asymptotically as n approaches infinity. Extensions are also considered, in particular to capacity under quantization of the outer-product connection matrix. This quantized memory capacity problem is closely related to the capacity of the quantized Gaussian channel.",
"From the Publisher: This book is a comprehensive introduction to the neural network models currently under intensive study for computational applications. It is a detailed, logically-developed treatment that covers the theory and uses of collective computational networks, including associative memory, feed forward networks, and unsupervised learning. It also provides coverage of neural network applications in a variety of problems of both theoretical and practical interest.",
""
]
}
|
1202.2770
|
1931611617
|
The problem of neural network association is to retrieve a previously memorized pattern from its noisy version using a network of neurons. An ideal neural network should include three components simultaneously: a learning algorithm, a large pattern retrieval capacity and resilience against noise. Prior works in this area usually improve one or two aspects at the cost of the third. Our work takes a step forward in closing this gap. More specifically, we show that by forcing natural constraints on the set of learning patterns, we can drastically improve the retrieval capacity of our neural network. Moreover, we devise a learning algorithm whose role is to learn those patterns satisfying the above mentioned constraints. Finally we show that our neural network can cope with a fair amount of noise.
|
In addition to neural networks capable of learning patterns gradually, in @cite_18 the authors calculate the weight matrix offline (as opposed to gradual learning) using the pseudo-inverse rule @cite_15 , which in turn helps them improve the capacity of a Hopfield network to @math while retaining the ability to correct errors.
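The pseudo-inverse (projection) rule mentioned here can be sketched as follows: the weight matrix is the orthogonal projector onto the span of the stored patterns, so every stored pattern is an exact fixed point of the sign dynamics. This is a bare-bones sketch of the rule itself, not the full scheme of @cite_18:

```python
import numpy as np

def projection_weights(patterns):
    """Pseudo-inverse rule: with X holding one pattern per column,
    W = X X^+ projects onto span(X), so W X = X exactly."""
    X = patterns.T                    # n x m, one pattern per column
    return X @ np.linalg.pinv(X)
```

Because W x_mu = x_mu holds exactly (for linearly independent patterns), sign(W x_mu) = x_mu, with no crosstalk term of the kind that limits the Hebbian rule.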
|
{
"cite_N": [
"@cite_18",
"@cite_15"
],
"mid": [
"1964142240",
"2133671888"
],
"abstract": [
"A model of associative memory incorporating global linearity and pointwise nonlinearities in a state space of n-dimensional binary vectors is considered. Attention is focused on the ability to store a prescribed set of state vectors as attractors within the model. Within the framework of such associative nets, a specific strategy for information storage that utilizes the spectrum of a linear operator is considered in some detail. Comparisons are made between this spectral strategy and a prior scheme that utilizes the sum of Kronecker outer products of the prescribed set of state vectors, which are to function nominally as memories. The storage capacity of the spectral strategy is linear in n (the dimension of the state space under consideration), whereas an asymptotic result of n/(4 log n) holds for the storage capacity of the outer product scheme. Computer-simulated results show that the spectral strategy stores information more efficiently. The preprocessing costs incurred in the two algorithms are estimated, and recursive strategies are developed for their computation.",
"From the Publisher: This book is a comprehensive introduction to the neural network models currently under intensive study for computational applications. It is a detailed, logically-developed treatment that covers the theory and uses of collective computational networks, including associative memory, feed forward networks, and unsupervised learning. It also provides coverage of neural network applications in a variety of problems of both theoretical and practical interest."
]
}
|
1202.2770
|
1931611617
|
The problem of neural network association is to retrieve a previously memorized pattern from its noisy version using a network of neurons. An ideal neural network should include three components simultaneously: a learning algorithm, a large pattern retrieval capacity and resilience against noise. Prior works in this area usually improve one or two aspects at the cost of the third. Our work takes a step forward in closing this gap. More specifically, we show that by forcing natural constraints on the set of learning patterns, we can drastically improve the retrieval capacity of our neural network. Moreover, we devise a learning algorithm whose role is to learn those patterns satisfying the above mentioned constraints. Finally we show that our neural network can cope with a fair amount of noise.
|
Due to the low capacity of Hopfield networks, extensions of associative memories to non-binary neural models have also been explored in the past. Hopfield addressed the case of continuous neurons and showed that, similar to the binary case, neurons with states between @math and @math can memorize a set of random patterns, albeit with less capacity @cite_11 . In @cite_8 the authors investigated a multi-state complex-valued neural associative memory, for which the estimated capacity is @math . Under the same model but using a different learning method, @cite_6 showed that the capacity can be increased to @math . However, the complexity of the weight computation mechanism is prohibitive. To overcome this drawback, a Modified Gradient Descent learning Rule (MGDR) was devised in @cite_7 .
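The multi-state complex-valued model referred to here can be sketched minimally: neuron states are K-th roots of unity, updated by quantizing the phase of the weighted input. The Hebbian-style rule below is a simple illustrative variant, not the exact learning rules of @cite_8 or @cite_6:

```python
import numpy as np

def multistate_hebbian(patterns):
    """Generalized Hebbian rule for unit-modulus complex patterns:
    W = (1/n) * sum_mu x_mu x_mu^H (illustrative variant)."""
    n = patterns.shape[1]
    return patterns.T @ patterns.conj() / n

def csign(u, K):
    """Quantize each complex value to the nearest of the K K-th roots of unity."""
    phase = np.round(np.angle(u) * K / (2 * np.pi)) % K
    return np.exp(2j * np.pi * phase / K)

def multistate_update(W, s, K):
    """One synchronous update of the multistate network."""
    return csign(W @ s, K)
```

For a single stored pattern the update is exact, since W x = x (x^H x)/n = x before quantization; crosstalk between multiple patterns is what limits the capacity estimates quoted in the text.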
|
{
"cite_N": [
"@cite_8",
"@cite_7",
"@cite_6",
"@cite_11"
],
"mid": [
"2126944419",
"2123113196",
"2161780063",
"2112246162"
],
"abstract": [
"A model of a multivalued associative memory is presented. This memory has the form of a fully connected attractor neural network composed of multistate complex-valued neurons. Such a network is able to perform the task of storing and recalling gray-scale images. It is also shown that the complex-valued fully connected neural network may be considered as a generalization of a Hopfield network containing real-valued neurons. A computational energy function is introduced and evaluated in order to prove network stability for asynchronous dynamics. Storage capacity as related to the number of accessible neuron states is also estimated.",
"In this letter, new design methods for the complex-valued multistate Hopfield associative memories (CVHAMs) are presented. We show that the well-known projection rule proposed by Personnaz can be generalized to complex domain such that the weight matrix of the CVHAM can be designed by using a simple and effective method. The stability of the proposed CVHAM is analyzed by using energy function approach which shows that in synchronous update mode the proposed model is guaranteed to converge to a fixed point from any given initial state. Moreover, the projection geometry of the generalized projection rule (GPR) is discussed. In order to enhance the recall capability, a strategy of eliminating the spurious memories is also reported. The validity and the performance of the proposed methods are investigated by computer simulation",
"A method to store each element of an integral memory set M ⊂ {1,2,...,K}^n as a fixed point into a complex-valued multistate Hopfield network is introduced. The method employs a set of inequalities to render each memory pattern as a strict local minimum of a quadratic energy landscape. Based on the solution of this system, it gives a recurrent network of n multistate neurons with complex and symmetric synaptic weights, which operates on the finite state space {1,2,...,K}^n to minimize this quadratic functional. The maximum number of integral vectors that can be embedded into the energy landscape of the network by this method is investigated by computer experiments. This paper also enlightens the performance of the proposed method in reconstructing noisy gray-scale images.",
"A model for a large network of \"neurons\" with a graded response (or sigmoid input-output relation) is studied. This deterministic system has collective properties in very close correspondence with the earlier stochastic model based on McCulloch-Pitts neurons. The content-addressable memory and other emergent collective properties of the original model also are present in the graded response model. The idea that such collective properties are used in biological systems is given added credence by the continued presence of such properties for more nearly biological \"neurons.\" Collective analog electrical circuits of the kind described will certainly function. The collective states of the two models have a simple correspondence. The original model will continue to be useful for simulations, because its connection to graded response systems is established. Equations that include the effect of action potentials in the graded response system are also developed."
]
}
|
1202.2770
|
1931611617
|
The problem of neural network association is to retrieve a previously memorized pattern from its noisy version using a network of neurons. An ideal neural network should include three components simultaneously: a learning algorithm, a large pattern retrieval capacity and resilience against noise. Prior works in this area usually improve one or two aspects at the cost of the third. Our work takes a step forward in closing this gap. More specifically, we show that by forcing natural constraints on the set of learning patterns, we can drastically improve the retrieval capacity of our neural network. Moreover, we devise a learning algorithm whose role is to learn those patterns satisfying the above mentioned constraints. Finally we show that our neural network can cope with a fair amount of noise.
|
Given that even very complex offline learning methods cannot improve the capacity of binary or multi-state Hopfield networks, a line of recent work has made considerable efforts to exploit the inherent structure of the patterns in order to increase both capacity and error-correction capabilities. Such methods either make use of higher-order correlations of patterns or focus merely on those patterns that have some sort of redundancy. As a result, they differ from previous methods, for which every possible random set of patterns was considered. Pioneering this prospect, Berrou and Gripon @cite_5 achieved considerable improvements in the pattern retrieval capacity of Hopfield networks by utilizing clique-based coding. In some cases, the proposed approach results in capacities of around @math , which is much larger than the @math of other methods. In @cite_17 , the authors used low-correlation sequences similar to those employed in CDMA communications to increase the storage capacity of Hopfield networks to @math without requiring any separate decoding stage.
|
{
"cite_N": [
"@cite_5",
"@cite_17"
],
"mid": [
"2121160181",
"2136594928"
],
"abstract": [
"Coded recurrent neural networks with three levels of sparsity are introduced. The first level is related to the size of messages that are much smaller than the number of available neurons. The second one is provided by a particular coding rule, acting as a local constraint in the neural activity. The third one is a characteristic of the low final connection density of the network after the learning phase. Though the proposed network is very simple since it is based on binary neurons and binary connections, it is able to learn a large number of messages and recall them, even in presence of strong erasures. The performance of the network is assessed as a classifier and as an associative memory.",
"We consider the problem of neural association, which deals with the retrieval of a previously memorized pattern from its noisy version. The performance of various neural networks developed for this task may be judged in terms of their pattern retrieval capacities (the number of patterns that can be stored), and their error-correction (noise tolerance) capabilities. While significant progress has been made, most prior works in this area show poor performance with regard to pattern retrieval capacity and or error correction. In this paper, we propose two new methods to significantly increase the pattern retrieval capacity of the Hopfield and Bidirectional Associative Memories (BAM). The main idea is to store patterns drawn from a family of low correlation sequences, similar to those used in Code Division Multiple Access (CDMA) communications, instead of storing purely random patterns as in prior works. These low correlation patterns can be obtained from random sequences by pre-coding the original sequences via simple operations that both real and artificial neurons are capable of accomplishing."
]
}
|
1202.2770
|
1931611617
|
The problem of neural network association is to retrieve a previously memorized pattern from its noisy version using a network of neurons. An ideal neural network should include three components simultaneously: a learning algorithm, a large pattern retrieval capacity and resilience against noise. Prior works in this area usually improve one or two aspects at the cost of the third. Our work takes a step forward in closing this gap. More specifically, we show that by forcing natural constraints on the set of learning patterns, we can drastically improve the retrieval capacity of our neural network. Moreover, we devise a learning algorithm whose role is to learn those patterns satisfying the above mentioned constraints. Finally we show that our neural network can cope with a fair amount of noise.
|
In contrast to the pairwise correlations of the Hopfield model @cite_0 , @cite_21 deployed higher-order neural models: the state of a neuron depends not only on the state of its neighbors, but also on the correlations among them. Under this model, they showed that the storage capacity of a higher-order Hopfield network can be improved to @math , where @math is the degree of the correlations considered. The main drawback of this model was again the huge computational complexity required in the learning phase. To address this difficulty while still capturing higher-order correlations, a bipartite graph inspired by iterative coding theory was introduced in @cite_10 . Under the assumptions that the bipartite graph is known, sparse, and an expander, the proposed algorithm increased the pattern retrieval capacity to @math , for some @math . The main drawbacks of the proposed approach are the lack of a learning algorithm and the assumption that the weight matrix should be an expander. The sparsity criterion, on the other hand, as noted by the authors, is necessary in the recall phase and is biologically more meaningful.
|
{
"cite_N": [
"@cite_0",
"@cite_21",
"@cite_10"
],
"mid": [
"2128084896",
"2040870209",
"2150610158"
],
"abstract": [
"Computational properties of use to biological organisms or to the construction of computers can emerge as collective properties of systems having a large number of simple equivalent components (or neurons). The physical meaning of content-addressable memory is described by an appropriate phase space flow of the state of a system. A model of such a system is given, based on aspects of neurobiology but readily adapted to integrated circuits. The collective properties of this model produce a content-addressable memory which correctly yields an entire memory from any subpart of sufficient size. The algorithm for the time evolution of the state of the system is based on asynchronous parallel processing. Additional emergent collective properties include some capacity for generalization, familiarity recognition, categorization, error correction, and time sequence retention. The collective properties are only weakly sensitive to details of the modeling or the failure of individual devices.",
"Quantitative expressions of long-term memory storage capacities of complex neural network are derived. The networks are made of neurons connected by synapses of any order, of the axono-axonal type considered by for example. The effect of link deletion possibly related to aging, is also considered. The central result of this study is that, within the framework of Hebb's laws, the number of stored bits is proportional to the number of synapses. The proportionality factor however, decreases when the order of involved synaptic contact increases. This tends to favor neural architectures with low-order synaptic connectivities. It is finally shown that the memory storage capacities can be optimized by a partition of the network into neuron clusters with size comparable with that observed for cortical microcolumns.",
"We consider the problem of neural association for a network of non-binary neurons. Here, the task is to recall a previously memorized pattern from its noisy version using a network of neurons whose states assume values from a finite number of non-negative integer levels. Prior works in this area consider storing a finite number of purely random patterns, and have shown that the pattern retrieval capacities (maximum number of patterns that can be memorized) scale only linearly with the number of neurons in the network. In our formulation of the problem, we consider storing patterns from a suitably chosen set of patterns, that are obtained by enforcing a set of simple constraints on the coordinates (such as those enforced in graph based codes). Such patterns may be generated from purely random information symbols by simple neural operations. Two simple neural update algorithms are presented, and it is shown that our proposed mechanisms result in a pattern retrieval capacity that is exponential in terms of the network size. Furthermore, using analytical results and simulations, we show that the suggested methods can tolerate a fair amount of errors in the input."
]
}
|
1202.2770
|
1931611617
|
The problem of neural network association is to retrieve a previously memorized pattern from its noisy version using a network of neurons. An ideal neural network should include three components simultaneously: a learning algorithm, a large pattern retrieval capacity and resilience against noise. Prior works in this area usually improve one or two aspects at the cost of the third. Our work takes a step forward in closing this gap. More specifically, we show that by forcing natural constraints on the set of learning patterns, we can drastically improve the retrieval capacity of our neural network. Moreover, we devise a learning algorithm whose role is to learn those patterns satisfying the above mentioned constraints. Finally we show that our neural network can cope with a fair amount of noise.
|
In this paper, we focus on solving the above two problems of @cite_10 . We start by proposing an iterative learning algorithm that identifies a weight matrix @math . The weight matrix @math should satisfy a set of linear constraints @math for all patterns @math in the training data set, where @math . We then propose a novel network architecture which eliminates the need for the expansion criterion while achieving better performance than the error-correction algorithm proposed in @cite_10 .
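To make the constraint set concrete: if all training patterns lie in a subspace, the rows of a valid weight matrix span its orthogonal complement. The sketch below recovers such a matrix offline via an SVD, purely to illustrate what the proposed iterative learning algorithm is after; the paper's algorithm itself is iterative and neural, which this sketch is not:

```python
import numpy as np

def constraint_matrix(patterns, tol=1e-10):
    """Return W whose rows span the orthogonal complement of the pattern
    subspace, so that W @ x = 0 for every training pattern x.

    `patterns` is a (C, n) array with one training pattern per row."""
    _, s, Vt = np.linalg.svd(patterns, full_matrices=True)
    rank = int(np.sum(s > tol))
    return Vt[rank:]          # (n - rank) rows of constraints
```

Each returned row is one learned linear constraint; stacking them gives the weight matrix whose null space contains the whole training set.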
|
{
"cite_N": [
"@cite_10"
],
"mid": [
"2150610158"
],
"abstract": [
"We consider the problem of neural association for a network of non-binary neurons. Here, the task is to recall a previously memorized pattern from its noisy version using a network of neurons whose states assume values from a finite number of non-negative integer levels. Prior works in this area consider storing a finite number of purely random patterns, and have shown that the pattern retrieval capacities (maximum number of patterns that can be memorized) scale only linearly with the number of neurons in the network. In our formulation of the problem, we consider storing patterns from a suitably chosen set of patterns, that are obtained by enforcing a set of simple constraints on the coordinates (such as those enforced in graph based codes). Such patterns may be generated from purely random information symbols by simple neural operations. Two simple neural update algorithms are presented, and it is shown that our proposed mechanisms result in a pattern retrieval capacity that is exponential in terms of the network size. Furthermore, using analytical results and simulations, we show that the suggested methods can tolerate a fair amount of errors in the input."
]
}
|
1202.2770
|
1931611617
|
The problem of neural network association is to retrieve a previously memorized pattern from its noisy version using a network of neurons. An ideal neural network should include three components simultaneously: a learning algorithm, a large pattern retrieval capacity and resilience against noise. Prior works in this area usually improve one or two aspects at the cost of the third. Our work takes a step forward in closing this gap. More specifically, we show that by forcing natural constraints on the set of learning patterns, we can drastically improve the retrieval capacity of our neural network. Moreover, we devise a learning algorithm whose role is to learn those patterns satisfying the above mentioned constraints. Finally we show that our neural network can cope with a fair amount of noise.
|
Constructing a factor-graph model for neural associative memory has also been addressed in @cite_2 . There, however, the authors propose a general message-passing algorithm to memorize any set of random patterns, while we focus on memorizing patterns belonging to subspaces, with sparsity in mind as well. The difference is again apparent in the pattern retrieval capacity (linear vs. exponential in the network size).
|
{
"cite_N": [
"@cite_2"
],
"mid": [
"2085138324"
],
"abstract": [
"We show that a message-passing process allows us to store in binary \"material\" synapses a number of random patterns which almost saturates the information-theoretic bounds. We apply the learning algorithm to networks characterized by a wide range of different connection topologies and of size comparable with that of biological systems (e.g., n = 10^5-10^6). The algorithm can be turned into an online, fault-tolerant learning protocol of potential interest in modeling aspects of synaptic plasticity and in building neuromorphic devices"
]
}
|
1202.2770
|
1931611617
|
The problem of neural network association is to retrieve a previously memorized pattern from its noisy version using a network of neurons. An ideal neural network should include three components simultaneously: a learning algorithm, a large pattern retrieval capacity and resilience against noise. Prior works in this area usually improve one or two aspects at the cost of the third. Our work takes a step forward in closing this gap. More specifically, we show that by forcing natural constraints on the set of learning patterns, we can drastically improve the retrieval capacity of our neural network. Moreover, we devise a learning algorithm whose role is to learn those patterns satisfying the above mentioned constraints. Finally we show that our neural network can cope with a fair amount of noise.
|
Learning linear constraints with a neural network is hardly a new topic, as one can learn a matrix orthogonal to a set of patterns in the training set (i.e., @math ) using simple neural learning rules (we refer the interested reader to @cite_3 and @cite_4 ). However, to the best of our knowledge, finding such a matrix subject to sparsity constraints has not been investigated before. This problem can also be regarded as an instance of compressed sensing @cite_20 , in which the measurement matrix is given by the patterns matrix @math and the measurements are the constraints we seek to satisfy, denoted by the tall vector @math , which for simplicity we assume to be all zero. Thus, we are interested in finding a sparse vector @math such that @math .
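A toy illustration of this compressed-sensing view: alternate soft-thresholding (to promote sparsity) with projection onto the null space of the pattern matrix (to keep the constraints satisfied). This is a heuristic sketch under illustrative assumptions, not the algorithm developed in the paper:

```python
import numpy as np

def sparse_null_vector(X, lam=0.05, iters=200, seed=0):
    """Heuristically search for a sparse unit vector w with X @ w = 0."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    P = np.eye(n) - np.linalg.pinv(X) @ X        # projector onto null(X)
    w = P @ rng.standard_normal(n)
    w /= np.linalg.norm(w)
    for _ in range(iters):
        w = np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)  # sparsify
        w = P @ w                                           # restore X w = 0
        nrm = np.linalg.norm(w)
        if nrm < 1e-12:
            break
        w /= nrm
    return w
```

When the null space of X actually contains a sparse direction, iterations of this kind tend to lock onto it; the normalization step rules out the trivial solution w = 0.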
|
{
"cite_N": [
"@cite_20",
"@cite_4",
"@cite_3"
],
"mid": [
"2129638195",
"1718034124",
"2002554627"
],
"abstract": [
"Suppose we are given a vector f in a class FsubeRopfN , e.g., a class of digital signals or digital images. How many linear measurements do we need to make about f to be able to recover f to within precision epsi in the Euclidean (lscr2) metric? This paper shows that if the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct f to within very high accuracy from a small number of random measurements by solving a simple linear program. More precisely, suppose that the nth largest entry of the vector |f| (or of its coefficients in a fixed basis) obeys |f|(n)lesRmiddotn-1p , where R>0 and p>0. Suppose that we take measurements yk=langf# ,Xkrang,k=1,...,K, where the Xk are N-dimensional Gaussian vectors with independent standard normal entries. Then for each f obeying the decay estimate above for some 0<p<1 and with overwhelming probability, our reconstruction ft, defined as the solution to the constraints yk=langf# ,Xkrang with minimal lscr1 norm, obeys parf-f#parlscr2lesCp middotRmiddot(K logN)-r, r=1 p-1 2. There is a sense in which this result is optimal; it is generally impossible to obtain a higher accuracy from any set of K measurements whatsoever. The methodology extends to various other random measurement ensembles; for example, we show that similar results hold if one observes a few randomly sampled Fourier coefficients of f. In fact, the results are quite general and require only two hypotheses on the measurement ensemble which are detailed",
"Vector subspaces have been suggested for representations of structured information. In the theory of associative memory and associative information processing, the projection principle and subspaces are used in explaining the optimality of associative mappings and novelty filters. These formalisms seem to be very pertinent to neural networks, too. Based on these operations, the subspace method has been developed for a practical pattern-recognition algorithm. The method is reviewed, and some recent results on image analysis are given. >",
"A new modification of the subspace pattern recognition method, called the dual subspace pattern recognition (DSPR) method, is proposed, and neural network models combining both constrained Hebbian and anti-Hebbian learning rules are developed for implementing the DSPR method. An experimental comparison is made by using our model and a three-layer forward net with backpropagation learning. The results illustrate that our model can outperform the backpropagation model in suitable applications."
]
}
|
1202.2770
|
1931611617
|
The problem of neural network association is to retrieve a previously memorized pattern from its noisy version using a network of neurons. An ideal neural network should include three components simultaneously: a learning algorithm, a large pattern retrieval capacity and resilience against noise. Prior works in this area usually improve one or two aspects at the cost of the third. Our work takes a step forward in closing this gap. More specifically, we show that by forcing natural constraints on the set of learning patterns, we can drastically improve the retrieval capacity of our neural network. Moreover, we devise a learning algorithm whose role is to learn those patterns satisfying the above mentioned constraints. Finally we show that our neural network can cope with a fair amount of noise.
|
Nevertheless, many decoders proposed in this area are very complicated and cannot be implemented by a neural network using simple neuron operations. Some exceptions are @cite_16 and @cite_13 from which we derive our learning algorithm.
|
{
"cite_N": [
"@cite_16",
"@cite_13"
],
"mid": [
"2082029531",
"2118297240"
],
"abstract": [
"Compressed sensing aims to undersample certain high-dimensional signals yet accurately reconstruct them by exploiting signal characteristics. Accurate reconstruction is possible when the object to be recovered is sufficiently sparse in a known basis. Currently, the best known sparsity–undersampling tradeoff is achieved when reconstructing by convex optimization, which is expensive in important large-scale applications. Fast iterative thresholding algorithms have been intensively studied as alternatives to convex optimization for large-scale problems. Unfortunately known fast algorithms offer substantially worse sparsity–undersampling tradeoffs than convex optimization. We introduce a simple costless modification to iterative thresholding making the sparsity–undersampling tradeoff of the new algorithms equivalent to that of the corresponding convex optimization procedures. The new iterative-thresholding algorithms are inspired by belief propagation in graphical models. Our empirical measurements of the sparsity–undersampling tradeoff for the new algorithms agree with theoretical calculations. We show that a state evolution formalism correctly derives the true sparsity–undersampling tradeoff. There is a surprising agreement between earlier calculations based on random convex polytopes and this apparently very different theoretical formalism.",
"The goal of the sparse approximation problem is to approximate a target signal using a linear combination of a few elementary signals drawn from a fixed collection. This paper surveys the major practical algorithms for sparse approximation. Specific attention is paid to computational issues, to the circumstances in which individual methods tend to perform well, and to the theoretical guarantees available. Many fundamental questions in electrical engineering, statistics, and applied mathematics can be posed as sparse approximation problems, making these algorithms versatile and relevant to a plethora of applications."
]
}
|
1202.2215
|
1527671337
|
We propose a stochastic model for the diffusion of topics entering a social network modeled by a Watts-Strogatz graph. Our model sets into play an implicit competition between these topics as they vie for the attention of users in the network. The dynamics of our model are based on notions taken from real-world OSNs like Twitter where users either adopt an exogenous topic or copy topics from their neighbors leading to endogenous propagation. When instantiated correctly, the model achieves a viral regime where a few topics garner unusually good response from the network, closely mimicking the behavior of real-world OSNs. Our main contribution is our description of how clusters of proximate users that have spoken on the topic merge to form a large giant component making a topic go viral. This demonstrates that it is not weak ties but actually strong ties that play a major part in virality. We further validate our model and our hypotheses about its behavior by comparing our simulation results with the results of a measurement study conducted on real data taken from Twitter.
|
The relevance of a local density of links in spreading a contagion has been studied in the past under the heading of complex propagation @cite_13 @cite_5 . These models show that when the simultaneous existence of multiple activators is required for a node to get infected, long-range links and randomness can slow down the propagation. The relevance of a clustered local community for sustenance has also been propounded by Young @cite_21 in the context of the spread of innovation. Goldenberg et al. @cite_20 come to a somewhat contradictory conclusion, claiming that strong and weak ties are equally important in product adoption through word-of-mouth.
|
{
"cite_N": [
"@cite_5",
"@cite_21",
"@cite_13",
"@cite_20"
],
"mid": [
"",
"2155796078",
"2016468753",
"1495750374"
],
"abstract": [
"",
"Social norms and institutions are mechanisms that facilitate coordination between individuals. A social innovation is a novel mechanism that increases the welfare of the individuals who adopt it compared with the status quo. We model the dynamics of social innovation as a coordination game played on a network. Individuals experiment with a novel strategy that would increase their payoffs provided that it is also adopted by their neighbors. The rate at which a social innovation spreads depends on three factors: the topology of the network and in particular the extent to which agents interact in small local clusters, the payoff gain of the innovation relative to the status quo, and the amount of noise in the best response process. The analysis shows that local clustering greatly enhances the speed with which social innovations spread. It also suggests that the welfare gains from innovation are more likely to occur in large jumps than in a series of small incremental improvements.",
"Random links between otherwise distant nodes can greatly facilitate the propagation of disease or information, provided contagion can be transmitted by a single active node. However, we show that when the propagation requires simultaneous exposure to multiple sources of activation, called complex propagation, the effect of random links can be just the opposite; it can make the propagation more difficult to achieve. We numerically calculate critical points for a threshold model using several classes of complex networks, including an empirical social network. We also provide an estimation of the critical values in terms of vulnerable nodes.",
"Though word-of-mouth (w-o-m) communications is a pervasive and intriguing phenomenon, little is known on its underlying process of personal communications. Moreover as marketers are getting more interested in harnessing the power of w-o-m, for e-business and other net related activities, the effects of the different communications types on macro level marketing is becoming critical. In particular we are interested in the breakdown of the personal communication between closer and stronger communications that are within an individual's own personal group (strong ties) and weaker and less personal communications that an individual makes with a wide set of other acquaintances and colleagues (weak ties)."
]
}
|
1202.0884
|
800097653
|
Internet users such as individuals and organizations are subject to different types of epidemic risks such as worms, viruses, spam, and botnets. To reduce the probability of risk, an Internet user generally invests in traditional security mechanisms like anti-virus and anti-spam software, sometimes also known as self-defense mechanisms. However, according to security experts, such software (and their subsequent advancements) will not completely eliminate risk. Recent research efforts have considered the problem of residual risk elimination by proposing the idea of cyber-insurance. In this regard, an important research problem is resolving information asymmetry issues associated with cyber-insurance contracts. In this paper we propose mechanisms to resolve information asymmetry in cyber-insurance. Our mechanisms are based on the principal-agent (PA) model in microeconomic theory. We show that (1) optimal cyber-insurance contracts induced by our mechanisms only provide partial coverage to the insureds. This ensures greater self-defense efforts on the part of the latter to protect their computing systems, which in turn increases overall network security, (2) the level of deductible per network user contract increases in a concave manner with the topological degree of the user, and (3) a market for cyber-insurance can be made to exist in the presence of monopolistic insurers under effective mechanism design. Our methodology is applicable to any distributed network scenario in which a framework for cyber-insurance can be implemented.
|
Improving upon existing related works, we take a first step in this direction and propose a formal model to resolve the information asymmetry problem in distributed communication networks. Assuming that cyber-insurance is made mandatory @cite_21 , our model enables existence of cyber-insurance markets, i.e., the existence of market equilibria, under non-ideal insurance environments. To the best of our knowledge, this is the first model of its kind specific to Internet and distributed network environments.
|
{
"cite_N": [
"@cite_21"
],
"mid": [
"91264121"
],
"abstract": [
"Recent works on Internet risk management have proposed the idea of cyber-insurance to eliminate risks due to security threats, which cannot be tackled through traditional means such as by using antivirus and antivirus softwares. In reality, an Internet user faces risks due to security attacks as well as risks due to non-security related failures (e.g., reliability faults in the form of hardware crash, buffer overflow, etc.). These risk types are often indistinguishable by a naive user. However, a cyber-insurance agency would most likely insure risks only due to security attacks. In this case, it becomes a challenge for an Internet user to choose the right type of cyber-insurance contract as traditional optimal contracts, i.e., contracts for security attacks only, might prove to be sub-optimal for himself. In this paper, we address the problem of analyzing cyber-insurance solutions when a user faces risks due to both, security as well as non-security related failures. We propose Aegis, a simple and novel cyber-insurance model in which the user accepts a fraction (strictly positive) of loss recovery on himself and transfers rest of the loss recovery on the cyber-insurance agency. We mathematically show that only under conditions when buying cyber-insurance is mandatory, given an option, risk-averse Internet users would prefer Aegis contracts to traditional cyber-insurance contracts, under all premium types. This result firmly establishes the non-existence of traditional cyber-insurance markets when Aegis contracts are offered to users. We also derive an interesting counterintuitive result related to the Aegis framework: we show that an increase(decrease) in the premium of an Aegis contract may not always lead to decrease(increase) in its user demand. In the process, we also state the conditions under which the latter trend and its converse emerge. 
Our work proposes a new model of cyber-insurance for Internet security that extends all previous related models by accounting for the extra dimension of non-insurable risks. Aegis also incentivizes Internet users to take up more personal responsibility for protecting their systems."
]
}
|
1202.1083
|
2949401610
|
We consider the convergence time for solving the binary consensus problem using the interval consensus algorithm proposed by Bénézit, Thiran and Vetterli (2009). In the binary consensus problem, each node initially holds one of two states and the goal for each node is to correctly decide which one of these two states was initially held by a majority of nodes. We derive an upper bound on the expected convergence time that holds for arbitrary connected graphs, which is based on the location of eigenvalues of some contact rate matrices. We instantiate our bound for particular networks of interest, including complete graphs, paths, cycles, star-shaped networks, and Erdős–Rényi random graphs; for these graphs, we compare our bound with alternative computations. We find that for all these examples our bound is tight, yielding the exact order with respect to the number of nodes. We pinpoint the fact that the expected convergence time critically depends on the voting margin, defined as the difference between the fraction of nodes that initially held the majority and the minority states, respectively. The characterization of the expected convergence time yields an exact relation between the expected convergence time and the voting margin for some of these graphs, which reveals how the expected convergence time goes to infinity as the voting margin approaches zero. Our results provide insights into how the expected convergence time depends on the network topology, which can be used for performance evaluation and network design. The results are of interest in the context of networked systems, in particular, peer-to-peer networks, sensor networks and distributed databases.
|
The work that is most closely related to ours is @cite_5 , where the authors showed that the so-called interval consensus algorithm guarantees correctness for arbitrary finite connected graphs. In particular, their work shows that for solving the binary consensus problem, only two bits of memory per agent suffice to guarantee convergence to the correct consensus in finite time, for every finite connected graph. Our work advances this line of work by establishing the first tight characterizations of the expected convergence time for binary interval consensus.
|
{
"cite_N": [
"@cite_5"
],
"mid": [
"2156267951"
],
"abstract": [
"We design distributed and quantized average consensus algorithms on arbitrary connected networks. By construction, quantized algorithms cannot produce a real, analog average. Instead, our algorithm reaches consensus on the quantized interval that contains the average. We prove that this consensus in reached in finite time almost surely. As a byproduct of this convergence result, we show that the majority voting problem is solvable with only 2 bits of memory per agent."
]
}
|
1202.1083
|
2949401610
|
We consider the convergence time for solving the binary consensus problem using the interval consensus algorithm proposed by Bénézit, Thiran and Vetterli (2009). In the binary consensus problem, each node initially holds one of two states and the goal for each node is to correctly decide which one of these two states was initially held by a majority of nodes. We derive an upper bound on the expected convergence time that holds for arbitrary connected graphs, which is based on the location of eigenvalues of some contact rate matrices. We instantiate our bound for particular networks of interest, including complete graphs, paths, cycles, star-shaped networks, and Erdős–Rényi random graphs; for these graphs, we compare our bound with alternative computations. We find that for all these examples our bound is tight, yielding the exact order with respect to the number of nodes. We pinpoint the fact that the expected convergence time critically depends on the voting margin, defined as the difference between the fraction of nodes that initially held the majority and the minority states, respectively. The characterization of the expected convergence time yields an exact relation between the expected convergence time and the voting margin for some of these graphs, which reveals how the expected convergence time goes to infinity as the voting margin approaches zero. Our results provide insights into how the expected convergence time depends on the network topology, which can be used for performance evaluation and network design. The results are of interest in the context of networked systems, in particular, peer-to-peer networks, sensor networks and distributed databases.
|
Finally, we would like to mention that, in the bigger picture, our work relates to the cascading phenomena that arise in the context of social networks @cite_13 ; for example, in viral marketing, where an initial idea or behaviour held by a portion of the population spreads through the network, yielding wide adoption across the whole population @cite_9 .
|
{
"cite_N": [
"@cite_9",
"@cite_13"
],
"mid": [
"1845082465",
"1897619428"
],
"abstract": [
"Viral marketing takes advantage of preexisting social networks among customers to achieve large changes in behaviour. Models of influence spread have been studied in a number of domains, including the effect of \"word of mouth\" in the promotion of new products or the diffusion of technologies. A social network can be represented by a graph where the nodes are individuals and the edges indicate a form of social relationship. The flow of influence through this network can be thought of as an increasing process of active nodes: as individuals become aware of new technologies, they have the potential to pass them on to their neighbours. The goal of marketing is to trigger a large cascade of adoptions. In this paper, we develop a mathematical model that allows to analyze the dynamics of the cascading sequence of nodes switching to the new technology. To this end we describe a continuous-time and a discrete-time models and analyse the proportion of nodes that adopt the new technology over time.",
"The flow of information or influence through a large social network can be thought of as unfolding with the dynamics of an epidemic: as individuals become aware of new ideas, technologies, fads, rumors, or gossip, they have the potential to pass them on to their friends and colleagues, causing the resulting behavior to cascade through the network. We consider a collection of probabilistic and game-theoretic models for such phenomena proposed in the mathematical social sciences, as well as recent algorithmic work on the problem by computer scientists. Building on this, we discuss the implications of cascading behavior in a number of on-line settings, including word-of-mouth effects (also known as “viral marketing”) in the success of new products, and the influence of social networks in the growth of on-line"
]
}
|
1202.0855
|
1608463234
|
A significant challenge to make learning techniques more suitable for general purpose use is to move beyond i) complete supervision, ii) low dimensional data, iii) a single task and single view per instance. Solving these challenges allows working with "Big Data" problems that are typically high dimensional with multiple (but possibly incomplete) labelings and views. While other work has addressed each of these problems separately, in this paper we show how to address them together, namely semi-supervised dimension reduction for multi-task and multi-view learning (SSDR-MML), which performs optimization for dimension reduction and label inference in semi-supervised setting. The proposed framework is designed to handle both multi-task and multi-view learning settings, and can be easily adapted to many useful applications. Information obtained from all tasks and views is combined via reconstruction errors in a linear fashion that can be efficiently solved using an alternating optimization scheme. Our formulation has a number of advantages. We explicitly model the information combining mechanism as a data structure (a weight nearest-neighbor matrix) which allows investigating fundamental questions in multi-task and multi-view learning. We address one such question by presenting a general measure to quantify the success of simultaneous learning of multiple tasks or from multiple views. We show that our SSDR-MML approach can outperform many state-of-the-art baseline methods and demonstrate the effectiveness of connecting dimension reduction and learning.
|
Multi-task learning is motivated by the fact that a real world object naturally involves multiple related attributes, and thereby investigating them together could improve the total learning performance. MTL learns a problem together with other related problems at the same time, which allows the learner to use the commonality among the tasks. The hope is that by learning multiple tasks simultaneously one can improve performance over learning each task in isolation. MTL has been studied from many different perspectives, such as neural networks among similar tasks @cite_1 , kernel methods and regularization networks @cite_14 , modeling task relatedness @cite_16 , and probabilistic models in Gaussian process @cite_34 @cite_28 and Dirichlet process @cite_25 . Although MTL techniques have been successfully applied to many real world applications, their usefulness is significantly weakened by the underlying relatedness assumption, while in practice some tasks are indeed unrelated and could induce destructive information to the learner. In this work, we propose a measure to quantify the success of learning, so as to benefit from related tasks and reject the combining of unrelated (detrimental) tasks.
|
{
"cite_N": [
"@cite_14",
"@cite_28",
"@cite_1",
"@cite_16",
"@cite_34",
"@cite_25"
],
"mid": [
"2143104527",
"2119595900",
"1614862348",
"2036043322",
"2148522164",
"2131479143"
],
"abstract": [
"Past empirical work has shown that learning multiple related tasks from data simultaneously can be advantageous in terms of predictive performance relative to learning these tasks independently. In this paper we present an approach to multi--task learning based on the minimization of regularization functionals similar to existing ones, such as the one for Support Vector Machines (SVMs), that have been successfully used in the past for single--task learning. Our approach allows to model the relation between tasks in terms of a novel kernel function that uses a task--coupling parameter. We implement an instance of the proposed approach similar to SVMs and test it empirically using simulated as well as real data. The experimental results show that the proposed method performs better than existing multi--task learning methods and largely outperforms single--task learning using SVMs.",
"In this paper we investigate multi-task learning in the context of Gaussian Processes (GP). We propose a model that learns a shared covariance function on input-dependent features and a \"free-form\" covariance matrix over tasks. This allows for good flexibility when modelling inter-task dependencies while avoiding the need for large amounts of data for training. We show that under the assumption of noise-free observations and a block design, predictions for a given task only depend on its target values and therefore a cancellation of inter-task transfer occurs. We evaluate the benefits of our model on two practical applications: a compiler performance prediction problem and an exam score prediction task. Additionally, we make use of GP approximations and properties of our model in order to provide scalability to large data sets.",
"This paper suggests that it may be easier to learn several hard tasks at one time than to learn these same tasks separately. In effect, the information provided by the training signal for each task serves as a domain-specific inductive bias for the other tasks. Frequently the world gives us clusters of related tasks to learn. When it does not, it is often straightforward to create additional tasks. For many domains, acquiring inductive bias by collecting additional teaching signal may be more practical than the traditional approach of codifying domain-specific biases acquired from human expertise. We call this approach Multitask Learning (MTL). Since much of the power of an inductive learner follows directly from its inductive bias, multitask learning may yield more powerful learning. An empirical example of multitask connectionist learning is presented where learning improves by training one network on several related tasks at the same time. Multitask decision tree induction is also outlined.",
"The approach of learning of multiple “related” tasks simultaneously has proven quite successful in practice; however, theoretical justification for this success has remained elusive. The starting point for previous work on multiple task learning has been that the tasks to be learned jointly are somehow “algorithmically related”, in the sense that the results of applying a specific learning algorithm to these tasks are assumed to be similar. We offer an alternative approach, defining relatedness of tasks on the basis of similarity between the example generating distributions that underline these task.",
"We consider the problem of multi-task learning, that is, learning multiple related functions. Our approach is based on a hierarchical Bayesian framework, that exploits the equivalence between parametric linear models and nonparametric Gaussian processes (GPs). The resulting models can be learned easily via an EM-algorithm. Empirical studies on multi-label text categorization suggest that the presented models allow accurate solutions of these multi-task problems.",
"Consider the problem of learning logistic-regression models for multiple classification tasks, where the training data set for each task is not drawn from the same statistical distribution. In such a multi-task learning (MTL) scenario, it is necessary to identify groups of similar tasks that should be learned jointly. Relying on a Dirichlet process (DP) based statistical model to learn the extent of similarity between classification tasks, we develop computationally efficient algorithms for two different forms of the MTL problem. First, we consider a symmetric multi-task learning (SMTL) situation in which classifiers for multiple tasks are learned jointly using a variational Bayesian (VB) algorithm. Second, we consider an asymmetric multi-task learning (AMTL) formulation in which the posterior density function from the SMTL model parameters (from previous tasks) is used as a prior for a new task: this approach has the significant advantage of not requiring storage and use of all previous data from prior tasks. The AMTL formulation is solved with a simple Markov Chain Monte Carlo (MCMC) construction. Experimental results on two real life MTL problems indicate that the proposed algorithms: (a) automatically identify subgroups of related tasks whose training data appear to be drawn from similar distributions; and (b) are more accurate than simpler approaches such as single-task learning, pooling of data across all tasks, and simplified approximations to DP."
]
}
|
1202.0855
|
1608463234
|
A significant challenge to make learning techniques more suitable for general purpose use is to move beyond i) complete supervision, ii) low dimensional data, iii) a single task and single view per instance. Solving these challenges allows working with "Big Data" problems that are typically high dimensional with multiple (but possibly incomplete) labelings and views. While other work has addressed each of these problems separately, in this paper we show how to address them together, namely semi-supervised dimension reduction for multi-task and multi-view learning (SSDR-MML), which performs optimization for dimension reduction and label inference in semi-supervised setting. The proposed framework is designed to handle both multi-task and multi-view learning settings, and can be easily adapted to many useful applications. Information obtained from all tasks and views is combined via reconstruction errors in a linear fashion that can be efficiently solved using an alternating optimization scheme. Our formulation has a number of advantages. We explicitly model the information combining mechanism as a data structure (a weight nearest-neighbor matrix) which allows investigating fundamental questions in multi-task and multi-view learning. We address one such question by presenting a general measure to quantify the success of simultaneous learning of multiple tasks or from multiple views. We show that our SSDR-MML approach can outperform many state-of-the-art baseline methods and demonstrate the effectiveness of connecting dimension reduction and learning.
|
Various dimension reduction methods have been proposed to simplify learning problems, which generally fall into three categories: unsupervised, supervised, and semi-supervised. In contrast to traditional classification tasks where classes are mutually exclusive, the classes in multi-task label learning are actually overlapping and correlated. Thus, two specialized multi-task dimension reduction algorithms have been proposed in @cite_17 and @cite_6 , both of which try to capture the correlations between multiple tasks. However, the usefulness of such methods is dramatically limited by requiring complete label knowledge, which is very expensive to obtain and even impossible for extremely large datasets, e.g., web image annotation. In order to utilize unlabeled data, many semi-supervised multi-label learning algorithms have been proposed @cite_7 @cite_3 , which solve the learning problem by optimizing an objective function over a graph or hypergraph. However, the performance of such approaches is weakened by the lack of a connection between dimension reduction and the learning algorithm. To the best of our knowledge, @cite_10 is the first attempt to connect dimension reduction and multi-task label learning, but it suffers from the inability to utilize unlabeled data.
|
{
"cite_N": [
"@cite_7",
"@cite_3",
"@cite_6",
"@cite_10",
"@cite_17"
],
"mid": [
"1592096085",
"2094651715",
"2146012283",
"183078661",
"2124982040"
],
"abstract": [
"Multi-label learning refers to the problems where an instance can be assigned to more than one category. In this paper, we present a novel Semi-supervised algorithm for Multi-label learning by solving a Sylvester Equation (SMSE). Two graphs are first constructed on instance level and category level respectively. For instance level, a graph is defined based on both labeled and unlabeled instances, where each node represents one instance and each edge weight reflects the similarity between corresponding pairwise instances. Similarly, for category level, a graph is also built based on all the categories, where each node represents one category and each edge weight reflects the similarity between corresponding pairwise categories. A regularization framework combining two regularization terms for the two graphs is suggested. The regularization term for instance graph measures the smoothness of the labels of instances, and the regularization term for category graph measures the smoothness of the labels of categories. We show that the labels of unlabeled data finally can be obtained by solving a Sylvester Equation. Experiments on RCV1 data set show that SMSE can make full use of the unlabeled data information as well as the correlations among categories and achieve good performance. In addition, we give a SMSE’s extended application on collaborative filtering.",
"A hypergraph is a generalization of the traditional graph in which the edges are arbitrary non-empty subsets of the vertex set. It has been applied successfully to capture high-order relations in various domains. In this paper, we propose a hypergraph spectral learning formulation for multi-label classification, where a hypergraph is constructed to exploit the correlation information among different labels. We show that the proposed formulation leads to an eigenvalue problem, which may be computationally expensive especially for large-scale problems. To reduce the computational cost, we propose an approximate formulation, which is shown to be equivalent to a least squares problem under a mild condition. Based on the approximate formulation, efficient algorithms for solving least squares problems can be applied to scale the formulation to very large data sets. In addition, existing regularization techniques for least squares can be incorporated into the model for improved generalization performance. We have conducted experiments using large-scale benchmark data sets, and experimental results show that the proposed hypergraph spectral learning formulation is effective in capturing the high-order relations in multi-label problems. Results also indicate that the approximate formulation is much more efficient than the original one, while keeping competitive classification performance.",
"Latent semantic indexing (LSI) is a well-known unsupervised approach for dimensionality reduction in information retrieval. However if the output information (i.e. category labels) is available, it is often beneficial to derive the indexing not only based on the inputs but also on the target values in the training data set. This is of particular importance in applications with multiple labels, in which each document can belong to several categories simultaneously. In this paper we introduce the multi-label informed latent semantic indexing (MLSI) algorithm which preserves the information of inputs and meanwhile captures the correlations between the multiple outputs. The recovered \"latent semantics\" thus incorporate the human-annotated category information and can be used to greatly improve the prediction accuracy. Empirical study based on two data sets, Reuters-21578 and RCV1, demonstrates very encouraging results.",
"Dimensionality reduction is an essential step in high-dimensional data analysis. Many dimensionality reduction algorithms have been applied successfully to multi-class and multi-label problems. They are commonly applied as a separate data preprocessing step before classification algorithms. In this paper, we study a joint learning framework in which we perform dimensionality reduction and multi-label classification simultaneously. We show that when the least squares loss is used in classification, this joint learning decouples into two separate components, i.e., dimensionality reduction followed by multi-label classification. This analysis partially justifies the current practice of a separate application of dimensionality reduction for classification problems. We extend our analysis using other loss functions, including the hinge loss and the squared hinge loss. We further extend the formulation to the more general case where the input data for different class labels may differ, overcoming the limitation of traditional dimensionality reduction algorithms. Experiments on benchmark data sets have been conducted to evaluate the proposed joint formulations.",
"Multi-label learning deals with data associated with multiple labels simultaneously. Like other machine learning and data mining tasks, multi-label learning also suffers from the curse of dimensionality. Although dimensionality reduction has been studied for many years, multi-label dimensionality reduction remains almost untouched. In this paper, we propose a multi-label dimensionality reduction method, MDDM, which attempts to project the original data into a lower-dimensional feature space maximizing the dependence between the original feature description and the associated class labels. Based on the Hilbert-Schmidt Independence Criterion, we derive a closed-form solution which enables the dimensionality reduction process to be efficient. Experiments validate the performance of MDDM."
]
}
|
1202.0855
|
1608463234
|
A significant challenge to make learning techniques more suitable for general purpose use is to move beyond i) complete supervision, ii) low dimensional data, iii) a single task and single view per instance. Solving these challenges allows working with "Big Data" problems that are typically high dimensional with multiple (but possibly incomplete) labelings and views. While other work has addressed each of these problems separately, in this paper we show how to address them together, namely semi-supervised dimension reduction for multi-task and multi-view learning (SSDR-MML), which performs optimization for dimension reduction and label inference in a semi-supervised setting. The proposed framework is designed to handle both multi-task and multi-view learning settings, and can be easily adapted to many useful applications. Information obtained from all tasks and views is combined via reconstruction errors in a linear fashion that can be efficiently solved using an alternating optimization scheme. Our formulation has a number of advantages. We explicitly model the information combining mechanism as a data structure (a weighted nearest-neighbor matrix) which allows investigating fundamental questions in multi-task and multi-view learning. We address one such question by presenting a general measure to quantify the success of simultaneous learning of multiple tasks or from multiple views. We show that our SSDR-MML approach can outperform many state-of-the-art baseline methods and demonstrate the effectiveness of connecting dimension reduction and learning.
|
The study of semi-supervised learning is motivated by the fact that while labeled data are often scarce and expensive to obtain, unlabeled data are usually abundant and easy to collect. It mainly aims to address the problem where the labeled data are too few to build a good classifier, by exploiting the large amount of unlabeled data. Among various semi-supervised learning approaches, graph transduction has attracted an increasing amount of interest: @cite_30 introduces an approach based on a random field model defined on a weighted graph over both the unlabeled and labeled data; @cite_13 proposes a classifying function that is sufficiently smooth with respect to the intrinsic structure collectively revealed by the known labeled and unlabeled points; @cite_29 extends this formulation by introducing spectral kernel learning into semi-supervised learning, allowing the graph to adapt during the label diffusion process. Another interesting direction for semi-supervised learning is proposed in @cite_4 , where learning with unlabeled data is performed in the context of Gaussian processes. The encouraging results of many proposed algorithms demonstrate the effectiveness of using unlabeled data. A comprehensive survey of semi-supervised learning can be found in @cite_21 .
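The graph-transduction idea above can be made concrete with a minimal sketch of harmonic-function label propagation (the Gaussian random field approach): labels of unlabeled nodes are obtained by solving a linear system in the graph Laplacian. The graph, weights, and label assignments below are illustrative, not taken from any of the cited papers.

```python
import numpy as np

# Symmetric edge-weight matrix W over 5 nodes: nodes 0-1 and 3-4 form
# two tight clusters bridged weakly through node 2.
W = np.array([
    [0.0, 1.0, 0.1, 0.0, 0.0],
    [1.0, 0.0, 0.1, 0.0, 0.0],
    [0.1, 0.1, 0.0, 0.1, 0.1],
    [0.0, 0.0, 0.1, 0.0, 1.0],
    [0.0, 0.0, 0.1, 1.0, 0.0],
])

D = np.diag(W.sum(axis=1))          # degree matrix
L = D - W                           # unnormalized graph Laplacian

labeled = [0, 4]                    # indices of labeled nodes
unlabeled = [1, 2, 3]
f_l = np.array([0.0, 1.0])          # known labels: node 0 -> class 0, node 4 -> class 1

# Harmonic solution: f_u = -L_uu^{-1} L_ul f_l
L_uu = L[np.ix_(unlabeled, unlabeled)]
L_ul = L[np.ix_(unlabeled, labeled)]
f_u = np.linalg.solve(L_uu, -L_ul @ f_l)

# Node 1 ends up near class 0, node 3 near class 1, and the bridge
# node 2 near the middle.
print(dict(zip(unlabeled, f_u.round(3))))
```

Thresholding `f_u` at 0.5 yields the predicted classes; the smoothness property described above is exactly what forces each unlabeled node's score toward the weighted average of its neighbors.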
|
{
"cite_N": [
"@cite_30",
"@cite_4",
"@cite_29",
"@cite_21",
"@cite_13"
],
"mid": [
"2139823104",
"2104156750",
"1495400410",
"2136504847",
"2154455818"
],
"abstract": [
"An approach to semi-supervised learning is proposed that is based on a Gaussian random field model. Labeled and unlabeled data are represented as vertices in a weighted graph, with edge weights encoding the similarity between instances. The learning problem is then formulated in terms of a Gaussian random field on this graph, where the mean of the field is characterized in terms of harmonic functions, and is efficiently obtained using matrix methods or belief propagation. The resulting learning algorithms have intimate connections with random walks, electric networks, and spectral graph theory. We discuss methods to incorporate class priors and the predictions of classifiers obtained by supervised learning. We also propose a method of parameter learning by entropy minimization, and show the algorithm's ability to perform feature selection. Promising experimental results are presented for synthetic data, digit classification, and text classification tasks.",
"We present a probabilistic approach to learning a Gaussian Process classifier in the presence of unlabeled data. Our approach involves a \"null category noise model\" (NCNM) inspired by ordered categorical noise models. The noise model reflects an assumption that the data density is lower between the class-conditional densities. We illustrate our approach on a toy problem and present comparative results for the semi-supervised classification of handwritten digits.",
"Typical graph-theoretic approaches for semi-supervised classification infer labels of unlabeled instances with the help of graph Laplacians. Founded on the spectral decomposition of the graph Laplacian, this paper learns a kernel matrix via minimizing the leave-one-out classification error on the labeled instances. To this end, an efficient algorithm is presented based on linear programming, resulting in a transductive spectral kernel. The idea of our algorithm stems from regularization methodology and also has a nice interpretation in terms of spectral clustering. A simple classifier can be readily built upon the learned kernel, which suffices to give prediction for any data point aside from those in the available dataset. Besides this usage, the spectral kernel can be effectively used in tandem with conventional kernel machines such as SVMs. We demonstrate the efficacy of the proposed algorithm through experiments carried out on challenging classification tasks.",
"Door lock apparatus in which a door latch mechanism is operated by inner and outer door handles coupled to a latch shaft extending through the latch mechanism. Handles are coupled to ends of latch shaft by coupling devices enabling door to be locked from the inside to prevent entry from the outside but can still be opened from the inside by normal operation of outside handle. Inside coupling device has limited lost-motion which is used to operate cam device to unlock the door on actuation of inner handles.",
"We consider the general problem of learning from labeled and unlabeled data, which is often called semi-supervised learning or transductive inference. A principled approach to semi-supervised learning is to design a classifying function which is sufficiently smooth with respect to the intrinsic structure collectively revealed by known labeled and unlabeled points. We present a simple algorithm to obtain such a smooth solution. Our method yields encouraging experimental results on a number of classification problems and demonstrates effective use of unlabeled data."
]
}
|
1202.1567
|
2117613335
|
To save time and money, businesses and individuals have begun outsourcing their data and computations to cloud computing services. These entities would, however, like to ensure that the queries they request from the cloud services are being computed correctly. In this paper, we use the principles of economics and competition to vastly reduce the complexity of query verification on outsourced data. We consider two cases: First, we consider the scenario where multiple non-colluding data outsourcing services exist, and then we consider the case where only a single outsourcing service exists. Using a game theoretic model, we show that given the proper incentive structure, we can effectively deter dishonest behavior on the part of the data outsourcing services with very few computational and monetary resources. We prove that the incentive for an outsourcing service to cheat can be reduced to zero. Finally, we show that a simple verification method can achieve this reduction through extensive experimental evaluation.
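The economic deterrence argument above reduces to a simple expected-utility inequality. The toy model below is an illustrative sketch, not the paper's formal game-theoretic model: a provider that cheats gains `g` per query but, if audited with probability `p`, pays a fine `F`.

```python
# Toy incentive calculation for probabilistic query verification
# (illustrative parameters; g, p, F are hypothetical names).

def cheating_incentive(g: float, p: float, F: float) -> float:
    """Expected payoff of cheating relative to honest behavior (normalized to 0)."""
    return g - p * F

# Cheating stops paying off once the audit probability reaches g / F:
# with a fine 50x the cheating gain, auditing only 2% of queries suffices.
g, F = 1.0, 50.0
p_min = g / F
assert cheating_incentive(g, p_min, F) <= 1e-12
print(p_min)
```

This is the sense in which very few verification resources can drive the incentive to cheat to zero: the deterrent scales with `p * F`, so a large fine substitutes for frequent auditing.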
|
Several scholarly works have outlined query verification methods. The vast majority of these works focus on specific types of queries. Some focus only on selection @cite_15 @cite_18 @cite_11 @cite_8 @cite_13 , while others focus on relational queries such as selection, projection, and joins @cite_12 @cite_2 . Still others focus only on aggregation queries like sum, count, and average @cite_3 @cite_7 @cite_4 . Some of these processes @cite_19 @cite_4 require different verification schemes for each type of query, or even for each individual query, requiring that the subscriber know in advance which queries will be asked.
|
{
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_7",
"@cite_8",
"@cite_3",
"@cite_19",
"@cite_2",
"@cite_15",
"@cite_13",
"@cite_12",
"@cite_11"
],
"mid": [
"",
"",
"2942812477",
"1582090474",
"",
"1566967335",
"2052388228",
"2031916245",
"2109549608",
"",
"2075280979"
],
"abstract": [
"",
"",
"We are interested in the integrity of the query results from an outsourced database service provider. Alice passes a set D of d-dimensional points, together with some authentication tag T, to an untrusted service provider Bob. Later, Alice issues some query over D to Bob, and Bob should produce the query result and a proof based on D and T. Alice wants to verify the integrity of the query result with the help of the proof, using only the private key. In this paper, we consider aggregate query conditional on multidimensional range selection. In its basic form, a query asks for the total number of data points within a d-dimensional range. We are concerned about the number of communication bits required and the size of the tag T. We give a scheme that requires O(d log N) communication bits to authenticate an aggregate count query conditional on d-dimensional range selection, where N is the number of points in the dataset. The security of our scheme relies on Generalized Knowledge of Exponent Assumption proposed by Wu and Stinson [1]. The low communication bandwidth is achieved due to a new functional encryption scheme, which exploits a special property of BBG HIBE scheme [2]. Besides counting, our scheme can be extended to support summing, finding of the minimum and usual (nonaggregate) range selection with similar complexity, and the proposed approach potentially can be applied to other queries by using suitable functional encryption schemes.",
"An increasing number of enterprises outsource their IT services to third parties who can offer these services for a much lower cost due to economy of scale. Quality of service is a major concern in outsourcing. In particular, query integrity, which means that query results returned by the service provider are both correct and complete, must be assured. Previous work requires clients to manage data locally to audit the results sent back by the server, or database engine to be modified for generating authenticated results. In this paper, we introduce a novel integrity audit mechanism that eliminating these costly requirements. In our approach, we insert a small amount of records into an outsourced database so that the integrity of the system can be effectively audited by analyzing the inserted records in the query results. We study both randomized and deterministic approaches for generating the inserted records, as how these records are generated has significant implications for storage and performance. Furthermore, we show that our method is provable secure, which means it can withstand any attacks by an adversary whose computation power is bounded. Our analytical and empirical results demonstrate the effectiveness of our method.",
"",
"In this paper we propose and analyze a method for proofs of actual query execution in an outsourced database framework, in which a client outsources its data management needs to a specialized provider. The solution is not limited to simple selection predicate queries but handles arbitrary query types. While this work focuses mainly on read-only, compute-intensive (e.g. data-mining) queries, it also provides preliminary mechanisms for handling data updates (at additional costs). We introduce query execution proofs; for each executed batch of queries the database service provider is required to provide a strong cryptographic proof that provides assurance that the queries were actually executed correctly over their entire target data set. We implement a proof of concept and present experimental results in a real-world data mining application, proving the deployment feasibility of our solution. We analyze the solution and show that its overheads are reasonable and are far outweighed by the added security benefits. For example an assurance level of over 95 can be achieved with less than 25 execution time overhead.",
"In data publishing, the owner delegates the role of satisfying user queries to a third-party publisher. As the publisher may be untrusted or susceptible to attacks, it could produce incorrect query results. In this paper, we introduce a scheme for users to verify that their query results are complete (i.e., no qualifying tuples are omitted) and authentic (i.e., all the result values originated from the owner). The scheme supports range selection on key and non-key attributes, project as well as join queries on relational databases. Moreover, the proposed scheme complies with access control policies, is computationally secure, and can be implemented efficiently.",
"In the third-party model for the distribution of data, the trusted data creator or owner provides an untrusted party V with data and integrity verification (IV) items for that data. When a user U gets a subset of the data at D or is already in possession of that subset, U may request from D the IV items that make it possible for U to verify the integrity of its data: D must then provide U with the (hopefully small) number of needed IVs. Most of the published work in this area uses the Merkle tree or variants thereof. For the problem of 2-dimensional range data, the best published solutions require V to store O(n log n) IV items for a database of n items, and allow a user IA to be sent only O(log n) of those IVs for the purpose of verifying the integrity of the data it receives from D (regardless of the size of lA's query rectangle). For data that is modeled as a 2-dimensional grid (such as GIS or image data), this paper shows that better bounds are possible: The number of IVs stored at D (and the time it takes to compute them) can be brought down to O(n), and the number of IVs sent to IA for verification can be brought down to a constant.",
"Database outsourcing requires that a query server constructs a proof of result correctness, which can be verified by the client using the data owner's signature. Previous authentication techniques deal with range queries on a single relation using an authenticated data structure (ADS). On the other hand, authenticated join processing is inherently more complex than ranges since only the base relations (but not their combination) are signed by the owner. In this paper, we present three novel join algorithms depending on the ADS availability: (i) Authenticated Indexed Sort Merge Join (AISM), which utilizes a single ADS on the join attribute, (ii) Authenticated Index Merge Join (AIM) that requires an ADS (on the join attribute) for both relations, and (iii) Authenticated Sort Merge Join (ASM), which does not rely on any ADS. We experimentally demonstrate that the proposed methods outperform two benchmark algorithms, often by several orders of magnitude, on all performance metrics, and effectively shift the workload to the outsourcing service. Finally, we extend our techniques to complex queries that combine multi-way joins with selections and projections.",
"",
"In the Outsourced Database (ODB) model, entities outsource their data management needs to a third-party service provider. Such a service provider offers mechanisms for its clients to create, store, update, and access (query) their databases. This work provides mechanisms to ensure data integrity and authenticity for outsourced databases. Specifically, this article provides mechanisms that assure the querier that the query results have not been tampered with and are authentic (with respect to the actual data owner). It investigates both the security and efficiency aspects of the problem and constructs several secure and practical schemes that facilitate the integrity and authenticity of query replies while incurring low computational and communication costs."
]
}
|