Columns (arXiv metadata records):
id: string (length 9 to 16)
title: string (length 4 to 278)
categories: string (length 5 to 104)
abstract: string (length 6 to 4.09k)
1309.1829
On the $k$-error linear complexity for $2^n$-periodic binary sequences via Cube Theory
cs.CR cs.IT math.IT
The linear complexity and $k$-error linear complexity of a sequence have been used as important measures of keystream strength; hence, designing a sequence with high linear complexity and $k$-error linear complexity is a popular research topic in cryptography. In this paper, the concept of stable $k$-error linear complexity is proposed to study sequences with stable and large $k$-error linear complexity. In order to study the $k$-error linear complexity of binary sequences with period $2^n$, a new tool called cube theory is developed. By using cube theory, one can easily construct sequences with the maximum stable $k$-error linear complexity. For this purpose, we first prove that a binary sequence with period $2^n$ can be decomposed into some disjoint cubes, and we further give a general decomposition approach. Second, it is proved that the maximum $k$-error linear complexity is $2^n-(2^l-1)$ over all $2^n$-periodic binary sequences, where $2^{l-1}\le k<2^{l}$. Third, a characterization is presented of the $t$th ($t>1$) decrease in the $k$-error linear complexity of a $2^n$-periodic binary sequence $s$; this continues Kurosawa et al.'s recent work on the first decrease of the $k$-error linear complexity. Finally, a counting formula for $m$-cubes with the same linear complexity is derived, which is equivalent to the counting formula for $k$-error vectors. The counting formula for $2^n$-periodic binary sequences that can be decomposed into more than one cube is also investigated, which extends an important result by Etzion et al.
1309.1830
Radar shadow detection in SAR images using DEM and projections
cs.CV
Synthetic aperture radar (SAR) images are widely used in target recognition tasks nowadays. In this letter, we propose an automatic approach for radar shadow detection and extraction from SAR images, utilizing geometric projections along with the digital elevation model (DEM) corresponding to the given geo-referenced SAR image. First, the DEM is rotated into the radar geometry so that each row matches a radar line of sight. Next, we extract the shadow regions by processing row by row until the image is fully covered. We test the proposed shadow detection approach on different DEMs, as well as on simulated 1D signals and 2D hills and valleys modeled by Gaussian functions of various variances. Experimental results indicate that the proposed algorithm produces good results in detecting shadows in high-resolution SAR images.
1309.1853
A General Two-Step Approach to Learning-Based Hashing
cs.LG cs.CV
Most existing approaches to hashing apply a single form of hash function, and an optimization process which is typically deeply coupled to this specific form. This tight coupling restricts the flexibility of the method to respond to the data, and can result in complex optimization problems that are difficult to solve. Here we propose a flexible yet simple framework that is able to accommodate different types of loss functions and hash functions. This framework allows a number of existing approaches to hashing to be placed in context, and simplifies the development of new problem-specific hashing methods. Our framework decomposes the hashing learning problem into two steps: hash bit learning and hash function learning based on the learned bits. The first step can typically be formulated as a binary quadratic problem, and the second step can be accomplished by training standard binary classifiers. Both problems have been extensively studied in the literature. Our extensive experiments demonstrate that the proposed framework is effective, flexible and outperforms the state-of-the-art.
1309.1862
Observability transitions in correlated networks
physics.soc-ph cond-mat.stat-mech cs.SI
Yang, Wang, and Motter [Phys. Rev. Lett. 109, 258701 (2012)] analyzed a model for network observability transitions in which a sensor placed on a node makes the node and the adjacent nodes observable. The size of the connected components comprising the observable nodes is a major concern of the model. We analyze this model in random heterogeneous networks with degree correlation. With numerical simulations and analytical arguments based on generating functions, we find that negative degree correlation makes networks more observable. This result holds true both when the sensors are placed on nodes one by one in a random order and when hubs preferentially receive the sensors. Finally, we numerically optimize networks with a fixed degree sequence with respect to the size of the largest observable component. Optimized networks have negative degree correlation induced by the resulting hub-repulsive structure; the largest hubs are rarely connected to each other, in contrast to the rich-club phenomenon of networks.
1309.1864
Timing estimation in distributed sensor and control systems with central processing
cs.SY stat.AP
We consider the problem of estimating timing of measurements and actuation in distributed sensor and control systems with central processing. The focus is on direct timing estimation for scenarios where clock synchronization is not feasible or desirable. Models of the timing and of central and peripheral time stamps are motivated and derived from underlying clock and communication delay definitions and models. Heuristics for constructing a system time are presented, and it is outlined how the joint timing and plant state estimation can be handled. For a simple set of underlying clock and communication delay models, inclusion of peripheral unit time stamps is shown to reduce jitter, and it is argued that in general it will give significant jitter reduction. Finally, a numerical example is given of a contemporary system design.
1309.1884
Tractable vs. Intractable Cases of Matching Dependencies for Query Answering under Entity Resolution
cs.DB cs.CC cs.LO
Matching Dependencies (MDs) are a relatively recent proposal for declarative entity resolution. They are rules that specify, on the basis of similarities satisfied by values in a database, what values should be considered duplicates, and have to be matched. On the basis of a chase-like procedure for MD enforcement, we can obtain clean (duplicate-free) instances; in fact, possibly several of them. The resolved answers to queries are those that are invariant under the resulting class of resolved instances. Previous work identified certain classes of queries and sets of MDs for which resolved query answering is tractable. Special emphasis was placed on cyclic sets of MDs. In this work we further investigate the complexity of this problem, identifying intractable cases, and exploring the frontier between tractability and intractability. We concentrate mostly on acyclic sets of MDs. For a special case we obtain a dichotomy result relative to NP-hardness.
1309.1890
Evolution of the Chilean Web: A Larger Study
cs.SI physics.soc-ph
In this paper we extend our previous, and only, study on the dynamics of the Chilean Web. This new study doubles the time period covered and, to the best of our knowledge, is the only study of its kind for any country's Web. The new results corroborate the trends found before, in particular the exponential growth of the Web, and reinforce the conclusion that the Web is more chaotic than we would like. Hence, modeling most Web characteristics is not trivial.
1309.1913
Dynamic Team Theory of Stochastic Differential Decision Systems with Decentralized Noisy Information Structures via Girsanov's Measure Transformation
math.OC cs.SY math.ST stat.TH
In this paper, we present two methods which generalize static team theory to dynamic team theory, in the context of continuous-time stochastic nonlinear differential decentralized decision systems, with relaxed strategies, which are measurable to different noisy information structures. For both methods we apply Girsanov's measure transformation to obtain an equivalent dynamic team problem under a reference probability measure, so that the observations and information structures available for decisions are not affected by any of the team decisions. The first method is based on function space integration with respect to products of Wiener measures, and generalizes Witsenhausen's [1] definition of equivalence between discrete-time static and dynamic team problems. The second method is based on stochastic Pontryagin's maximum principle. The team optimality conditions are given by a "Hamiltonian System" consisting of forward and backward stochastic differential equations, and a conditional variational Hamiltonian with respect to the information structure of each team member, expressed under the initial and a reference probability space via Girsanov's measure transformation. Under global convexity conditions, we show that PbP optimality implies team optimality. In addition, we also show existence of team and PbP optimal relaxed decentralized strategies (conditional distributions), in the weak$^*$ sense, without imposing convexity on the action spaces of the team members. Moreover, using the embedding of regular strategies into relaxed strategies, we also obtain team and PbP optimality conditions for regular team strategies, which are measurable functions of decentralized information structures, and we use the Krein-Milman theorem to show realizability of relaxed strategies by regular strategies.
1309.1928
Rollover Preventive Force Synthesis at Active Suspensions in a Vehicle Performing a Severe Maneuver with Wheels Lifted off
cs.SY
Among the intelligent safety technologies for road vehicles, active suspensions controlled by embedded computing elements for preventing rollover have received a lot of attention. The existing models for synthesizing and allocating forces in such suspensions are conservatively based on the constraint that no wheels lift off the ground. However, in practice, smart/active suspensions are most necessary precisely when the wheels have just lifted off the ground. The difficulty in computing control in this situation is that the problem requires satisfying disjunctive constraints on the dynamics. To the authors' knowledge, no efficient solution method is available for simulating dynamics with disjunctive constraints, and thus hardware-realizable and accurate force allocation in an active suspension tends to be difficult. In this work we give an algorithm for, and simulate numerical solutions of, the force allocation problem as an optimal control problem constrained by dynamics with disjunctive constraints. In particular we study the allocation and synthesis of time-dependent active suspension forces in terms of sensor output data in order to stabilize the roll motion of the road vehicle. An equivalent constraint in the form of a convex combination (hull) is proposed to satisfy the disjunctive constraints. The validated numerical simulations show that it is possible to allocate and synthesize control forces at the active suspensions from sensor output data such that the forces stabilize the roll moment of the vehicle with its wheels just lifted off the ground during arbitrary fish-hook maneuvers.
1309.1939
The placement of the head that minimizes online memory: a complex systems approach
cs.CL nlin.AO physics.data-an physics.soc-ph
It is well known that the length of a syntactic dependency determines its online memory cost. Thus, the problem of the placement of a head and its dependents (complements or modifiers) that minimizes online memory is equivalent to the problem of the minimum linear arrangement of a star tree. However, how that length is translated into cognitive cost is not known. This study shows that the online memory cost is minimized when the head is placed at the center, regardless of the function that transforms length into cost, provided only that this function is strictly monotonically increasing. Online memory defines a quasi-convex adaptive landscape with a single central minimum if the number of elements is odd and two central minima if that number is even. We discuss various aspects of the dynamics of word order of subject (S), verb (V) and object (O) from a complex systems perspective and suggest that word orders tend to evolve by swapping adjacent constituents from an initial or early SOV configuration that is attracted towards a central word order by online memory minimization. We also suggest that the stability of SVO is due to at least two factors, the quasi-convex shape of the adaptive landscape in the online memory dimension and online memory adaptations that avoid regression to SOV. Although OVS is also optimal for placing the verb at the center, its low frequency is explained by its long distance to the seminal SOV in the permutation space.
1309.1952
A Clustering Approach to Learn Sparsely-Used Overcomplete Dictionaries
stat.ML cs.LG math.OC
We consider the problem of learning overcomplete dictionaries in the context of sparse coding, where each sample selects a sparse subset of dictionary elements. Our main result is a strategy to approximately recover the unknown dictionary using an efficient algorithm. Our algorithm is a clustering-style procedure, where each cluster is used to estimate a dictionary element. The resulting solution can often be further cleaned up to obtain a high accuracy estimate, and we provide one simple scenario where $\ell_1$-regularized regression can be used for such a second stage.
1309.1973
Regret-Based Multi-Agent Coordination with Uncertain Task Rewards
cs.AI
Many multi-agent coordination problems can be represented as DCOPs. Motivated by task allocation in disaster response, we extend standard DCOP models to consider uncertain task rewards, where the outcome of completing a task depends on its current state, which is randomly drawn from unknown distributions. The goal of solving this problem is to find a solution for all agents that minimizes the overall worst-case loss. This is a challenging problem for centralized algorithms, because the search space grows exponentially with the number of agents, and it is nontrivial for existing standard DCOP algorithms. To address this, we propose a novel decentralized algorithm that incorporates Max-Sum with iterative constraint generation to solve the problem by passing messages among agents. By so doing, our approach scales well and can solve instances of the task allocation problem with hundreds of agents and tasks.
1309.1976
Source Broadcasting to the Masses: Separation has a Bounded Loss
cs.IT math.IT
This work discusses the source broadcasting problem, i.e. transmitting a source to many receivers via a broadcast channel. The optimal rate-distortion region for this problem is unknown. The separation approach divides the problem into two complementary problems: source successive refinement and broadcast channel transmission. We provide bounds on the loss incorporated by applying time-sharing and separation in source broadcasting. If the broadcast channel is degraded, it turns out that separation-based time-sharing achieves at least a factor of the joint source-channel optimal rate, and this factor has a positive limit even if the number of receivers increases to infinity. For the AWGN broadcast channel a better bound is introduced, implying that all achievable joint source-channel schemes have a rate within one bit of the separation-based achievable rate region for two receivers, or within $\log_2 T$ bits for $T$ receivers.
1309.2002
Plug-and-play distributed state estimation for linear systems
cs.SY math.OC
This paper proposes a state estimator for large-scale linear systems described by the interaction of state-coupled subsystems affected by bounded disturbances. We equip each subsystem with a Local State Estimator (LSE) for the reconstruction of the subsystem states using pieces of information from parent subsystems only. Moreover we provide conditions guaranteeing that the estimation errors are confined into prescribed polyhedral sets and converge to zero in the absence of disturbances. Quite remarkably, the design of an LSE is recast into an optimization problem that requires data from the corresponding subsystem and its parents only. This allows one to synthesize LSEs in a Plug-and-Play (PnP) fashion, i.e. when a subsystem gets added, the update of the whole estimator requires at most the design of an LSE for the subsystem and its parents. Theoretical results are backed up by numerical experiments on a mechanical system.
1309.2018
A Direct Power Controlled and Series Compensated EHV Transmission Line
cs.SY
This paper presents the design and analysis of a compensation method with application to a 345 kV 480 MVA three-phase transmission line. The compensator system includes a series injected voltage source converter that minimizes the resonance effects of capacitor line reactance. This creates an ability to compensate for the effects of subsynchronous resonance and thereby increase line loadability and control real and reactive power flows. The granularity of power flow control and simultaneous stabilization is achieved by the method of direct decoupled power control (DPC). The design process is detailed with respect to optimal response characteristics considering variations of line parameters, realistic transformer impedances, and maximum ramp response rates. Line effects are demonstrated in a PLECS model in MATLAB, and compensation control system functionality is verified. A case study is provided of a 345 kV transmission line from an EMTP simulation in PSCAD that accounts for distributed parameter effects that are encountered in physical EHV transmission lines. This demonstrates the improvement in stability to power system transients as well as damping of power system oscillations.
1309.2024
A Robust Continuous Time Fixed Lag Smoother for Nonlinear Uncertain Systems
cs.SY
This paper presents a robust fixed lag smoother for a class of nonlinear uncertain systems. A unified scheme, which combines a nonlinear robust estimator with a stable fixed lag smoother, is presented to improve the error covariance of the estimation. The robust fixed lag smoother is based on the use of Integral Quadratic Constraints and minimax LQG control. The state estimator uses a copy of the system nonlinearity in the estimator and combines an approximate model of the delayed states to produce a smoothed signal. In order to see the effectiveness of the method, it is applied to a quantum optical phase estimation problem. Results show significant improvement in the error covariance of the estimator using fixed lag smoother in the presence of nonlinear uncertainty.
1309.2031
Cooperative Wireless Sensor Network Positioning via Implicit Convex Feasibility
cs.IT math.IT math.OC
We propose a distributed positioning algorithm to estimate the unknown positions of a number of target nodes, given distance measurements between target nodes and between target nodes and a number of reference nodes at known positions. Based on a geometric interpretation, we formulate the positioning problem as an implicit convex feasibility problem in which some of the sets depend on the unknown target positions, and apply a parallel projection onto convex sets approach to estimate the unknown target node positions. The proposed technique is suitable for parallel implementation in which every target node in parallel can update its position and share the estimate of its location with other targets. We mathematically prove convergence of the proposed algorithm. Simulation results reveal enhanced performance for the proposed approach compared to available techniques based on projections, especially for sparse networks.
1309.2052
On the Strategic Allocation of Social Gratification
cs.SI physics.soc-ph
Members of social networks are given opportunities to bestow positive recognition upon one another by means of constructs such as "likes" and "retweets." Although recipients no doubt experience utility from these actions, one might question why these constructs with no intrinsic value for the sender are exchanged at all. Here we formulate a metric for the prestige of a member of a social network based on his or her place within the network and the rate at which "likes" are exchanged within his or her social circle. Simulation reveals that the 1% most strategically-optimized networks exchange likes at an average rate 23.5% higher than that of their random counterparts. This suggests that purely strategic agents, even with no concern for altruism or the general welfare, experience utility from giving social gratification. Further, we show that prestige-maximization creates a selective pressure for structural features associated with social networks including clustering and the small-world property.
1309.2057
Single image super resolution in spatial and wavelet domain
cs.CV
Single image super resolution, i.e., generating a high-resolution image from a given low-resolution image, has recently become a very important research area. Single image super resolution algorithms are mainly based on the wavelet domain or the spatial domain. The ability of filters to model the regularity of natural images is exploited in the wavelet domain, while image edges are sharpened during upsampling in the spatial domain. Here, a single image super resolution algorithm is presented that is based on both the spatial and wavelet domains and takes advantage of both. The algorithm is iterative and uses back projection to minimize the reconstruction error. A wavelet-based denoising method is also introduced to remove noise.
1309.2074
Learning Transformations for Clustering and Classification
cs.CV cs.LG stat.ML
A low-rank transformation learning framework for subspace clustering and classification is here proposed. Many high-dimensional data, such as face images and motion sequences, approximately lie in a union of low-dimensional subspaces. The corresponding subspace clustering problem has been extensively studied in the literature to partition such high-dimensional data into clusters corresponding to their underlying low-dimensional subspaces. However, low-dimensional intrinsic structures are often violated for real-world observations, as they can be corrupted by errors or deviate from ideal models. We propose to address this by learning a linear transformation on subspaces using matrix rank, via its convex surrogate nuclear norm, as the optimization criterion. The learned linear transformation restores a low-rank structure for data from the same subspace, and, at the same time, forces a maximally separated structure for data from different subspaces. In this way, we reduce variations within subspaces, and increase separation between subspaces for a more robust subspace clustering. This proposed learned robust subspace clustering framework significantly enhances the performance of existing subspace clustering methods. Basic theoretical results here presented help to further support the underlying framework. To exploit the low-rank structures of the transformed subspaces, we further introduce a fast subspace clustering technique, which efficiently combines robust PCA with sparse modeling. When class labels are present at the training stage, we show this low-rank transformation framework also significantly enhances classification performance. Extensive experiments using public datasets are presented, showing that the proposed approach significantly outperforms state-of-the-art methods for subspace clustering and classification.
1309.2077
An optimal fuzzy-PI force/motion controller to increase industrial robot autonomy
cs.RO
This paper presents a method for robot self-recognition and self-adaptation through the analysis of the contact between the robot end effector and its surrounding environment. Often, in off-line robot programming, the idealized robotic environment (the virtual one) does not reflect the real one accurately. In this situation, we are in the presence of a partially unknown environment (PUE). Thus, robotic systems must have some degree of autonomy to overcome this situation, especially when contact exists. The proposed force/motion control system has an external control loop based on forces and torques exerted on the robot end effector and an internal control loop based on robot motion. The external control loop is tested with an optimal proportional-integral (PI) controller and a fuzzy-PI controller. The system performance is validated with real-world experiments involving contact in PUEs.
1309.2078
Direct off-line robot programming via a common CAD package
cs.RO
This paper focuses on intuitive and direct off-line robot programming from a CAD drawing running on a common 3-D CAD package. It explores the most suitable way to represent robot motion in a CAD drawing, how to automatically extract such motion data from the drawing, make the mapping of data from the virtual (CAD model) to the real environment and the process of automatic generation of robot paths/programs. In summary, this study aims to present a novel CAD-based robot programming system accessible to anyone with basic knowledge of CAD and robotics. Experiments on different manipulation tasks show the effectiveness and versatility of the proposed approach.
1309.2079
CAD-based robot programming: The role of Fuzzy-PI force control in unstructured environments
cs.RO
New ways of interaction between humans and robots are increasingly desired: ways that allow us to program a robot intuitively, quickly, and with a high level of abstraction from the robot language. This paper presents a CAD-based system that allows users with basic skills in CAD, and without skills in robot programming, to generate robot programs from a CAD model of a robotic cell. When the CAD model reproduces the real scenario exactly, the system presents a satisfactory performance. On the contrary, when the CAD model does not reproduce the real scenario exactly, or the calibration process is poorly done, we are dealing with an uncertain (unstructured) environment. In order to minimize or eliminate these problems, sensory feedback (force and torque sensing) was introduced into the robotic framework. By controlling the end-effector pose and specifying its relationship to the interaction/contact forces, robot programmers can ensure that the robot maneuvers in an unstructured environment, damping possible impacts and also increasing the tolerance to positioning errors from the calibration process. Fuzzy-PI reasoning was used as the force control technique. The effectiveness of the proposed approach was evaluated in a series of experiments.
1309.2080
Structure Learning of Probabilistic Logic Programs by Searching the Clause Space
cs.LG cs.AI
Learning probabilistic logic programming languages is receiving increasing attention, and systems are available for learning the parameters (PRISM, LeProbLog, LFI-ProbLog and EMBLEM) or both the structure and the parameters (SEM-CP-logic and SLIPCASE) of these languages. In this paper we present the algorithm SLIPCOVER, for "Structure LearnIng of Probabilistic logic programs by searChing OVER the clause space". It performs a beam search in the space of probabilistic clauses and a greedy search in the space of theories, using the log likelihood of the data as the guiding heuristic. To estimate the log likelihood, SLIPCOVER performs Expectation Maximization with EMBLEM. The algorithm has been tested on five real world datasets and compared with SLIPCASE, SEM-CP-logic, Aleph and two algorithms for learning Markov Logic Networks (Learning using Structural Motifs (LSM) and ALEPH++ExactL1). SLIPCOVER achieves higher areas under the precision-recall and ROC curves in most cases.
1309.2081
Discretization and fitting of nominal data for autonomous robots
cs.RO
This paper presents methodologies for discretizing nominal robot paths extracted from 3-D CAD drawings. Behind robot path discretization is the ability to have a robot adjust the traversed paths so that the contact between the robot tool and the workpiece is properly maintained. In addition, a hybrid force/motion control system based on Fuzzy-PI control is proposed to adjust robot paths using external sensory feedback. All these capabilities help facilitate the robot programming process and increase the robots' autonomy.
1309.2084
Real-Time and Continuous Hand Gesture Spotting: an Approach Based on Artificial Neural Networks
cs.RO cs.CV
New and more natural human-robot interfaces are of crucial interest to the evolution of robotics. This paper addresses continuous and real-time hand gesture spotting, i.e., gesture segmentation plus gesture recognition. Gesture patterns are recognized by using artificial neural networks (ANNs) specifically adapted to the process of controlling an industrial robot. Since in continuous gesture recognition the communicative gestures appear intermittently with the noncommunicative, we propose a new architecture with two ANNs in series to recognize both kinds of gesture. A data glove is used as interface technology. Experimental results demonstrate that the proposed solution presents high recognition rates (over 99% for a library of ten gestures and over 96% for a library of thirty gestures), low training and learning time and a good capacity to generalize from particular situations.
1309.2086
High-level robot programming based on CAD: dealing with unpredictable environments
cs.RO
Purpose - The purpose of this paper is to present a CAD-based human-robot interface that allows non-expert users to teach a robot in a manner similar to that used by human beings to teach each other. Design/methodology/approach - Intuitive robot programming is achieved by using CAD drawings to generate robot programs off-line. Sensory feedback allows minimization of the effects of uncertainty, providing information to adjust the robot paths during robot operation. Findings - It was found that it is possible to generate a robot program from a common CAD drawing and run it without any major concerns about calibration or CAD model accuracy. Research limitations/implications - A limitation of the proposed system has to do with the fact that it was designed to be used for particular technological applications. Practical implications - Since most manufacturing companies have CAD packages in their facilities today, CAD-based robot programming may be a good option to program robots without the need for skilled robot programmers. Originality/value - The paper proposes a new CAD-based robot programming system. Robot programs are directly generated from a CAD drawing running on a commonly available 3D CAD package (Autodesk Inventor) and not from a commercial, computer aided robotics (CAR) software, making it a simple CAD integrated solution. This is a low-cost and low-setup time system where no advanced robot programming skills are required to operate it. In summary, robot programs are generated with a high-level of abstraction from the robot language.
1309.2089
A low-cost laser scanning solution for flexible robotic cells: spray coating
cs.RO
In this paper, an adaptive and low-cost robotic coating platform for small production series is presented. This new platform presents a flexible architecture that enables fast/automatic adaptive system behaviour without human intervention. The concept is based on contactless technology, using artificial vision and laser scanning to identify and characterize different workpieces travelling on a conveyor. Using laser triangulation, the workpieces are virtually reconstructed through a simplified cloud of three-dimensional (3D) points. From those reconstructed models, several algorithms are implemented to extract information about workpiece profile (pattern recognition), size, boundary and pose. Such information is then used to adjust the base robot programmes on-line. These robot programmes are generated off-line from a 3D computer-aided design model of each different workpiece profile. Finally, the robotic manipulator executes the coating process after its base programmes have been adjusted. This is a low-cost and fully autonomous system that allows adapting the robot's behaviour to different manufacturing situations. It means that the robot is ready to work over any piece at any time, and thus, small production series can be reduced to as much as a one-object series. No skilled workers or long setup times are needed to operate it. Experimental results showed that this solution proved to be efficient and can be applied not only for spray coating purposes but also for many other industrial processes (automatic manipulation, pick-and-place, inspection, etc.).
1309.2090
Accelerometer-based control of an industrial robotic arm
cs.RO
Most industrial robots are still programmed using the typical teaching process through the robot teach pendant. This paper proposes an accelerometer-based system to control an industrial robot using two small, low-cost, 3-axis wireless accelerometers. These accelerometers are attached to the human arms, capturing their behavior (gestures and postures). An Artificial Neural Network (ANN) trained with a back-propagation algorithm was used to recognize arm gestures and postures, which are then used as input for controlling the robot. The aim is for the robot to start moving almost at the same time as the user starts to perform a gesture or posture (low response time). The results show that the system allows the control of an industrial robot in an intuitive way. However, the achieved recognition rate of gestures and postures (92%) should be improved in the future, while keeping the compromise with the system response time (160 milliseconds). Finally, the results of some tests performed with an industrial robot are presented and discussed.
1309.2093
High-level programming and control for industrial robotics: using a hand-held accelerometer-based input device for gesture and posture recognition
cs.RO
Purpose - Most industrial robots are still programmed using the typical teaching process, through the use of the robot teach pendant. This is a tedious and time-consuming task that requires some technical expertise, and hence new approaches to robot programming are required. The purpose of this paper is to present a robotic system that allows users to instruct and program a robot with a high level of abstraction from the robot language. Design/methodology/approach - The paper presents in detail a robotic system that allows users, especially non-expert programmers, to instruct and program a robot just by showing it what it should do, in an intuitive way. This is done using the two most natural human interfaces (gestures and speech), a force control system and several code generation techniques. Special attention will be given to the recognition of gestures, where the data extracted from a motion sensor (three-axis accelerometer) embedded in the Wii remote controller was used to capture human hand behaviours. Gestures (dynamic hand positions) as well as manual postures (static hand positions) are recognized using a statistical approach and artificial neural networks. Practical implications - The key contribution of this paper is that it offers a practical method to program robots by means of gestures and speech, improving work efficiency and saving time. Originality/value - This paper presents an alternative to the typical robot teaching process, extending the concept of human-robot interaction and the co-worker scenario. Since most companies do not have engineering resources to make changes or add new functionalities to their robotic manufacturing systems, this system constitutes a major advantage for small- to medium-sized enterprises.
1309.2094
The Linearized Bregman Method via Split Feasibility Problems: Analysis and Generalizations
math.OC cs.CV cs.NA math.NA
The linearized Bregman method is a method for calculating sparse solutions to systems of linear equations. We formulate this problem as a split feasibility problem, propose an algorithmic framework based on Bregman projections and prove a general convergence result for this framework. Convergence of the linearized Bregman method is obtained as a special case. Our approach also allows for several generalizations such as other objective functions, incremental iterations, incorporation of non-Gaussian noise models or box constraints.
1309.2143
Secure Layered Transmission in Multicast Systems with Wireless Information and Power Transfer
cs.IT math.IT
This paper considers downlink multicast transmit beamforming for secure layered transmission systems with simultaneous wireless information and power transfer. We study the power allocation algorithm design for minimizing the total transmit power in the presence of passive eavesdroppers and energy harvesting receivers. The algorithm design is formulated as a non-convex optimization problem. Our problem formulation promotes the dual use of energy signals in providing secure communication and facilitating efficient energy transfer. In addition, we take into account a minimum required power for energy harvesting at the idle receivers and heterogeneous quality of service (QoS) requirements for the multicast video receivers. In light of the intractability of the problem, we reformulate the considered problem by replacing a non-convex probabilistic constraint with a convex deterministic constraint. Then, a semidefinite programming relaxation (SDR) approach is adopted to obtain an upper bound solution for the reformulated problem. Subsequently, sufficient conditions for the global optimal solution of the reformulated problem are revealed. Furthermore, we propose two suboptimal power allocation schemes based on the upper bound solution. Simulation results demonstrate the excellent performance and significant transmit power savings achieved by the proposed schemes compared to isotropic energy signal generation.
1309.2168
Large-scale optimization with the primal-dual column generation method
math.OC cs.LG cs.NA
The primal-dual column generation method (PDCGM) is a general-purpose column generation technique that relies on the primal-dual interior point method to solve the restricted master problems. The use of this interior point method variant allows one to obtain suboptimal and well-centered dual solutions, which naturally stabilizes the column generation. As recently presented in the literature, reductions in the number of calls to the oracle and in the CPU times are typically observed when compared to the standard column generation, which relies on extreme optimal dual solutions. However, these results are based on relatively small problems obtained from linear relaxations of combinatorial applications. In this paper, we investigate the behaviour of the PDCGM in a broader context, namely when solving large-scale convex optimization problems. We have selected applications that arise in important real-life contexts such as data analysis (multiple kernel learning problem), decision-making under uncertainty (two-stage stochastic programming problems) and telecommunication and transportation networks (multicommodity network flow problem). In the numerical experiments, we use publicly available benchmark instances to compare the performance of the PDCGM against recent results for different methods presented in the literature, which were the best available results to date. The analysis of these results suggests that the PDCGM offers an attractive alternative over specialized methods since it remains competitive in terms of number of iterations and CPU times even for large-scale optimization problems.
1309.2172
Average resistance of toroidal graphs
math.OC cs.SY
The average effective resistance of a graph is a relevant performance index in many applications, including distributed estimation and control of network systems. In this paper, we study how the average resistance depends on the graph topology and specifically on the dimension of the graph. We concentrate on $d$-dimensional toroidal grids and we exploit the connection between resistance and Laplacian eigenvalues. Our analysis provides tight estimates of the average resistance, which are key to studying its asymptotic behavior when the number of nodes grows to infinity. In dimension two, the average resistance diverges: in this case, we are able to capture its rate of growth when the sides of the grid grow at different rates. In higher dimensions, the average resistance is bounded uniformly in the number of nodes: in this case, we conjecture that its value is of order $1/d$ for large $d$. We prove this fact for hypercubes and when the side lengths go to infinity.
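The resistance-eigenvalue connection the abstract mentions is concrete enough to check numerically. A minimal sketch (illustrative only, not the paper's toroidal-grid estimates): the average effective resistance equals $2/(n-1)$ times the sum of reciprocals of the nonzero Laplacian eigenvalues, which can be validated against the pairwise formula $R_{ij} = L^+_{ii} + L^+_{jj} - 2L^+_{ij}$.

```python
import numpy as np

def average_resistance(A):
    """Average effective resistance over all node pairs, computed
    pairwise from the Moore-Penrose pseudoinverse of the Laplacian."""
    n = A.shape[0]
    L = np.diag(A.sum(axis=1)) - A
    Lp = np.linalg.pinv(L)
    d = np.diag(Lp)
    R = d[:, None] + d[None, :] - 2 * Lp   # R_ij = Lp_ii + Lp_jj - 2 Lp_ij
    return R[np.triu_indices(n, k=1)].mean()

def average_resistance_spectral(A):
    """Same quantity from the Laplacian spectrum:
    R_avg = 2/(n-1) * sum_{i>=2} 1/lambda_i."""
    n = A.shape[0]
    L = np.diag(A.sum(axis=1)) - A
    lam = np.sort(np.linalg.eigvalsh(L))[1:]   # drop the zero eigenvalue
    return 2.0 / (n - 1) * np.sum(1.0 / lam)
```

For a triangle, for instance, every pair has effective resistance 2/3, and both routines return exactly that average.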
1309.2175
Cascading failures in spatially-embedded random networks
physics.soc-ph cond-mat.stat-mech cs.SI
Cascading failures constitute an important vulnerability of interconnected systems. Here we focus on the study of such failures on networks in which the connectivity of nodes is constrained by geographical distance. Specifically, we use random geometric graphs as representative examples of such spatial networks, and study the properties of cascading failures on them in the presence of distributed flow. The key finding of this study is that the process of cascading failures is non-self-averaging on spatial networks, and thus, aggregate inferences made from analyzing an ensemble of such networks lead to incorrect conclusions when applied to a single network, no matter how large the network is. We demonstrate that this lack of self-averaging disappears with the introduction of a small fraction of long-range links into the network. We simulate the well studied preemptive node removal strategy for cascade mitigation and show that it is largely ineffective in the case of spatial networks. We introduce an altruistic strategy designed to limit the loss of network nodes in the event of a cascade triggering failure and show that it performs better than the preemptive strategy. Finally, we consider a real-world spatial network viz. a European power transmission network and validate that our findings from the study of random geometric graphs are also borne out by simulations of cascading failures on the empirical network.
1309.2183
Application of Artificial Neural Networks in Estimating Participation in Elections
cs.NE cs.CY
It has been shown that artificial neural networks can be considerably effective in forecasting and analyzing flows that traditional and statistical methods are not able to handle. In this article, using a two-layer feedforward network with a tan-sigmoid transfer function in the input and output layers, we forecast the public participation rate in Kohgiloye and Boyerahmad province in the future presidential election of the Islamic Republic of Iran with 91% accuracy. Assessment measures of participation such as the confusion matrix and ROC diagrams support our claims.
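As a hedged illustration of the architecture described (a two-layer feedforward network with tan-sigmoid transfer functions), the sketch below trains such a network with plain batch gradient descent on synthetic data; the predictors, targets, network size and training schedule are invented placeholders, not the paper's provincial election data or setup.

```python
import numpy as np

rng = np.random.default_rng(0)
tansig = np.tanh  # MATLAB's tan-sigmoid transfer function is tanh

# Synthetic stand-in data: 3 hypothetical predictors -> turnout fraction.
X = rng.uniform(size=(20, 3))
y = 0.5 + 0.3 * X[:, :1]            # invented target, not real election data

W1 = rng.normal(scale=0.5, size=(3, 5)); b1 = np.zeros(5)   # hidden layer
W2 = rng.normal(scale=0.5, size=(5, 1)); b2 = np.zeros(1)   # output layer

lr = 0.2
for _ in range(5000):               # batch gradient descent (backpropagation)
    H = tansig(X @ W1 + b1)
    out = tansig(H @ W2 + b2)
    g_out = (out - y) * (1 - out**2)          # tanh'(z) = 1 - tanh(z)^2
    g_hid = (g_out @ W2.T) * (1 - H**2)
    W2 -= lr * H.T @ g_out / len(X); b2 -= lr * g_out.mean(axis=0)
    W1 -= lr * X.T @ g_hid / len(X); b1 -= lr * g_hid.mean(axis=0)

mse = float(np.mean((tansig(tansig(X @ W1 + b1) @ W2 + b2) - y) ** 2))
```

On this toy target the training error shrinks well below the variance of the labels, which is all the sketch is meant to demonstrate.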
1309.2199
Distinguishing Topical and Social Groups Based on Common Identity and Bond Theory
cs.SI cs.CY physics.soc-ph
Social groups play a crucial role in social media platforms because they form the basis for user participation and engagement. Groups are created explicitly by members of the community, but also form organically as members interact. Due to their importance, they have been studied widely (e.g., community detection, evolution, activity, etc.). One of the key questions for understanding how such groups evolve is whether there are different types of groups and how they differ. In Sociology, theories have been proposed to help explain how such groups form. In particular, the common identity and common bond theory states that people join groups based on identity (i.e., interest in the topics discussed) or bond attachment (i.e., social relationships). The theory has been applied qualitatively to small groups to classify them as either topical or social. We use the identity and bond theory to define a set of features to classify groups into those two categories. Using a dataset from Flickr, we extract user-defined groups and automatically-detected groups, obtained from a community detection algorithm. We discuss the process of manual labeling of groups into social or topical and present results of predicting the group label based on the defined features. We directly validate the predictions of the theory showing that the metrics are able to forecast the group type with high accuracy. In addition, we present a comparison between declared and detected groups along topicality and sociality dimensions.
1309.2236
The Cost of an Epidemic over a Complex Network: A Random Matrix Approach
cs.SI physics.soc-ph
In this paper we quantify the total economic impact of an epidemic over a complex network using tools from random matrix theory. Incorporating the direct and indirect costs of infection, we calculate the disease cost in the large graph limit for an SIS (Susceptible - Infected - Susceptible) infection process. We also give an upper bound on this cost for arbitrary finite graphs and illustrate both calculated costs using extensive simulations on random and real-world networks. We extend these calculations by considering the total social cost of an epidemic, accounting for both the immunization and disease costs for various immunization strategies and determining the optimal immunization. Our work focuses on the transient behavior of the epidemic, in contrast to previous research, which typically focuses on determining the steady-state system equilibrium.
1309.2238
Probability and the Classical/Quantum Divide
quant-ph cs.IT math.IT
This paper considers the problem of distinguishing between classical and quantum domains in macroscopic phenomena using tests based on probability and it presents a condition on the ratios of the outcomes being the same (Ps) to being different (Pn). Given three events, Ps/Pn for the classical case, where there are no 3-way coincidences, is one-half whereas for the quantum state it is one-third. For non-maximally entangled objects we find that so long as r < 5.83, we can separate them from classical objects using a probability test. For maximally entangled particles (r = 1), we propose that the value of 5/12 be used for Ps/Pn to separate classical and quantum states when no other information is available and measurements are noisy.
1309.2240
Contour Manifolds and Optimal Transport
math.DG cs.CV
Describing shapes by suitable measures in object segmentation, as proposed in [24], allows one to combine the advantages of the representations as parametrized contours and as indicator functions. The pseudo-Riemannian structure of optimal transport can be used to model shapes in a way similar to contours, while the Kantorovich functional enables the application of convex optimization methods for global optimality of the segmentation functional. In this paper we provide a mathematical study of the shape measure representation and its relation to the contour description. In particular we show that the pseudo-Riemannian structure of optimal transport, when restricted to the set of shape measures, yields a manifold which is diffeomorphic to the manifold of closed contours. A discussion of the metric induced by optimal transport and the corresponding geodesic equation is given.
1309.2250
A Search Algorithm to Find Multiple Sets of One Dimensional Unipolar (Optical) Orthogonal Codes with Same Code-length and Low Weight
cs.IT math.IT
This paper describes a search algorithm to find multiple sets of one-dimensional unipolar (optical) orthogonal codes characterized by their parameters: binary code sequences of length n bits and weight w (the number of 1 bits in the sequence), together with auto-correlation and cross-correlation constraints for the codes within a set. For a given code length n and code weight w, all possible difference sets with auto-correlation constraints from 1 to w-1 can be designed with distinct code serial numbers. For a given cross-correlation constraint from 1 to w-1, multiple sets can be searched out of the codes with auto-correlation constraints less than or equal to the given auto-correlation constraint using the proposed algorithm. The searched multiple sets can be sorted as having a number of codes not less than the upper bound on set size given by the Johnson bound. These one-dimensional unipolar orthogonal codes have applications in incoherent optical code division multiple access systems.
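The correlation constraints at the heart of such a search are easy to state in code. Below is a brute-force sketch (a greedy stand-in for illustration, not the paper's algorithm or its difference-set machinery): codes are represented as sets of bit-1 positions, and a candidate is kept only if its cyclic auto-correlation and its cross-correlations with already accepted codes stay within the constraints.

```python
from itertools import combinations

def cyclic_corr(a_pos, b_pos, n, tau):
    """Coincidences between code a and code b cyclically shifted by tau;
    codes are given as sets of bit-1 positions in a length-n sequence."""
    return len(a_pos & {(p + tau) % n for p in b_pos})

def auto_ok(pos, n, lam_a):
    """Autocorrelation at every nonzero shift must not exceed lam_a."""
    return all(cyclic_corr(pos, pos, n, t) <= lam_a for t in range(1, n))

def cross_ok(pos_a, pos_b, n, lam_c):
    """Cross-correlation at every shift (including 0) must not exceed lam_c."""
    return all(cyclic_corr(pos_a, pos_b, n, t) <= lam_c for t in range(n))

def search_codes(n, w, lam_a, lam_c):
    """Greedy search for one set of (n, w, lam_a, lam_c) unipolar codes."""
    found = []
    for c in combinations(range(n), w):
        pos = set(c)
        if auto_ok(pos, n, lam_a) and all(cross_ok(pos, f, n, lam_c)
                                          for f in found):
            found.append(pos)
    return found
```

For (n, w) = (7, 3) with both constraints equal to 1, the search recovers the single perfect-difference-set code {0, 1, 3}, in line with the Johnson bound for these parameters.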
1309.2254
Design of Two Dimensional Unipolar (Optical) Orthogonal Codes Through One Dimensional Unipolar (Optical) Orthogonal Codes
cs.IT math.IT
In this paper, an algorithm for the construction of multiple sets of two-dimensional (2D) or matrix unipolar (optical) orthogonal codes has been proposed. Representations of these 2D codes in the difference-of-positions representation (DoPR) have also been discussed, along with the conventional weighted-positions representation (WPR) of the code. This paper also proposes less complex methods for the calculation of auto-correlation as well as cross-correlation constraints within a set of matrix codes. The multiple sets of matrix codes provide flexibility in the selection of the optical orthogonal code set in wavelength-hopping time-spreading (WHTS) optical code division multiple access (CDMA) systems.
1309.2304
Information Theory and Moduli of Riemann Surfaces
math.AG cs.IT math.IT
One interpretation of Torelli's Theorem, which asserts that a compact Riemann Surface $X$ of genus $g > 1$ is determined by the $g(g+1)/2$ entries of the period matrix, is that the period matrix is a message about $X$. Since this message depends on only $3g-3$ moduli, it is sparse, or at least approximately so, in the sense of information theory. Thus, methods from information theory may be useful in reconstructing the period matrix, and hence the Riemann surface, from a small subset of the periods. The results here show that, with high probability, any set of $3g-3$ periods form moduli for the surface.
1309.2343
A Finite-Blocklength Perspective on Gaussian Multi-Access Channels
cs.IT math.IT
Motivated by the growing application of wireless multi-access networks with stringent delay constraints, we investigate the Gaussian multiple access channel (MAC) in the finite blocklength regime. Building upon information spectrum concepts, we develop several non-asymptotic inner bounds on channel coding rates over the Gaussian MAC with a given finite blocklength, positive average error probability, and maximal power constraints. Employing Central Limit Theorem (CLT) approximations, we also obtain achievable second-order coding rates for the Gaussian MAC based on an explicit expression for its dispersion matrix. We observe that, unlike the pentagon shape of the asymptotic capacity region, the second-order region has a curved shape with no sharp corners. A main emphasis of the paper is to provide a new perspective on the procedure of handling input cost constraints for tight achievability proofs. Contrary to the complicated achievability techniques in the literature, we show that with a proper choice of input distribution, tight bounds can be achieved via the standard random coding argument and a modified typicality decoding. In particular, we prove that codebooks generated randomly according to independent uniform distributions on the respective "power shells" perform far better than both independent and identically distributed (i.i.d.) Gaussian inputs and TDMA with power control. Interestingly, analogous to an error exponent result of Gallager, the resulting achievable region lies roughly halfway between that of the i.i.d. Gaussian inputs and that of a hypothetical "sum-power shell" input. However, dealing with such a non-i.i.d. input requires additional analysis such as a new change of measure technique and application of a Berry-Esseen CLT for functions of random variables.
1309.2350
Exponentially Fast Parameter Estimation in Networks Using Distributed Dual Averaging
cs.LG cs.SI math.OC stat.ML
In this paper we present an optimization-based view of distributed parameter estimation and observational social learning in networks. Agents receive a sequence of random, independent and identically distributed (i.i.d.) signals, each of which individually may not be informative about the underlying true state, but the signals together are globally informative enough to make the true state identifiable. Using an optimization-based characterization of Bayesian learning as proximal stochastic gradient descent (with Kullback-Leibler divergence from a prior as a proximal function), we show how to efficiently use a distributed, online variant of Nesterov's dual averaging method to solve the estimation with purely local information. When the true state is globally identifiable, and the network is connected, we prove that agents eventually learn the true parameter using a randomized gossip scheme. We demonstrate that with high probability the convergence is exponentially fast with a rate dependent on the KL divergence of observations under the true state from observations under the second likeliest state. Furthermore, our work also highlights the possibility of learning under continuous adaptation of the network, which is a consequence of employing a constant, unit stepsize for the algorithm.
1309.2351
Elementos de ingenier\'ia de explotaci\'on de la informaci\'on aplicados a la investigaci\'on tributaria fiscal
cs.AI
By introducing elements of information mining into tax analysis, by means of data mining software and advanced computational concepts of artificial intelligence, the problem of the tax evader's crime against public property is addressed. Through an empirical approach based on a hypothetical use case, induction algorithms, neural networks and Bayesian networks are applied to determine the feasibility of their heuristic application by the public tax administrator. Different strategies are explored to facilitate the work of local and regional federal tax inspectors, considering their limited computational capabilities, while remaining equally effective for social scientists committed to handcrafted tax research.
1309.2355
An Optimal Load-Frequency Control Method for Inverter-Based Renewable Energy Transmission
cs.SY
The frequency droop response of conventional turbine-driven synchronous generators with respect to load increases is normally used in order to obtain stable operating characteristics for multiple generators operating in parallel over large geographical regions. This presents a challenge for renewable energy sources that interface to the transmission grid through static inverters, which do not exhibit an intrinsic frequency droop characteristic. This paper provides a technique for designing optimal load-frequency controllers for transmission-line inverters fed from renewable energy sources that allows fast dynamic response under variable solar and wind conditions while maintaining stability with interconnected synchronous generators. A control technique based on LQG optimization theory is presented. A three-area system in a region of mixed wind and solar photovoltaic sources is modeled and analyzed in detail in a manner that confirms the effectiveness of the proposed load-frequency control method.
1309.2371
Performance analysis of modified algorithm for finding multilevel association rules
cs.DB
Multilevel association rules explore the concept hierarchy at multiple levels, which provides more specific information. The Apriori algorithm explores single-level association rules, and many implementations of it are available. A fast Apriori implementation is modified to develop a new algorithm for finding multilevel association rules. In this study, the performance of this new algorithm is analyzed in terms of running time in seconds.
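For reference, the single-level Apriori procedure that such a modified algorithm builds on can be sketched in a few lines; a multilevel variant would, roughly, re-run this level-wise search at each layer of the concept hierarchy. That extension, like the toy transactions and support threshold below, is an illustrative assumption rather than the paper's code.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Classic Apriori: frequent itemsets via level-wise candidate
    generation with the join and prune steps."""
    transactions = [frozenset(t) for t in transactions]
    items = {i for t in transactions for i in t}

    def support(s):
        return sum(s <= t for t in transactions) / len(transactions)

    frequent = {}
    level = [frozenset([i]) for i in items
             if support(frozenset([i])) >= min_support]
    k = 1
    while level:
        frequent.update({s: support(s) for s in level})
        k += 1
        # join step: size-k candidates from unions of frequent itemsets
        candidates = {a | b for a in level for b in level if len(a | b) == k}
        # prune step: every (k-1)-subset must itself be frequent
        level = [c for c in candidates
                 if all(frozenset(sub) in frequent
                        for sub in combinations(c, k - 1))
                 and support(c) >= min_support]
    return frequent
```

On four toy market-basket transactions with a 50% support threshold, this yields the three frequent items plus the two frequent pairs, and the prune step discards the triple.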
1309.2375
Accelerated Proximal Stochastic Dual Coordinate Ascent for Regularized Loss Minimization
stat.ML cs.LG cs.NA stat.CO
We introduce a proximal version of the stochastic dual coordinate ascent method and show how to accelerate the method using an inner-outer iteration procedure. We analyze the runtime of the framework and obtain rates that improve state-of-the-art results for various key machine learning optimization problems including SVM, logistic regression, ridge regression, Lasso, and multiclass SVM. Experiments validate our theoretical findings.
1309.2388
Minimizing Finite Sums with the Stochastic Average Gradient
math.OC cs.LG stat.CO stat.ML
We propose the stochastic average gradient (SAG) method for optimizing the sum of a finite number of smooth convex functions. Like stochastic gradient (SG) methods, the SAG method's iteration cost is independent of the number of terms in the sum. However, by incorporating a memory of previous gradient values the SAG method achieves a faster convergence rate than black-box SG methods. The convergence rate is improved from O(1/k^{1/2}) to O(1/k) in general, and when the sum is strongly-convex the convergence rate is improved from the sub-linear O(1/k) to a linear convergence rate of the form O(p^k) for p < 1. Further, in many cases the convergence rate of the new method is also faster than black-box deterministic gradient methods, in terms of the number of gradient evaluations. Numerical experiments indicate that the new algorithm often dramatically outperforms existing SG and deterministic gradient methods, and that the performance may be further improved through the use of non-uniform sampling strategies.
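The iteration is simple to sketch for a least-squares finite sum: keep a memory of the most recent gradient of every term, refresh one randomly chosen entry per step, and move along the average of the stored gradients. The step size 1/(16L) follows a common SAG recommendation; the concrete problem, constants and iteration count here are illustrative assumptions.

```python
import numpy as np

def sag(A, b, steps=30000, seed=0):
    """Stochastic average gradient for f(x) = 1/(2n) sum_i (a_i.x - b_i)^2.
    Each iteration costs one term's gradient, like SG, but the update
    uses the running average of the stored per-term gradients."""
    n, d = A.shape
    x = np.zeros(d)
    memory = np.zeros((n, d))      # last seen gradient of each term
    gsum = np.zeros(d)             # sum of the stored gradients
    lr = 1.0 / (16 * np.max(np.sum(A**2, axis=1)))  # 1/(16 L), per-term L
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        i = rng.integers(n)
        g = (A[i] @ x - b[i]) * A[i]   # gradient of the i-th term only
        gsum += g - memory[i]          # replace term i in the running sum
        memory[i] = g
        x -= lr * gsum / n             # step along the average gradient
    return x
```

On a well-conditioned random least-squares problem this recovers the exact solution at the per-iteration cost of plain SG, which is the trade-off the abstract describes.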
1309.2402
Anger is More Influential Than Joy: Sentiment Correlation in Weibo
cs.SI physics.soc-ph
Recent years have witnessed the tremendous growth of online social media. In China, Weibo, a Twitter-like service, has attracted more than 500 million users in less than four years. Connected by online social ties, different users influence each other emotionally. We find that the correlation of anger among users is significantly higher than that of joy, which indicates that angry emotion could spread more quickly and broadly in the network, while the correlation of sadness is surprisingly low and highly fluctuating. Moreover, there is a stronger sentiment correlation between a pair of users if they share more interactions, and users with a larger number of friends possess more significant sentiment influence on their neighborhoods. Our findings could provide insights for modeling sentiment influence and propagation in online social networks.
1309.2471
Implementation of NLization Framework for Verbs, Pronouns and Determiners with EUGENE
cs.CL
The UNL system was designed and implemented by a nonprofit organization, the UNDL Foundation, at Geneva in 1999. UNL applications are software applications that allow end users to accomplish natural language tasks, such as translating, summarizing, retrieving or extracting information. There are two major web-based applications. The first is the Interactive ANalyzer (IAN), a natural language analysis system that represents natural language sentences as semantic networks in the UNL format. The other is the dEep-to-sUrface GENErator (EUGENE), an open-source interactive NLizer that generates natural language sentences out of semantic networks represented in the UNL format. This paper focuses on the NLization framework with EUGENE, using the UNL system to accomplish the task of machine translation. In the whole NLization process, EUGENE takes a UNL input and delivers an output in natural language without any human intervention. It is language-independent and has to be parametrized to the target natural language through a dictionary and a grammar, provided as separate interpretable files. This paper explains how UNL input is syntactically and semantically analyzed with the UNL-NL T-Grammar for NLization of UNL sentences involving verbs, pronouns and determiners into the Punjabi natural language.
1309.2473
Interference Alignment with Diversity for the $2 \times 2$ $X$-Network with three antennas
cs.IT math.IT
Interference alignment is known to achieve the maximum sum DoF of $\frac{4M}{3}$ in the $2 \times 2$ $X$-Network (i.e., two-transmitter (Tx) two-receiver (Rx) $X$-Network) with $M$ antennas at each node, as demonstrated by Jafar and Shamai. Recently, an Alamouti code based transmission scheme, which we call the Li-Jafarkhani-Jafar (LJJ) scheme, was proposed for the $2 \times 2$ $X$-Network with two antennas at each node. This scheme achieves a sum degrees of freedom (DoF) of $\frac{8}{3}$ and also a diversity gain of two when fixed finite constellations are employed at each Tx. In the LJJ scheme, each Tx required the knowledge of only its own channel unlike the Jafar-Shamai scheme which required global CSIT to achieve the maximum possible sum DoF of $\frac{8}{3}$. Bit error rate (BER) is an important performance metric when the coding length is finite. This work first proposes a new STBC for a three transmit antenna single user MIMO system. Building on this STBC, we extend the LJJ scheme to the $2 \times 2$ $X$-Network with three antennas at each node. Local channel knowledge is assumed at each Tx. It is shown that the proposed scheme achieves the maximum possible sum DoF of 4. A diversity gain of 3 is also guaranteed when fixed finite constellation inputs are used.
1309.2502
Distributed Maximum Likelihood Sensor Network Localization
cs.IT math.IT
We propose a class of convex relaxations to solve the sensor network localization problem, based on a maximum likelihood (ML) formulation. This class, as well as the tightness of the relaxations, depends on the noise probability density function (PDF) of the collected measurements. We derive a computationally efficient edge-based version of this ML convex relaxation class and we design a distributed algorithm that enables the sensor nodes to solve these edge-based convex programs locally by communicating only with their close neighbors. This algorithm relies on the alternating direction method of multipliers (ADMM), it converges to the centralized solution, it can run asynchronously, and it is computation error-resilient. Finally, we compare our proposed distributed scheme with other available methods, both analytically and numerically, and we argue for the added value of ADMM, especially for large-scale networks.
1309.2505
Compressed Sensing for Block-Sparse Smooth Signals
stat.ML cs.IT math.IT math.ST stat.TH
We present reconstruction algorithms for smooth signals with block sparsity from their compressed measurements. We tackle the issue of varying group size via the group-sparse least absolute shrinkage and selection operator (LASSO) as well as via latent group LASSO regularizations. We achieve smoothness in the signal via fusion. We develop low-complexity solvers for our proposed formulations through the alternating direction method of multipliers.
1309.2506
A Multi-Stream HMM Approach to Offline Handwritten Arabic Word Recognition
cs.CV
In this paper, we present a new approach for a cursive Arabic text recognition system. The objective is to propose a methodology for analytical offline recognition of handwritten Arabic allowing rapid implementation. The first part of the recognition system is the preprocessing phase, which prepares the data and extracts a set of simple statistical features by two methods: from a window sliding along the text line from right to left, and by the VH2D approach (which consists in projecting every character onto the abscissa, the ordinate and the diagonals at 45{\deg} and 135{\deg}). The resulting feature vectors are then fed to Hidden Markov Models (HMMs), and the two HMMs are combined through a multi-stream approach.
1309.2517
Forecasting Stock Time-Series using Data Approximation and Pattern Sequence Similarity
cs.DB
Time series analysis is the process of building a model using statistical techniques to represent characteristics of time series data. Processing and forecasting huge time series data is a challenging task. This paper presents Approximation and Prediction of Stock Time-series data (APST), a two-step approach to predict the direction of change of stock price indices. First, it performs data approximation using a technique called Multilevel Segment Mean (MSM). In the second phase, prediction is performed on the approximated data using Euclidean distance and the Nearest-Neighbour technique. The computational cost of data approximation is O(n ni) and the computational cost of the prediction task is O(m |NN|). Thus, in terms of accuracy and the time required for prediction, the proposed method is more efficient than the existing Label Based Forecasting (LBF) method [1].
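Since the abstract only names MSM and nearest-neighbour matching, the following is a speculative single-level sketch of the two phases: block means as the approximation step, then Euclidean 1-NN over historical windows of the approximated series to predict the next value. The segment length and window width are invented parameters, not the paper's.

```python
import numpy as np

def segment_means(series, seg_len):
    """Level-one approximation: replace each block of seg_len points by its
    mean (a piecewise-aggregate reading of the Multilevel Segment Mean idea)."""
    series = np.asarray(series, dtype=float)
    n = len(series) // seg_len * seg_len
    return series[:n].reshape(-1, seg_len).mean(axis=1)

def predict_next(approx, w):
    """1-NN pattern matching on the approximated series: find the historical
    window closest (Euclidean) to the last w values, return its successor."""
    query = approx[-w:]
    best, best_d = None, np.inf
    for i in range(len(approx) - w):       # only windows that have a successor
        d = np.linalg.norm(approx[i:i + w] - query)
        if d < best_d:
            best_d, best = d, approx[i + w]
    return best
```

On a strictly periodic toy series the matched historical window is exact, so the predicted successor is the true next value.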
1309.2558
On differential passivity of physical systems
cs.SY math.DS
Differential passivity is a property that allows one to check with a pointwise criterion that a system is incrementally passive, a property relevant to the study of interconnected systems in the context of regulation, synchronization, and estimation. The paper investigates how restrictive the property is, focusing on a class of open gradient systems encountered in the coenergy modeling framework of physical systems, in particular the Brayton-Moser formalism for nonlinear electrical circuits.
1309.2574
Randomized Consensus with Attractive and Repulsive Links
cs.SY cs.MA math.OC
We study convergence properties of a randomized consensus algorithm over a graph with both attractive and repulsive links. At each time instant, a node is randomly selected to interact with a random neighbor. Depending on whether the link between the two nodes belongs to a given subgraph of attractive or repulsive links, the node update follows a standard attractive weighted average or a repulsive weighted average, respectively. The repulsive update has the opposite sign of the standard consensus update. In this way, it counteracts the consensus formation and can be seen as a model of link faults or malicious attacks in a communication network, or the impact of trust and antagonism in a social network. Various probabilistic convergence and divergence conditions are established. A threshold condition for the strength of the repulsive action is given for convergence in expectation: when the repulsive weight crosses this threshold value, the algorithm transits from convergence to divergence. An explicit value of the threshold is derived for classes of attractive and repulsive graphs. The results show that a single repulsive link can sometimes drastically change the behavior of the consensus algorithm. They also explicitly show how the robustness of the consensus algorithm depends on the size and other properties of the graphs.
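The randomized update rule described above is easy to simulate. The sketch below is a minimal illustration, not the paper's analysis: the edge sets, the common weight `w`, and the step count are illustrative choices, and the repulsive update is modeled exactly as stated, with the opposite sign of the attractive one.

```python
import random

def randomized_consensus(x, attractive, repulsive, w=0.3, steps=2000, seed=1):
    """Simulate the randomized update: a random node i picks a random
    neighbour j; on an attractive link it moves toward x_j, on a
    repulsive link it moves away (opposite sign)."""
    rng = random.Random(seed)
    x = list(x)
    neigh = {i: [] for i in range(len(x))}
    for i, j in attractive + repulsive:
        neigh[i].append(j)
        neigh[j].append(i)
    rep = {frozenset(e) for e in repulsive}
    for _ in range(steps):
        i = rng.randrange(len(x))
        if not neigh[i]:
            continue
        j = rng.choice(neigh[i])
        sign = -1 if frozenset((i, j)) in rep else 1
        x[i] += sign * w * (x[j] - x[i])
    return x

# All-attractive triangle: states converge toward a common value.
x = randomized_consensus([0.0, 1.0, 2.0], [(0, 1), (1, 2), (0, 2)], [])
print(max(x) - min(x))  # close to zero
```

Replacing even one attractive edge by a repulsive one and increasing `w` lets one observe the convergence-to-divergence transition the abstract refers to.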
1309.2593
Maximizing submodular functions using probabilistic graphical models
cs.LG math.OC
We consider the problem of maximizing submodular functions; while this problem is known to be NP-hard, several numerically efficient local search techniques with approximation guarantees are available. In this paper, we propose a novel convex relaxation which is based on the relationship between submodular functions, entropies and probabilistic graphical models. In a graphical model, the entropy of the joint distribution decomposes as a sum of marginal entropies of subsets of variables; moreover, for any distribution, the entropy of the closest distribution factorizing in the graphical model provides an upper bound on the entropy. For directed graphical models, this last property turns out to be a direct consequence of the submodularity of the entropy function, and allows the generalization of graphical-model-based upper bounds to any submodular function. These upper bounds may then be jointly maximized with respect to a set and minimized with respect to the graph, leading to a convex variational inference scheme for maximizing submodular functions, based on outer approximations of the marginal polytope and maximum likelihood bounded treewidth structures. By considering graphs of increasing treewidths, we may then explore the trade-off between computational complexity and tightness of the relaxation. We also present extensions to constrained problems and maximizing the difference of submodular functions, which include all possible set functions.
1309.2597
Mine Blood Donors Information through Improved K-Means Clustering
cs.DB
The number of accidents and health diseases, which are increasing at an alarming rate, is resulting in a huge increase in the demand for blood. There is a necessity for organized analysis of blood donor databases and blood bank repositories. Clustering analysis is one of the data mining applications, and the K-means clustering algorithm is fundamental to modern clustering techniques. K-means is a traditional, iterative algorithm: at every iteration, it computes the distance from the centroid of each cluster to each and every data point. This paper improves the original K-means algorithm by choosing the initial centroids according to the distribution of the data. Results and discussion show that the improved K-means algorithm produces accurate clusters in less computation time when finding donor information.
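One plausible reading of "improving the initial centroids with the distribution of data" is sketched below: sort the points by their distance from the origin, split the sorted data into k equal slices, and seed each centroid with a slice mean. This is an assumption for illustration only; the paper's exact seeding rule may differ.

```python
import numpy as np

def distribution_based_centroids(X, k):
    """Seed centroids from the data distribution: sort points by norm,
    split into k equal slices, use each slice's mean as a seed.
    (A sketch of 'improved initial centroids', not the paper's exact rule.)"""
    order = np.argsort(np.linalg.norm(X, axis=1))
    slices = np.array_split(X[order], k)
    return np.vstack([s.mean(axis=0) for s in slices])

def kmeans(X, k, iters=50):
    """Standard Lloyd iterations starting from the distribution-based seeds."""
    C = distribution_based_centroids(X, k)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        C = np.vstack([X[labels == j].mean(axis=0) if np.any(labels == j)
                       else C[j] for j in range(k)])
    return labels, C

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
labels, C = kmeans(X, 2)
```

Because the seeds already track where the data mass lies, far fewer iterations are wasted moving centroids out of empty regions, which is the source of the claimed runtime improvement.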
1309.2643
Noisy Interactive Quantum Communication
quant-ph cs.CC cs.IT math.IT
We study the problem of simulating protocols in a quantum communication setting over noisy channels. This problem falls at the intersection of quantum information theory and quantum communication complexity, and it will be of importance for eventual real-world applications of interactive quantum protocols, which can be proved to have exponentially lower communication costs than their classical counterparts for some problems. These are the first results concerning the quantum version of this problem, originally studied by Schulman in a classical setting (FOCS '92, STOC '93). We simulate a length $N$ quantum communication protocol by a length $O(N)$ protocol with arbitrarily small error. Under adversarial noise, our strategy can withstand, for arbitrarily small $\epsilon > 0$, error rates as high as $1/2 -\epsilon$ when parties pre-share perfect entanglement, but the classical channel is noisy. We show that this is optimal. We provide extensions of these results to several other models of communication, including when the entanglement is also noisy, and when there is no pre-shared entanglement but communication is quantum and noisy. We also study the case of random noise, for which we provide simulation protocols with positive communication rates and no pre-shared entanglement over some quantum channels with quantum capacity $C_Q=0$, proving that $C_Q$ is in general not the right characterization of a channel's capacity for interactive quantum communication. Our results are stated for a general quantum communication protocol in which Alice and Bob collaborate, and these results hold in particular in the quantum communication complexity settings of the Yao and Cleve--Buhrman models.
1309.2648
Resurrecting My Revolution: Using Social Link Neighborhood in Bringing Context to the Disappearing Web
cs.IR cs.DL
In previous work we reported that resources linked in tweets disappeared at the rate of 11% in the first year followed by 7.3% each year afterwards. We also found that in the first year 6.7%, and 14.6% in each subsequent year, of the resources were archived in public web archives. In this paper we revisit the same dataset of tweets and find that our prior model still holds and the calculated error for estimating percentages missing was about 4%, but we found the rate of archiving produced a higher error of about 11.5%. We also discovered that resources have disappeared from the archives themselves (7.89%) as well as reappeared on the live web after being declared missing (6.54%). We have also tested the availability of the tweets themselves and found that 10.34% have disappeared from the live web. To mitigate the loss of resources on the live web, we propose the use of a "tweet signature". Using the Topsy API, we extract the top five most frequent terms from the union of all tweets about a resource, and use these five terms as a query to Google. We found that using tweet signatures results in discovering replacement resources with 70+% textual similarity to the missing resource 41% of the time.
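The "tweet signature" construction lends itself to a short sketch. The paper extracts tweets via the Topsy API and queries Google; the snippet below only illustrates the core step, picking the five most frequent terms from the union of tweets about a resource. The tokenization and the stopword list are assumptions made here for the example.

```python
from collections import Counter
import re

def tweet_signature(tweets, k=5):
    """Build a 'tweet signature': the k most frequent terms across all
    tweets about a resource, to be used as a search-engine query."""
    # Illustrative stopword list; the paper does not specify one.
    stopwords = {"the", "a", "an", "to", "of", "and", "in", "is", "for", "on", "rt"}
    counts = Counter()
    for tweet in tweets:
        for term in re.findall(r"[a-z0-9']+", tweet.lower()):
            if term not in stopwords:
                counts[term] += 1
    return [term for term, _ in counts.most_common(k)]

tweets = [
    "Revolutionary speech streamed live from Tahrir Square",
    "Watch the Tahrir Square speech - incredible scenes",
    "Tahrir speech archived here, incredible moment in history",
]
print(tweet_signature(tweets))
```

The resulting five-term query is then submitted to a search engine to locate candidate replacement pages, which are ranked by textual similarity to the missing resource.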
1309.2655
First-Order Provenance Games
cs.DB cs.LO
We propose a new model of provenance, based on a game-theoretic approach to query evaluation. First, we study games G in their own right, and ask how to explain that a position x in G is won, lost, or drawn. The resulting notion of game provenance is closely related to winning strategies, and excludes from provenance all "bad moves", i.e., those which unnecessarily allow the opponent to improve the outcome of a play. In this way, the value of a position is determined by its game provenance. We then define provenance games by viewing the evaluation of a first-order query as a game between two players who argue whether a tuple is in the query answer. For RA+ queries, we show that game provenance is equivalent to the most general semiring of provenance polynomials N[X]. Variants of our game yield other known semirings. However, unlike semiring provenance, game provenance also provides a "built-in" way to handle negation and thus to answer why-not questions: In (provenance) games, the reason why x is not won, is the same as why x is lost or drawn (the latter is possible for games with draws). Since first-order provenance games are draw-free, they yield a new provenance model that combines how- and why-not provenance.
1309.2660
Data-Driven Grasp Synthesis - A Survey
cs.RO
We review the work on data-driven grasp synthesis and the methodologies for sampling and ranking candidate grasps. We divide the approaches into three groups based on whether they synthesize grasps for known, familiar or unknown objects. This structure allows us to identify common object representations and perceptual processes that facilitate the employed data-driven grasp synthesis technique. In the case of known objects, we concentrate on the approaches that are based on object recognition and pose estimation. In the case of familiar objects, the techniques use some form of a similarity matching to a set of previously encountered objects. Finally for the approaches dealing with unknown objects, the core part is the extraction of specific features that are indicative of good grasps. Our survey provides an overview of the different methodologies and discusses open problems in the area of robot grasping. We also draw a parallel to the classical approaches that rely on analytic formulations.
1309.2675
A Brief Study of Open Source Graph Databases
cs.DB cs.DS cs.SE
With the proliferation of large irregular sparse relational datasets, new storage and analysis platforms have arisen to fill gaps in performance and capability left by conventional approaches built on traditional database technologies and query languages. Many of these platforms apply graph structures and analysis techniques to enable users to ingest, update, query and compute on the topological structure of these relationships represented as set(s) of edges between set(s) of vertices. To store and process Facebook-scale datasets, they must be able to support data sources with billions of edges, update rates of millions of updates per second, and complex analysis kernels. These platforms must provide intuitive interfaces that enable graph experts and novice programmers to write implementations of common graph algorithms. In this paper, we explore a variety of graph analysis and storage platforms. We compare their capabilities, interfaces, and performance by implementing and computing a set of real-world graph algorithms on synthetic graphs with up to 256 million edges. In the spirit of full disclosure, several authors are affiliated with the development of STINGER.
1309.2676
Greedy Signal Space Methods for incoherence and beyond
math.NA cs.IT math.IT
Compressive sampling (CoSa) has provided many methods for signal recovery of signals compressible with respect to an orthonormal basis. However, modern applications have sparked the emergence of approaches for signals not sparse in an orthonormal basis but in some arbitrary, perhaps highly overcomplete, dictionary. Recently, several "signal-space" greedy methods have been proposed to address signal recovery in this setting. However, such methods inherently rely on the existence of fast and accurate projections which allow one to identify the most relevant atoms in a dictionary for any given signal, up to a very strict accuracy. When the dictionary is highly overcomplete, no such projections are currently known; the requirements on such projections do not even hold for incoherent or well-behaved dictionaries. In this work, we provide an alternate analysis for signal space greedy methods which enforce assumptions on these projections which hold in several settings including those when the dictionary is incoherent or structurally coherent. These results align more closely with traditional results in the standard CoSa literature and improve upon previous work in the signal space setting.
1309.2677
Language change in a multiple group society
physics.soc-ph cs.SI
The processes leading to change in languages are manifold. In order to reduce ambiguity in the transmission of information, agreement on a set of conventions for recurring problems is favored. In addition to that, speakers tend to use particular linguistic variants associated with the social groups they identify with. The influence of other groups propagating across the speech community as new variant forms sustains the competition between linguistic variants. With the utterance selection model, an evolutionary description of language change, Baxter et al. [Phys. Rev. E 73, 046118 (2006)] have provided a mathematical formulation of the interactions inside a group of speakers, exploring the mechanisms that lead to or inhibit the fixation of linguistic variants. In this paper, we take the utterance selection model one step further by describing a speech community consisting of multiple interacting groups. Tuning the interaction strength between groups allows us to gain deeper understanding about the way in which linguistic variants propagate and how their distribution depends on the group partitioning. Both for the group size and the number of groups we find scaling behaviors with two asymptotic regimes. If groups are strongly connected, the dynamics is that of the standard utterance selection model, whereas if their coupling is weak, the magnitude of the latter along with the system size governs the way consensus is reached. Furthermore, we find that a high influence of the interlocutor on a speaker's utterances can act as a counterweight to group segregation.
1309.2679
Caracterizando la Web Chilena
cs.SI
This article presents a characterization of the web space from Chile in 2007. The characterization shows distributions of sites and domains, analysis of document content and server configuration. In addition, the network structure of the chilean Web is analyzed, determining components based on hyperlink structure at the document and site levels. Original Abstract: En este art\'iculo se muestra una caracterizaci\'on del espacio web de Chile para el a\~no 2007. Se muestran distribuciones de sitios y dominios, caracterizaci\'on del contenido en base a tipos de documento, asi como configuraci\'on de los servidores. Se estudia la estructura de la red creada mediante hiperv\'inculos en los documentos y c\'omo las diferentes componentes de esta estructura var\'ian cuando los hiperv\'inculos son agregados a nivel de sitios.
1309.2687
CrowdPlanner: A Crowd-Based Route Recommendation System
cs.DB
CrowdPlanner -- a novel crowd-based route recommendation system -- has been developed, which asks human workers to evaluate candidate routes recommended by different sources and methods, and determines the best route based on the feedback of these workers. Our system addresses two critical issues in its core components: a) the task generation component generates a series of informative and concise questions with optimized ordering for a given candidate route set, so that workers find them comfortable and easy to answer; and b) the worker selection component utilizes a set of selection criteria and an efficient algorithm to find the most eligible workers to answer the questions with high accuracy.
1309.2690
Energy Efficient MAC Protocols for Wireless Sensor Network: A Survey
cs.IT cs.NI math.IT
Wireless Sensor Network (WSN) is an attractive choice for a variety of applications as no wired infrastructure is needed. Other wireless networks are not as energy constrained as WSNs, because they may be plugged into the mains supply or equipped with batteries that are rechargeable and replaceable. Among others, one of the main sources of energy depletion in WSN is communications controlled by the Medium Access Control (MAC) protocols. An extensive survey of energy efficient MAC protocols is presented in this article. We categorise WSN MAC protocols in the following categories: controlled access (CA), random access (RA), slotted protocols (SP) and hybrid protocols (HP). We further discuss how energy efficient MAC protocols have developed from fixed sleep/wake cycles through adaptive to dynamic cycles, thus becoming more responsive to traffic load variations. Finally we present open research questions on MAC layer design for WSNs in terms of energy efficiency.
1309.2693
A Conflict-Based Path-Generation Heuristic for Evacuation Planning
cs.AI math.OC
Evacuation planning and scheduling is a critical aspect of disaster management and national security applications. This paper proposes a conflict-based path-generation approach for evacuation planning. Its key idea is to generate evacuation routes lazily for evacuated areas and to optimize the evacuation over these routes in a master problem. Each new path is generated to remedy conflicts in the evacuation and adds new columns and a new row in the master problem. The algorithm is applied to massive flood scenarios in the Hawkesbury-Nepean river (West Sydney, Australia) which require evacuating on the order of 70,000 persons. The proposed approach reduces the number of variables from 4,500,000 in a Mixed Integer Programming (MIP) formulation to 30,000 in the case study. With this approach, realistic evacuation scenarios can be solved near-optimally in real time, supporting evacuation planning in strategic, tactical, and operational environments.
1309.2712
On Block Security of Regenerating Codes at the MBR Point for Distributed Storage Systems
cs.IT math.CO math.IT
A passive adversary can eavesdrop stored content or downloaded content of some storage nodes, in order to learn illegally about the file stored across a distributed storage system (DSS). Previous work in the literature focuses on code constructions that trade storage capacity for perfect security. In other words, by decreasing the amount of original data that it can store, the system can guarantee that the adversary, which eavesdrops up to a certain number of storage nodes, obtains no information (in Shannon's sense) about the original data. In this work we introduce the concept of block security for DSS and investigate minimum bandwidth regenerating (MBR) codes that are block secure against adversaries of varied eavesdropping strengths. Such MBR codes guarantee that no information about any group of original data units up to a certain size is revealed, without sacrificing the storage capacity of the system. The size of such secure groups varies according to the number of nodes that the adversary can eavesdrop. We show that code constructions based on Cauchy matrices provide block security. The opposite conclusion is drawn for codes based on Vandermonde matrices.
1309.2721
Asymptotically Optimal Beamforming for Video Streaming in Multi-Antenna Interference Networks
cs.IT math.IT
In this paper, we consider queue-aware beamforming control for video streaming applications in multi-antenna interference networks. Using the heavy traffic approximation technique, we first derive the diffusion limit for the discrete time queuing system. Based on the diffusion limit, we formulate an infinite horizon ergodic control problem to minimize the average power costs of the base stations subject to the constraints on the playback interruption costs and buffer overflow costs of the mobile users. To deal with the queue coupling challenge, we utilize the weak interference coupling property in the network to derive a closed-form approximate value function of the optimality equation as well as the associated error bound using perturbation analysis. Based on the closed-form approximate value function, we propose a low complexity queue-aware beamforming control algorithm, which is asymptotically optimal for sufficiently small cross-channel path gain. Finally, the proposed scheme is compared with various baselines through simulations and it is shown that significant performance gain can be achieved.
1309.2747
Approximate Counting CSP Solutions Using Partition Function
cs.AI
We propose a new approximate method for counting the number of solutions of a constraint satisfaction problem (CSP). The method derives from the partition function: by introducing the free energy and capturing the relationship between the probabilities of variables and constraints, it reduces the computation to the marginal probabilities. It first obtains the marginal probabilities using belief propagation, and then computes the number of solutions from the partition function. This allows us to plug the marginal probabilities directly into the partition function and efficiently count the number of solutions of a CSP. The experimental results show that our method can solve both random and structured problems efficiently.
1309.2752
Robust Periocular Recognition By Fusing Sparse Representations of Color and Geometry Information
cs.CV
In this paper, we propose a re-weighted elastic net (REN) model for biometric recognition. The new model is applied to data separated into geometric and color spatial components. The geometric information is extracted using a fast cartoon-texture decomposition model based on a dual formulation of the total variation norm, allowing us to carry information about the overall geometry of images. Color components are defined using linear and nonlinear color spaces, namely the red-green-blue (RGB), chromaticity-brightness (CB) and hue-saturation-value (HSV). Next, according to a Bayesian fusion scheme, sparse representations for classification purposes are obtained. The scheme is numerically solved using a gradient projection (GP) algorithm. In the empirical validation of the proposed model, we have chosen the periocular region, which is an emerging trait known for its robustness against low quality data. Our results were obtained on the publicly available UBIRIS.v2 data set and show consistent improvements in recognition effectiveness when compared to related state-of-the-art techniques.
1309.2765
Enhancements of Multi-class Support Vector Machine Construction from Binary Learners using Generalization Performance
cs.LG stat.ML
We propose several novel methods for enhancing multi-class SVMs by applying the generalization performance of binary classifiers as the core idea. This concept is applied to the existing algorithms, i.e., the Decision Directed Acyclic Graph (DDAG), the Adaptive Directed Acyclic Graph (ADAG), and Max Wins. Although previous approaches have attempted to use information such as the margin size and the number of support vectors as performance estimators for binary SVMs, these measures may not accurately reflect the actual performance of the binary SVMs. We show that the generalization ability evaluated via a cross-validation mechanism is more suitable for directly extracting the actual performance of binary SVMs. Our methods are built around this performance measure, and each of them is crafted to overcome the weakness of the previous algorithm. The proposed methods include the Reordering Adaptive Directed Acyclic Graph (RADAG), Strong Elimination of the classifiers (SE), Weak Elimination of the classifiers (WE), and Voting based Candidate Filtering (VCF). Experimental results demonstrate that our methods give significantly higher accuracy than all of the traditional ones. In particular, WE provides significantly superior results compared to Max Wins, which is recognized as the state-of-the-art algorithm, in terms of both accuracy and classification speed, being on average two times faster.
1309.2796
Decision Trees for Function Evaluation - Simultaneous Optimization of Worst and Expected Cost
cs.DS cs.AI cs.LG
In several applications of automatic diagnosis and active learning a central problem is the evaluation of a discrete function by adaptively querying the values of its variables until the values read uniquely determine the value of the function. In general, the process of reading the value of a variable might involve some cost, computational or even a fee to be paid for the experiment required for obtaining the value. This cost should be taken into account when deciding the next variable to read. The goal is to design a strategy for evaluating the function incurring little cost (in the worst case or in expectation according to a prior distribution on the possible variables' assignments). Our algorithm builds a strategy (decision tree) which attains a logarithmic approximation simultaneously for the expected and worst cost spent. This is best possible under the assumption that $P \neq NP.$
1309.2797
Revealing the intricate effect of collaboration on innovation
physics.soc-ph cs.DL cs.SI physics.data-an
We study the Japan and U.S. patent records of several decades to demonstrate the effect of collaboration on innovation. We find that statistically inventor teams slightly outperform solo inventors while company teams perform equally well as solo companies. By tracking the performance record of individual teams we find that inventor teams' performance generally degrades with more repeat collaborations. Though company teams' performance displays strongly bursty behavior, long-term collaboration does not significantly help innovation at all. To systematically study the effect of repeat collaboration, we define the repeat collaboration number of a team as the average number of collaborations over all the teammate pairs. We find that mild repeat collaboration improves the performance of Japanese inventor teams and U.S. company teams. Yet, excessive repeat collaboration does not significantly help innovation at both the inventor and company levels in both countries. To control for unobserved heterogeneity, we perform a detailed regression analysis and the results are consistent with our simple observations. The presented results reveal the intricate effect of collaboration on innovation, which may also be observed in other creative projects.
1309.2805
Containing epidemic outbreaks by message-passing techniques
physics.soc-ph cond-mat.dis-nn cs.SI q-bio.PE
The problem of targeted network immunization can be defined as the one of finding a subset of nodes in a network to immunize or vaccinate in order to minimize a tradeoff between the cost of vaccination and the final (stationary) expected infection under a given epidemic model. Although computing the expected infection is a hard computational problem, simple and efficient mean-field approximations have been put forward in the literature in recent years. The optimization problem can be recast into a constrained one in which the constraints enforce local mean-field equations describing the average stationary state of the epidemic process. For a wide class of epidemic models, including the susceptible-infected-removed and the susceptible-infected-susceptible models, we define a message-passing approach to network immunization that allows us to study the statistical properties of epidemic outbreaks in the presence of immunized nodes as well as to find (nearly) optimal immunization sets for a given choice of parameters and costs. The algorithm scales linearly with the size of the graph and it can be made efficient even on large networks. We compare its performance with topologically based heuristics, greedy methods, and simulated annealing.
1309.2819
Stochastic processes with random contexts: a characterization, and adaptive estimators for the transition probabilities
math.PR cs.IT math.IT math.ST stat.TH
This paper introduces the concept of random context representations for the transition probabilities of a finite-alphabet stochastic process. Processes with these representations generalize context tree processes (a.k.a. variable length Markov chains), and are proven to coincide with processes whose transition probabilities are almost surely continuous functions of the (infinite) past. This is similar to a classical result by Kalikow about continuous transition probabilities. Existence and uniqueness of a minimal random context representation are proven, and an estimator of the transition probabilities based on this representation is shown to have very good "pastwise adaptivity" properties. In particular, it achieves minimax performance, up to logarithmic factors, for binary renewal processes with bounded $2+\gamma$ moments.
1309.2827
Geometrical aspects of quantum walks on random two-dimensional structures
quant-ph cond-mat.stat-mech cs.IT math.IT
We study the transport properties of continuous-time quantum walks (CTQW) over finite two-dimensional structures with a given number of randomly placed bonds and with different aspect ratios (AR). Here, we focus on the transport from, say, the left side to the right side of the structure where absorbing sites are placed. We do so by analyzing the long-time average of the survival probability of CTQW. We compare the results to the classical continuous-time random walk case (CTRW). For small AR (landscape configurations) we observe only small differences between the quantum and the classical transport properties, i.e., roughly the same number of bonds is needed to facilitate the transport. However, with increasing AR (portrait configurations) a much larger number of bonds is needed in the CTQW case than in the CTRW case. While for CTRW the number of bonds needed decreases when going from small AR to large AR, for CTQW this number is large for small AR, has a minimum for the square configuration, and increases again for increasing AR. We corroborate our findings for large AR by showing that the corresponding quantum eigenstates are strongly localized in situations in which the transport is facilitated in the CTRW case.
1309.2848
High-dimensional cluster analysis with the Masked EM Algorithm
q-bio.QM cs.LG q-bio.NC stat.AP
Cluster analysis faces two problems in high dimensions: first, the `curse of dimensionality' that can lead to overfitting and poor generalization performance; and second, the sheer time taken for conventional algorithms to process large amounts of high-dimensional data. In many applications, only a small subset of features provide information about the cluster membership of any one data point, however this informative feature subset may not be the same for all data points. Here we introduce a `Masked EM' algorithm for fitting mixture of Gaussians models in such cases. We show that the algorithm performs close to optimally on simulated Gaussian data, and in an application of `spike sorting' of high channel-count neuronal recordings.
1309.2853
General Purpose Textual Sentiment Analysis and Emotion Detection Tools
cs.CL
Textual sentiment analysis and emotion detection consists in retrieving the sentiment or emotion carried by a text or document. This task can be useful in many domains: opinion mining, prediction, feedback, etc. However, building a general purpose tool for doing sentiment analysis and emotion detection raises a number of issues, theoretical issues like the dependence on the domain or on the language but also practical issues like the emotion representation for interoperability. In this paper we present our sentiment/emotion analysis tools, the way we propose to circumvent the difficulties and the applications they are used for.
1309.2870
Analytical Framework of LDGM-based Iterative Quantization with Decimation
cs.IT math.IT
While iterative quantizers based on low-density generator-matrix (LDGM) codes have been shown to be able to achieve near-ideal distortion performance with comparatively moderate block length and computational complexity requirements, their analysis remains difficult due to the presence of decimation steps. In this paper, considering the use of LDGM-based quantizers in a class of symmetric source coding problems, with the alphabet being either binary or non-binary, it is proved rigorously that, as long as the degree distribution satisfies certain conditions that can be evaluated with density evolution (DE), the belief propagation (BP) marginals used in the decimation step have vanishing mean-square error compared to the exact marginals as the block length and iteration count go to infinity, which potentially allows near-ideal distortion performances to be achieved. This provides a sound theoretical basis for the degree distribution optimization methods previously proposed in the literature and already found to be effective in practice.
1309.2900
Mining for Spatially-Near Communities in Geo-Located Social Networks
cs.SI physics.soc-ph
Current approaches to community detection in social networks often ignore the spatial location of the nodes. In this paper, we look to extract spatially-near communities in a social network. We introduce a new metric to measure the quality of a community partition in a geo-located social network, called "spatially-near modularity": a value that increases based on aspects of the network structure but decreases based on the distance between nodes in the communities. We then look to find an optimal partition with respect to this measure, which should be an "ideal" community with respect to both social ties and geographic location. Though this is an NP-hard problem, we introduce two heuristic algorithms that attempt to maximize this measure and outperform non-geographic community finding by an order of magnitude. Applications to counter-terrorism are also discussed.
1309.2915
Randomized Quantization and Source Coding with Constrained Output Distribution
cs.IT math.IT
This paper studies fixed-rate randomized vector quantization under the constraint that the quantizer's output has a given fixed probability distribution. A general representation of randomized quantizers that includes the common models in the literature is introduced via appropriate mixtures of joint probability measures on the product of the source and reproduction alphabets. Using this representation and results from optimal transport theory, the existence of an optimal (minimum distortion) randomized quantizer having a given output distribution is shown under various conditions. For sources with densities and the mean square distortion measure, it is shown that this optimum can be attained by randomizing quantizers having convex codecells. For stationary and memoryless source and output distributions a rate-distortion theorem is proved, providing a single-letter expression for the optimum distortion in the limit of large block-lengths.
1309.2920
Evolutionary Information Diffusion over Social Networks
cs.GT cs.SI physics.soc-ph
Social networks have become ubiquitous in our daily life and, as such, have attracted great research interest recently. A key challenge is that they are of extremely large scale with tremendous information flow, creating the phenomenon of "Big Data". Under such circumstances, understanding information diffusion over social networks has become an important research issue. Most existing works on information diffusion analysis are based either on network structure modeling or on an empirical approach with dataset mining. However, information diffusion is also heavily influenced by network users' decisions, actions and socio-economic connections, which is generally ignored in existing works. In this paper, we propose an evolutionary game theoretic framework to model the dynamic information diffusion process in social networks. Specifically, we analyze the framework in uniform-degree and non-uniform-degree networks and derive closed-form expressions for the evolutionarily stable network states. Moreover, information diffusion over two special networks, the Erd\H{o}s-R\'enyi random network and the Barab\'asi-Albert scale-free network, is also highlighted. To verify our theoretical analysis, we conduct experiments using both synthetic networks and a real-world Facebook network, as well as real-world information spreading datasets from Twitter and Memetracker. Experiments show that the proposed game theoretic framework is effective and practical in modeling social network users' information forwarding behaviors.
1309.2963
A Scalable Heuristic for Viral Marketing Under the Tipping Model
cs.SI physics.soc-ph
In a "tipping" model, each node in a social network, representing an individual, adopts a property or behavior if a certain number of his incoming neighbors currently exhibit the same. In viral marketing, a key problem is to select an initial "seed" set from the network such that the entire network adopts any behavior given to the seed. Here we introduce a method for quickly finding seed sets that scales to very large networks. Our approach finds a set of nodes that guarantees spreading to the entire network under the tipping model. After experimentally evaluating 31 real-world networks, we found that our approach often finds seed sets that are several orders of magnitude smaller than the population size and outperform nodal centrality measures in most cases. In addition, our approach scales well - on a Friendster social network consisting of 5.6 million nodes and 28 million edges we found a seed set in under 3.6 hours. Our experiments also indicate that our algorithm provides small seed sets even if high-degree nodes are removed. Lastly, we find that highly clustered local neighborhoods, together with dense network-wide community structures, suppress a trend's ability to spread under the tipping model.
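The tipping process the abstract builds on can be sketched directly (a minimal illustration, not the paper's seed-selection heuristic; the toy graph and thresholds below are hypothetical):

```python
def tipping_spread(neighbors_in, threshold, seeds):
    """Simulate the tipping model: a node adopts once at least
    threshold[v] of its incoming neighbors have adopted."""
    adopted = set(seeds)
    changed = True
    while changed:
        changed = False
        for v, ins in neighbors_in.items():
            if v not in adopted and sum(u in adopted for u in ins) >= threshold[v]:
                adopted.add(v)
                changed = True
    return adopted

# Toy directed graph: lists give each node's *incoming* neighbors.
g = {1: [], 2: [1], 3: [1, 2], 4: [2, 3]}
thr = {1: 1, 2: 1, 3: 2, 4: 1}
print(sorted(tipping_spread(g, thr, {1})))  # seed {1} tips the whole network
```

A seed set "works" in the paper's sense exactly when this spread returns every node; the paper's contribution is finding small sets with that guarantee quickly.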
1309.3006
The Classification Accuracy of Multiple-Metric Learning Algorithm on Multi-Sensor Fusion
cs.CV
This paper focuses on two main issues. The first is the impact of similarity search on learning the training sample in metric space and on searching based on supervised learning classification. In particular, four metric-space searches based on spatial information are introduced: the Chebyshev Distance (CD), Bray Curtis Distance (BCD), Manhattan Distance (MD) and Euclidean Distance (ED) classifiers. The second issue investigates the effect of combining multi-sensor images on supervised learning classification accuracy. QuickBird multispectral (MS) and panchromatic (PAN) data have been used in this study to demonstrate the enhancement and accuracy assessment of the fused image over the original images. The supervised classification results of the fused image were better than those of the original MS QuickBird data, and the ED classifier yielded the best results among the four.
1309.3014
Hypercontractivity of spherical averages in Hamming space
math.PR cs.IT math.CO math.FA math.IT
Consider the linear space of functions on the binary hypercube and the linear operator $S_\delta$ acting by averaging a function over a Hamming sphere of radius $\delta n$ around every point. It is shown that this operator has a dimension-independent bound on the norm $L_p \to L_2$ with $p = 1+(1-2\delta)^2$. This result evidently parallels a classical estimate of Bonami and Gross for $L_p \to L_q$ norms for the operator of convolution with a Bernoulli noise. The estimate for $S_\delta$ is harder to obtain since the latter is neither a part of a semigroup, nor a tensor power. The result is shown by a detailed study of the eigenvalues of $S_\delta$ and $L_p\to L_2$ norms of the Fourier multiplier operators $\Pi_a$ with symbol equal to a characteristic function of the Hamming sphere of radius $a$ (in the notation common in boolean analysis $\Pi_a f=f^{=a}$, where $f^{=a}$ is a degree-$a$ component of function $f$). A sample application of the result is given: Any set $A\subset \mathbb{F}_2^n$ with the property that $A+A$ contains a large portion of some Hamming sphere (counted with multiplicity) must have cardinality a constant multiple of $2^n$.
1309.3029
On the Chi square and higher-order Chi distances for approximating f-divergences
cs.IT math.IT
We report closed-form formulas for calculating the Chi square and higher-order Chi distances between statistical distributions belonging to the same exponential family with affine natural space, and instantiate those formulas for the Poisson and isotropic Gaussian families. We then describe an analytic formula for the $f$-divergences based on Taylor expansions and relying on an extended class of Chi-type distances.
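For the Poisson family, the chi-square divergence does admit a simple closed form, which can be checked against the defining series; the sketch below uses the standard identity $\chi^2(p:q)=\sum_x p(x)^2/q(x)-1 = e^{\lambda_1^2/\lambda_2 - 2\lambda_1 + \lambda_2}-1$ (a known result for this family, not the paper's full derivation):

```python
import math

def chi2_poisson_closed(l1, l2):
    # Closed form of chi^2(Poisson(l1) : Poisson(l2)):
    # sum_x p(x)^2/q(x) - 1 = exp(l1^2/l2 - 2*l1 + l2) - 1
    return math.exp(l1 * l1 / l2 - 2 * l1 + l2) - 1.0

def chi2_poisson_numeric(l1, l2, terms=60):
    # Truncated defining series, for comparison.
    def pmf(lam, x):
        return math.exp(-lam) * lam ** x / math.factorial(x)
    return sum(pmf(l1, x) ** 2 / pmf(l2, x) for x in range(terms)) - 1.0

print(chi2_poisson_closed(2.0, 3.0), chi2_poisson_numeric(2.0, 3.0))
```

The two values agree to floating-point precision, and the divergence vanishes when the two rates coincide.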
1309.3039
How Relevant Are Chess Composition Conventions?
cs.AI
Composition conventions are guidelines used by human composers in composing chess problems. They are particularly significant in composition tournaments. Examples include, not having any check in the first move of the solution and not dressing up the board with unnecessary pieces. Conventions are often associated or even directly conflated with the overall aesthetics or beauty of a composition. Using an existing experimentally-validated computational aesthetics model for three-move mate problems, we analyzed sets of computer-generated compositions adhering to at least 2, 3 and 4 comparable conventions to test if simply conforming to more conventions had a positive effect on their aesthetics, as is generally believed by human composers. We found slight but statistically significant evidence that it does, but only to a point. We also analyzed human judge scores of 145 three-move mate problems composed by humans to see if they had any positive correlation with the computational aesthetic scores of those problems. We found that they did not. These seemingly conflicting findings suggest two main things. First, the right amount of adherence to composition conventions in a composition has a positive effect on its perceived aesthetics. Second, human judges either do not look at the same conventions related to aesthetics in the model used or emphasize others that have less to do with beauty as perceived by the majority of players, even though they may mistakenly consider their judgements beautiful in the traditional, non-esoteric sense. Human judges may also be relying significantly on personal tastes as we found no correlation between their individual scores either.
1309.3060
On SAT representations of XOR constraints
cs.CC cs.AI
We study the representation of systems S of linear equations over the two-element field (aka xor- or parity-constraints) via conjunctive normal forms F (boolean clause-sets). First we consider the problem of finding an "arc-consistent" representation ("AC"), meaning that unit-clause propagation will fix all forced assignments for all possible instantiations of the xor-variables. Our main negative result is that there is no polysize AC-representation in general. On the positive side we show that finding such an AC-representation is fixed-parameter tractable (fpt) in the number of equations. Then we turn to a stronger criterion of representation, namely propagation completeness ("PC") --- while AC only covers the variables of S, now all the variables in F (the variables in S plus auxiliary variables) are considered for PC. We show that the standard translation actually yields a PC representation for one equation, but fails so for two equations (in fact arbitrarily badly). We show that with a more intelligent translation we can also easily compute a translation to PC for two equations. We conjecture that computing a representation in PC is fpt in the number of equations.
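The size issue the abstract addresses is already visible in the auxiliary-free direct encoding of a single parity constraint, which needs one clause per odd-parity assignment, i.e. $2^{n-1}$ clauses (a sketch of that baseline encoding only; the paper's translations with auxiliary variables are not reproduced here):

```python
from itertools import product

def xor_to_cnf(n):
    """Direct CNF for x1 xor ... xor xn = 0: one clause forbidding
    each odd-parity assignment (2^(n-1) clauses, no auxiliary variables).
    Clauses use DIMACS-style signed integer literals."""
    clauses = []
    for bits in product([False, True], repeat=n):
        if sum(bits) % 2 == 1:  # odd parity violates the constraint
            clauses.append(tuple(-(i + 1) if b else (i + 1)
                                 for i, b in enumerate(bits)))
    return clauses

def satisfies(clauses, bits):
    """True iff the assignment satisfies every clause."""
    return all(any((l > 0) == bits[abs(l) - 1] for l in c) for c in clauses)
```

A brute-force check confirms the CNF is satisfied exactly by the even-parity assignments; the exponential clause count is what motivates splitting long XORs via auxiliary variables in the first place.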
1309.3103
Temporal Autoencoding Improves Generative Models of Time Series
stat.ML cs.LG
Restricted Boltzmann Machines (RBMs) are generative models which can learn useful representations from samples of a dataset in an unsupervised fashion. They have been widely employed as an unsupervised pre-training method in machine learning. RBMs have been modified to model time series in two main ways: The Temporal RBM stacks a number of RBMs laterally and introduces temporal dependencies between the hidden layer units; The Conditional RBM, on the other hand, considers past samples of the dataset as a conditional bias and learns a representation which takes these into account. Here we propose a new training method for both the TRBM and the CRBM, which enforces the dynamic structure of temporal datasets. We do so by treating the temporal models as denoising autoencoders, considering past frames of the dataset as corrupted versions of the present frame and minimizing the reconstruction error of the present data by the model. We call this approach Temporal Autoencoding. This leads to a significant improvement in the performance of both models in a filling-in-frames task across a number of datasets. The error reduction for motion capture data is 56\% for the CRBM and 80\% for the TRBM. Taking the posterior mean prediction instead of single samples further improves the model's estimates, decreasing the error by as much as 91\% for the CRBM on motion capture data. We also trained the model to perform forecasting on a large number of datasets and have found TA pretraining to consistently improve the performance of the forecasts. Furthermore, by looking at the prediction error across time, we can see that this improvement reflects a better representation of the dynamics of the data as opposed to a bias towards reconstructing the observed data on a short time scale.
1309.3117
Convex relaxations of structured matrix factorizations
cs.LG math.OC
We consider the factorization of a rectangular matrix $X$ into a positive linear combination of rank-one factors of the form $u v^\top$, where $u$ and $v$ belong to certain sets $\mathcal{U}$ and $\mathcal{V}$, that may encode specific structures regarding the factors, such as positivity or sparsity. In this paper, we show that computing the optimal decomposition is equivalent to computing a certain gauge function of $X$ and we provide a detailed analysis of these gauge functions and their polars. Since these gauge functions are typically hard to compute, we present semi-definite relaxations and several algorithms that may recover approximate decompositions with approximation guarantees. We illustrate our results with simulations on finding decompositions with elements in $\{0,1\}$. As side contributions, we present a detailed analysis of variational quadratic representations of norms as well as a new iterative basis pursuit algorithm that can deal with inexact first-order oracles.
1309.3126
Distributed Business Processes - A Framework for Modeling and Execution
cs.MA cs.SE
Commercially available business process management systems (BPMS) still struggle to support organizations in enacting their business processes in an effective and efficient way. Current BPMS are, in general, based on BPMN 2.0 and/or BPEL. It is well known that these approaches have some restrictions regarding modeling and the immediate transfer of a model into executable code. Recently, a method for modeling and executing business processes, named subject-oriented business process management (S-BPM), has gained attention. This methodology facilitates modeling of any business process using only five symbols and allows direct execution based on such models. Furthermore, the methodology has a strong theoretical and formal basis for realizing distributed systems: any process is defined as a network of independent and distributed agents - i.e., instances of subjects - which coordinate work through the exchange of messages. In this work, we present a framework and a prototype based on off-the-shelf technologies as a possible realization of the S-BPM methodology. We demonstrate the principal architectural concept; these results should also stimulate a discussion about current BPMS and their underlying concepts.
1309.3132
Combination of Multiple Bipartite Ranking for Web Content Quality Evaluation
cs.IR
Web content quality estimation is crucial to various web content processing applications. Our previous work applied Bagging + C4.5, the combination of many point-wise ranking models, to achieve the best results in the ECML/PKDD Discovery Challenge 2010. In this paper, we combine multiple pair-wise bipartite ranking learners to solve the multi-partite ranking problem for web quality estimation. In the encoding stage, we present ternary encoding and binary coding, extending each rank value to $L - 1$ binary labels (where $L$ is the number of distinct ranking values). For decoding, we discuss the combination of multiple ranking results from multiple bipartite ranking models with predefined weighting and adaptive weighting. Experiments on the ECML/PKDD 2010 Discovery Challenge datasets show that \textit{binary coding} + \textit{predefined weighting} yields the highest performance of all four combinations and, furthermore, is better than the best results reported in the ECML/PKDD 2010 Discovery Challenge competition.
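The reduction of an $L$-valued rank to $L-1$ binary problems can be sketched with the standard threshold-based decomposition (an illustration of the general idea only; the abstract does not spell out the paper's exact ternary/binary codings, so the encoding and the count-based decoding below are assumptions):

```python
def encode_rank(y, L):
    """Reduce a rank y in {1..L} to L-1 binary labels:
    the t-th label says whether y exceeds threshold t."""
    return [int(y > t) for t in range(1, L)]

def decode_rank(binary_preds):
    """Recover a rank from L-1 binary predictions by counting
    how many thresholds are exceeded."""
    return 1 + sum(binary_preds)

print(encode_rank(3, 5))  # [1, 1, 0, 0]
print(decode_rank([1, 1, 0, 0]))  # 3
```

Each binary label defines one bipartite ranking problem, and combining the $L-1$ binary predictions (here by a simple count; the paper weights them) yields the multi-partite rank.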
1309.3139
Exploiting Interference for Efficient Distributed Computation in Cluster-based Wireless Sensor Networks
cs.DC cs.IT math.IT
This invited paper presents some novel ideas on how to enhance the performance of consensus algorithms in distributed wireless sensor networks, when communication costs are considered. Of particular interest are consensus algorithms that exploit the broadcast property of the wireless channel to boost the performance in terms of convergence speeds. To this end, we propose a novel clustering based consensus algorithm that exploits interference for computation, while reducing the energy consumption in the network. The resulting optimization problem is a semidefinite program, which can be solved offline prior to system startup.
1309.3147
Improved Stability Design of Interconnected Distributed Generation Resources
cs.SY
This work provides a design method for achieving a specified level of stability for inverter-based interconnected distributed generation. The stability of parallel-connected distributed energy resources is determined from a linearized state-space model of the inverter dynamics that includes the admittance matrix of the interconnecting distribution lines. Each inverter uses a localized droop control scheme with the associated voltage and frequency measurements obtained through the application of an enhanced phase locked loop. Previous work on this topic has focused on single inverters connected to an infinite bus without modeling of delays from a phase locked loop implementation. The proposed method overcomes both of these limitations of previous research. A detailed large-signal simulation of a three-bus interconnected power system is analyzed under two different network admittance values. Results confirm the effectiveness of the proposed stability design method.